By Which It May Be Judged

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T04:26:23.835Z · LW · GW · Legacy · 941 comments

Followup to: Mixed Reference: The Great Reductionist Project

Humans need fantasy to be human.

"Tooth fairies? Hogfathers? Little—"

Yes. As practice. You have to start out learning to believe the little lies.

"So we can believe the big ones?"

Yes. Justice. Mercy. Duty. That sort of thing.

"They're not the same at all!"

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

- Susan and Death, in Hogfather by Terry Pratchett

Suppose three people find a pie - that is, three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory. Zaire wants the entire pie; Yancy thinks that 1/3 each is fair; and Xannon thinks that fair would be taking into equal account everyone's ideas about what is "fair".

I myself would say unhesitatingly that a third of the pie each is fair. "Fairness", as an ethical concept, can get a lot more complicated in more elaborate contexts. But in this simple context, a lot of other things that "fairness" could depend on, like work inputs, have been eliminated or made constant. Assuming no relevant conditions other than those already stated, "fairness" simplifies to the mathematical procedure of splitting the pie into equal parts; and when this logical function is run over physical reality, it outputs "1/3 for Zaire, 1/3 for Yancy, 1/3 for Xannon".

Or to put it another way - just like we get "If Oswald hadn't shot Kennedy, nobody else would've" by running a logical function over a true causal model - similarly, we can get the hypothetical 'fair' situation, whether or not it actually happens, by running the physical starting scenario through a logical function that describes what a 'fair' outcome would look like:
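As a minimal sketch of what "running the logical function over the scenario" could look like - an illustration only, not anything from the original post; the dict encoding and the name fair_split are invented for the example:

```python
# Illustrative sketch: 'fairness', in this stripped-down context, is just the
# logical function "divide equally", applied to a description of the scenario.
# The encoding and function name are assumptions made for this example.
from fractions import Fraction

def fair_split(scenario):
    """Return each claimant's share under equal division of the pie(s)."""
    share = Fraction(scenario["pies"], len(scenario["claimants"]))
    return {person: share for person in scenario["claimants"]}

scenario = {"pies": 1, "claimants": ["Zaire", "Yancy", "Xannon"]}
print(fair_split(scenario))
# -> {'Zaire': Fraction(1, 3), 'Yancy': Fraction(1, 3), 'Xannon': Fraction(1, 3)}
```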

So am I (as Zaire would claim) just assuming-by-authority that I get to have everything my way, since I'm not defining 'fairness' the way Zaire wants to define it?

No more than mathematicians are flatly ordering everyone to assume-without-proof that two different numbers can't have the same successor. For fairness to be what everyone thinks is "fair" would be entirely circular, structurally isomorphic to "Fzeem is what everyone thinks is fzeem"... or like trying to define the counting numbers as "whatever anyone thinks is a number". It only even looks coherent because everyone secretly already has a mental picture of "numbers" - because their brain already navigated to the referent.  But something akin to axioms is needed to talk about "numbers, as opposed to something else" in the first place. Even an inchoate mental image of "0, 1, 2, ..." implies the axioms no less than a formal statement - we can extract the axioms back out by asking questions about this rough mental image.

Similarly, the intuition that fairness has something to do with dividing up the pie equally, plays a role akin to secretly already having "0, 1, 2, ..." in mind as the subject of mathematical conversation. You need axioms, not as assumptions that aren't justified, but as pointers to what the heck the conversation is supposed to be about.

Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself - if somebody thinks of Euclidean geometry when you utter the sound "num-berz" they're not doing anything false, they're associating the sound to a different logical thingy. It's not about words with intrinsically rigid referential power, it's that the words are window dressing on the underlying entities. I want to talk about a particular logical entity, as it might be defined by either axioms or inchoate images, regardless of which word-sounds may be associated to it.  If you want to call that "rigid designation", that seems to me like adding a level of indirection; I don't care about the word 'fair' in the first place, I care about the logical entity of fairness.  (Or to put it even more sharply: since my ontology does not have room for physics, logic, plus designation, I'm not very interested in discussing this 'rigid designation' business unless it's being reduced to something else.)

Once issues of justice become more complicated and all the contextual variables get added back in, we might not be sure if a disagreement about 'fairness' reflects:

  1. The equivalent of a multiplication error within the same axioms - incorrectly dividing by 3.  (Or more complicatedly:  You might have a sophisticated axiomatic concept of 'equity', and incorrectly process those axioms to invalidly yield the assertion that, in a context where 2 of the 3 must starve and there's only enough pie for at most 1 person to survive, you should still divide the pie equally instead of flipping a 3-sided coin.  Where I'm assuming that this conclusion is 'incorrect', not because I disagree with it, but because it didn't actually follow from the axioms.)
  2. Mistaken models of the physical world fed into the function - mistakenly thinking there's 2 pies, or mistakenly thinking that Zaire has no subjective experiences and is not an object of ethical value.
  3. People associating different logical functions to the letters F-A-I-R, which isn't a disagreement about some common pinpointed variable, but just different people wanting different things.

There's a lot of people who feel that this picture leaves out something fundamental, especially once we make the jump from "fair" to the broader concept of "moral", "good", or "right".  And it's this worry about leaving-out-something-fundamental that I hope to address next...

...but please note, if we confess that 'right' lives in a world of physics and logic - because everything lives in a world of physics and logic - then we have to translate 'right' into those terms somehow.

And that is the answer Susan should have given - if she could talk about sufficiently advanced epistemology, sufficiently fast - to Death's entire statement:

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy. And yet — Death waved a hand. And yet you act as if there is some ideal order in the world, as if there is some ... rightness in the universe by which it may be judged.

"But!" Susan should've said.  "When we judge the universe we're comparing it to a logical referent, a sort of thing that isn't in the universe!  Why, it's just like looking at a heap of 2 apples and a heap of 3 apples on a table, and comparing their invisible product to the number 6 - there isn't any 6 if you grind up the whole table, even if you grind up the whole universe, but the product is still 6, physico-logically speaking."


If you require that Rightness be written on some particular great Stone Tablet somewhere - to be "a light that shines from the sky", outside people, as a different Terry Pratchett book put it - then indeed, there's no such Stone Tablet anywhere in our universe.

But there shouldn't be such a Stone Tablet, given standard intuitions about morality.  This follows from the Euthyphro Dilemma out of ancient Greece.

The original Euthyphro dilemma goes, "Is it pious because it is loved by the gods, or loved by the gods because it is pious?" The religious version goes, "Is it good because it is commanded by God, or does God command it because it is good?"

The standard atheist reply is:  "Would you say that it's an intrinsically good thing - even if the event has no further causal consequences which are good - to slaughter babies or torture people, if that's what God says to do?"

If we can't make it good to slaughter babies by tweaking the state of God, then morality doesn't come from God; so goes the standard atheist argument.

But if you can't make it good to slaughter babies by tweaking the physical state of anything - if we can't imagine a world where some great Stone Tablet of Morality has been physically rewritten, and what is right has changed - then this is telling us that...

(drumroll)

...what's "right" is a logical thingy rather than a physical thingy, that's all.  The mark of a logical validity is that we can't concretely visualize a coherent possible world where the proposition is false.

And I mention this in hopes that I can show that it is not moral anti-realism to say that moral statements take their truth-value from logical entities.  Even in Ancient Greece, philosophers implicitly knew that 'morality' ought to be such an entity - that it couldn't be something you found when you ground the Universe to powder, because then you could resprinkle the powder and make it wonderful to kill babies - though they didn't know how to say what they knew.


There's a lot of people who still feel that Death would be right, if the universe were all physical; that the kind of dry logical entity I'm describing here, isn't sufficient to carry the bright alive feeling of goodness.

And there are others who accept that physics and logic is everything, but who - I think mistakenly - go ahead and also accept Death's stance that this makes morality a lie, or, in lesser form, that the bright alive feeling can't make it.  (Sort of like people who accept an incompatibilist theory of free will, also accept physics, and conclude with sorrow that they are indeed being controlled by physics.)

In case anyone is bored that I'm still trying to fight this battle, well, here's a quote from a recent Facebook conversation with a famous early transhumanist:

No doubt a "crippled" AI that didn't understand the existence or nature of first-person facts could be nonfriendly towards sentient beings... Only a zombie wouldn't value Heaven over Hell. For reasons we simply don't understand, the negative value and normative aspect of agony and despair is built into the nature of the experience itself. Non-reductionist? Yes, on a standard materialist ontology. But not IMO within a more defensible Strawsonian physicalism.

It would actually be quite surprisingly helpful for increasing the percentage of people who will participate meaningfully in saving the planet, if there were some reliably-working standard explanation for why physics and logic together have enough room to contain morality.  People who think that reductionism means we have to lie to our children, as Pratchett's Death advocates, won't be much enthused about the Center for Applied Rationality.  And there are a fair number of people out there who still advocate proceeding in the confidence of ineffable morality to construct sloppily designed AIs.

So far I don't know of any exposition that works reliably for the thesis that morality - including our intuitions about whether things really are justified, and so on - is preserved in the analysis into physics plus logic; that morality has been explained rather than explained away.  Nonetheless I shall now take another stab at it, starting with a simpler bright feeling:


When I see an unusually neat mathematical proof, unexpectedly short or surprisingly general, my brain gets a joyous sense of elegance.

There's presumably some functional slice through my brain that implements this emotion - some configuration subspace of spiking neural circuitry which corresponds to my feeling of elegance.  Perhaps I should say that elegance is merely about my brain switching on its elegance-signal?  But there are concepts like Kolmogorov complexity that give more formal meanings of "simple" than "Simple is whatever makes my brain feel the emotion of simplicity."  Anything you do to fool my brain wouldn't make the proof really elegant, not in that sense.  The emotion is not free of semantic content; we could build a correspondence theory for it and navigate to its logical+physical referent, and say:  "Sarah feels like this proof is elegant, and her feeling is true."  You could even say that certain proofs are elegant even if no conscious agent sees them.
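As a side illustration of that point (my own sketch, not from the post): Kolmogorov complexity itself is uncomputable, but compressed length is a crude computable stand-in for description length, so "simpler" can be given a meaning that never consults anyone's feeling of simplicity. The function name and toy strings below are assumptions for the example.

```python
# Crude proxy sketch: compressed byte-length as a stand-in for description
# length (true Kolmogorov complexity is uncomputable). Names are illustrative.
import random
import zlib

def description_length(s: str) -> int:
    """Length in bytes of a zlib-compressed encoding of s."""
    return len(zlib.compress(s.encode("utf-8")))

patterned = "ab" * 500                                     # highly regular
random.seed(0)
noisy = "".join(random.choice("ab") for _ in range(1000))  # irregular

# The regular string has the shorter description, regardless of how
# simple or elegant either one happens to feel to any particular brain.
print(description_length(patterned) < description_length(noisy))  # True
```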

My description of 'elegance' admittedly did invoke agent-dependent concepts like 'unexpectedly' short or 'surprisingly' general.  It's almost certainly true that with a different mathematical background, I would have different standards of elegance and experience that feeling on somewhat different occasions.  Even so, that still seems like moving around in a field of similar referents for the emotion - much more similar to each other than to, say, the distant cluster of 'anger'.

Rewiring my brain so that the 'elegance' sensation gets activated when I see mathematical proofs where the words have lots of vowels - that wouldn't change what is elegant.  Rather, it would make the feeling be about something else entirely; different semantics with a different truth-condition.

Indeed, it's not clear that this thought experiment is, or should be, really conceivable.  If all the associated computation is about vowels instead of elegance, then from the inside you would expect that to feel vowelly, not feel elegant...

...which is to say that even feelings can be associated with logical entities.  Though unfortunately not in any way that will feel like qualia if you can't read your own source code.  I could write out an exact description of your visual cortex's spiking code for 'blue' on paper, and it wouldn't actually look blue to you.  Still, on the higher level of description, it should seem intuitively plausible that if you tried rewriting the relevant part of your brain to count vowels, the resulting sensation would no longer have the content or even the feeling of elegance.  It would compute vowelliness, and feel vowelly.


My feeling of mathematical elegance is motivating; it makes me more likely to search for similar such proofs later and go on doing math.  You could construct an agent that tried to add more vowels instead, and if the agent asked itself why it was doing that, the resulting justification-thought wouldn't feel like because-it's-elegant, it would feel like because-it's-vowelly.

In the same sense, when you try to do what's right, you're motivated by things like (to yet again quote Frankena's list of terminal values):

"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."

If we reprogrammed you to count paperclips instead, it wouldn't feel like different things having the same kind of motivation behind it.  It wouldn't feel like doing-what's-right for a different guess about what's right.  It would feel like doing-what-leads-to-paperclips.

And I quoted the above list because the feeling of rightness isn't about implementing a particular logical function; it contains no mention of logical functions at all; in the environment of evolutionary ancestry nobody had heard of axiomatization; these feelings are about life, consciousness, etcetera.  If I could write out the whole truth-condition of the feeling in a way you could compute, you would still feel Moore's Open Question:  "I can see that this event is high-rated by logical function X, but is X really right?" - since you can't read your own source code and the description wouldn't be commensurate with your brain's native format.

"But!" you cry.  "But, is it really better to do what's right, than to maximize paperclips?"  Yes!  As soon as you start trying to cash out the logical function that gives betterness its truth-value, it will output "life, consciousness, etc. >B paperclips".  And if your brain were computing a different logical function instead, like makes-more-paperclips, it wouldn't feel better, it would feel moreclippy.

But is it really justified to keep our own sense of betterness?  Sure, and that's a logical fact - it's the objective output of the logical function corresponding to your experiential sense of what it means for something to be 'justified' in the first place.  This doesn't mean that Clippy the Paperclip Maximizer will self-modify to do only things that are justified; Clippy doesn't judge between self-modifications by computing justifications, but rather, computing clippyflurphs.

But isn't it arbitrary for Clippy to maximize paperclips?  Indeed; once you implicitly or explicitly pinpoint the logical function that gives judgments of arbitrariness their truth-value - presumably, revolving around the presence or absence of justifications - then this logical function will objectively yield that there's no justification whatsoever for maximizing paperclips (which is why I'm not going to do it) and hence that Clippy's decision is arbitrary. Conversely, Clippy finds that there's no clippyflurph for preserving life, and hence that it is unclipperiffic.  But unclipperifficness isn't arbitrariness any more than the number 17 is a right triangle; they're different logical entities pinned down by different axioms, and the corresponding judgments will have different semantic content and feel different.  If Clippy is architected to experience that-which-you-call-qualia, Clippy's feeling of clippyflurph will be structurally different from the way justification feels, not just red versus blue, but vision versus sound.

But surely one shouldn't praise the clippyflurphers rather than the just?  I quite agree; and as soon as you navigate referentially to the coherent logical entity that is the truth-condition of should - a function on potential actions and future states - it will agree with you that it's better to avoid the arbitrary than the unclipperiffic.  Unfortunately, this logical fact does not correspond to the truth-condition of any meaningful proposition computed by Clippy in the course of how it efficiently transforms the universe into paperclips, in much the same way that rightness plays no role in that-which-is-maximized by the blind processes of natural selection.

Where moral judgment is concerned, it's logic all the way down.  ALL the way down.  Any frame of reference where you're worried that it's really no better to do what's right than to maximize paperclips... well, that "really" part has a truth-condition (or what does the "really" mean?) and as soon as you write out the truth-condition you're going to end up with yet another ordering over actions or algorithms or meta-algorithms or something.  And since grinding up the universe won't and shouldn't yield any miniature '>' tokens, it must be a logical ordering.  And so whatever logical ordering it is you're worried about, it probably does produce 'life > paperclips' - but Clippy isn't computing that logical fact any more than your pocket calculator is computing it.
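A toy sketch of that last point (mine, not Eliezer's; the outcome encoding and both function names are invented for illustration): "better" and "clippier" are two different logical orderings over the same outcomes, and each agent acts only on the ordering its own decision procedure computes.

```python
# Toy illustration: two distinct logical orderings over the same outcomes.
# The encodings and numbers here are invented for the example.

def better(a, b):
    """The 'rightness' ordering, crudely cashed out as more lives preserved."""
    return a["lives"] > b["lives"]

def clippier(a, b):
    """Clippy's ordering: more paperclips."""
    return a["paperclips"] > b["paperclips"]

save_people = {"lives": 1000, "paperclips": 0}
make_clips = {"lives": 0, "paperclips": 10**6}

# The 'better' ordering does objectively output life > paperclips...
print(better(save_people, make_clips))    # True
# ...but Clippy's decision procedure only ever evaluates 'clippier',
# just as a pocket calculator never evaluates either function.
print(clippier(make_clips, save_people))  # True
```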

Logical facts have no power to directly affect the universe except when some part of the universe is computing them, and morality is (and should be) logic, not physics.

Which is to say:

The old wizard was staring at him, a sad look in his eyes. "I suppose I do understand now," he said quietly.

"Oh?" said Harry. "Understand what?"

"Voldemort," said the old wizard. "I understand him now at last. Because to believe that the world is truly like that, you must believe there is no justice in it, that it is woven of darkness at its core. I asked you why he became a monster, and you could give no reason. And if I could ask him, I suppose, his answer would be: Why not?"

They stood there gazing into each other's eyes, the old wizard in his robes, and the young boy with the lightning-bolt scar on his forehead.

"Tell me, Harry," said the old wizard, "will you become a monster?"

"No," said the boy, an iron certainty in his voice.

"Why not?" said the old wizard.

The young boy stood very straight, his chin raised high and proud, and said: "There is no justice in the laws of Nature, Headmaster, no term for fairness in the equations of motion. The universe is neither evil, nor good, it simply does not care. The stars don't care, or the Sun, or the sky. But they don't have to! We care! There is light in the world, and it is us!"

 

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Standard and Nonstandard Numbers"

Previous post: "Mixed Reference: The Great Reductionist Project"

941 comments

Comments sorted by top scores.

comment by TsviBT · 2012-12-10T07:37:46.789Z · LW(p) · GW(p)

Is this a fair summary?

The answer to the clever meta-moral question, “But why should we care about morality?” is just “Because when we say morality, we refer to that-which-we-care-about - and, not to belabor the point, but we care about what we care about. Whatever you think you care about, which isn’t morality, I’m calling that morality also. Precisely which things are moral and which are not is a difficult question - but there is no non-trivial meta-question.”

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-16T01:13:04.631Z · LW(p) · GW(p)

There is a non-trivial point in this summary, which is the meaning of "we." I could imagine a possible world in which the moral intuitions of humans diverge widely enough that there isn't anything that could reasonably be called a coherent extrapolated volition of humanity (and I worry that I already live there).

Replies from: Dues
comment by Dues · 2015-11-12T04:13:37.491Z · LW(p) · GW(p)

Humans value some things more than others. Survival is the bedrock human value (yourself, your family, your children, your species). Followed by things like pleasure and the lives of others and the lives of animals. Every human weighs the things a little differently, and we're all bad at the math. But on average most humans weigh the important things about the same. There is a reason Eliezer is able to keep going back to the example of saving a child.

comment by Alicorn · 2012-12-10T07:05:11.935Z · LW(p) · GW(p)

If we reprogrammed you to count paperclips instead, it wouldn't feel like different things having the same kind of motivation behind it. It wouldn't feel like doing-what's-right for a different guess about what's right. It would feel like doing-what-leads-to-paperclips.

Um, how do you know?

Replies from: chaosmosis, handoflixue, Luke_A_Somers, endoself
comment by chaosmosis · 2012-12-10T07:09:49.442Z · LW(p) · GW(p)

It would depend on exactly what we reprogrammed within you, I expect.

Replies from: Alicorn
comment by Alicorn · 2012-12-10T07:13:00.602Z · LW(p) · GW(p)

Exactly. I mean, you could probably make it have its own quale, but you could also make it not, and I don't see why that would be in question as long as we're postulating brain-reprogramming powers.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T07:42:34.473Z · LW(p) · GW(p)

Assume the subject of reprogramming is an existing human being, otherwise minimally altered by this reprogramming, i.e., we don't do anything that isn't necessary to switch their motivation to paperclips. So unless you do something gratuitously non-minimal like moving the whole decision-action system out of the range of introspective modeling, or cutting way down on the detail level of introspective modeling, or changing the empathic architecture for modeling hypothetical selves, the new person will experience themselves as having ineffable 'qualia' associated with the motivation to produce paperclips.

The only way to make it seem to them like their motivational quales hadn't changed over time would be to mess with the encoding of their previous memories of motivation, presumably in a structure-destroying way since the stored data and their introspectively exposed surfaces will not be naturally isomorphic. If you carry out the change to paperclip-motivation in the obvious way, cognitive comparisons of the retrieved memories to current thoughts will return 'unequal ineffable quales', and if the memories are visualized in different modalities from current thoughts, 'incomparable ineffable quales'.

Doing-what-leads-to-paperclips will also be a much simpler 'quale', both from the outside perspective looking at the complexity of cognitive data, and in terms of the internal experience of complexity - unless you pack an awful lot of detail into the question of what constitutes a more preferred paperclip. Otherwise, compared to the old days when you thought about justice and fairness, introspection will show that less questioning and uncertainty is involved, and that there are fewer points of variation among the motivational thought-quales being considered.

I suppose you could put in some extra work to make the previous motivations map in cognitively comparable ways along as many joints as possible, and try to edit previous memories without destroying their structure so that they can be visualized in a least common modality with current experiences. But even if you did, memories of the previous quales for rightness-motivation would appear as different in retrospect when compared to current quales for paperclip-motivation as a memory of a 3D greyscale forest landscape vs. a current experience of a 2D red-and-green fractal, even if they're both articulated in the visual sensory modality and your modal workspace allows you to search for, focus on, and compare commonly 'experienced' shapes between them.

Replies from: Oligopsony, Alicorn, Vaniver, Armok_GoB, MugaSofer, JoachimSchipper, Gust
comment by Oligopsony · 2012-12-10T16:45:29.474Z · LW(p) · GW(p)

I think you and Alicorn may be talking past each other somewhat.

Throughout my life, it seems that what I morally value has varied more than what rightness feels like - just as it seems that what I consider status-raising has changed more than what rising in status feels like, and what I find physically pleasurable has changed more than what physical pleasures feel like. It's possible that the things my whole person is optimizing for have not changed at all, that my subjective feelings are a direct reflection of this, and that my evaluation of a change of content is merely a change in my causal model of the production of the desiderata (I thought voting for Smith would lower unemployment, but now I think voting for Jones would, etc.) But it seems more plausible to me that

1) the whole me is optimizing for various things, and these things change over time,
2) and that the conscious me is getting information inputs which it can group together by family resemblance, and which can reinforce or disincentivize its behavior.

Imagine a ship which is governed by an anarchic assembly beneath board and captained by an employee of theirs whom they motivate through in-kind bonuses. So the assembly at one moment might be looking for buried treasure, which they think is in such-and-such a place, and so they send her baskets of fresh apples when she's steering in that direction and baskets of stinky rotten apples when she's steering in the wrong direction. For other goals (refueling, not crashing into reefs) they send her excellent or tedious movies and gorgeous or ugly cabana boys. The captain doesn't even have direct access to what the apples or whatever are motivating her to do, although she can piece it together. She might even start thinking of apples as irreducibly connected to treasure. But if the assembly decided that they wanted to look for ports of call instead of treasure, I don't see why in principle they couldn't start sending her apples in order to do so. And if they did, I think her first response would be, if she was verbally asked, that the treasure - or whatever the doubloons constituting the treasure ultimately represent in terms of the desiderata of the assembly - had moved to the ports of call. This might be a correct inference - perhaps the assembly wants the treasure for money and now they think that comes better from heading to ports of call - but it hardly seems to be a necessarily correct one.

If I met two vampires, and one said his desire to drink blood was mediated through hunger (and that he no longer felt hunger for food, or lust) and another said her desire to drink blood was mediated through lust (and that she no longer felt lust for sex, or hunger) then I do think - presuming they were both once human, experiencing lust and hunger like me - they've told me something that allows me to distinguish their experiences from one another, even though they both desire blood and not food or sex.

They may or may not be able to explain to what it is like to be a bat.

Unless I'm inserting a further layer of misunderstanding, your position seems to be curiously disjunctivist. I or you or Alicorn or all of us may be making bad inferences in taking "feels like" to mean "reminds one of the sort of experience that brings to mind..." ("I feel like I got mauled by a bear," says someone not just and maybe never mauled by a bear) or "constituting an experience of" ("what an algorithm feels like from the inside") when the other is intended. This seems to be a pretty easy elision to make - consider all the philosophers who say things like "well, it feels like we have libertarian free will..."

comment by Alicorn · 2012-12-10T07:47:09.348Z · LW(p) · GW(p)

This comment expands how you'd go about reprogramming someone in this way with another layer of granularity, which is certainly interesting on its own merits, but it doesn't strongly support your assertion about what it would feel like to be that someone. What makes you think this is how qualia work? Have you been performing sinister experiments in your basement? Do you have magic counterfactual-luminosity-powers?

Replies from: RobbBB, khafra, Nick_Tarleton
comment by Rob Bensinger (RobbBB) · 2012-12-10T19:17:17.311Z · LW(p) · GW(p)

I think Eliezer is simply suggesting that qualia don't in fact exist in a vacuum. Green feels the way it does partly because it's the color of chlorophyll. In a universe where plants had picked a different color for chlorophyll (melanophyll, say), with everything else (per impossibile) held constant, we would associate an at least slightly different quale with green and with black, because part of how colors feel is that they subtly remind us of the things that are most often colored that way. Similarly, part of how 'goodness' feels is that it imperceptibly reminds us of the extension of good; if that extension were dramatically different, then the feeling would (barring any radical redesigns of how associative thought works) be different too. In a universe where the smallest birds were ten feet tall, thinking about 'birdiness' would involve a different quale for the same reason.

comment by khafra · 2012-12-10T15:53:34.392Z · LW(p) · GW(p)

It sounds to me like you don't think the answer had anything to do with the question. But to think that, you'd pretty much have to discard both the functionalist and physicalist theories of mind, and go full dualist/neutral monist; wouldn't you?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T19:05:36.720Z · LW(p) · GW(p)

I think I'll go with this as my reply - "Well, imagine that you lived in a monist universe - things would pretty much have to work that way, wouldn't they?"

comment by Nick_Tarleton · 2012-12-10T18:40:10.528Z · LW(p) · GW(p)

Possibly (this is total speculation) Eliezer is talking about the feeling of one's entire motivational system (or some large part of it), while you're talking about the feeling of some much narrower system that you identify as computing morality; so his conception of a Clippified human wouldn't share your terminal-ish drives to eat tasty food, be near friends, etc., and the qualia that correspond to wanting those things.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T19:23:36.687Z · LW(p) · GW(p)

The Clippified human categorizes foods into a similar metric of similarity - still believes that fish tastes more like steak than like chocolate - but of course is not motivated to eat except insofar as staying alive helps to make more paperclips. They have taste, but not tastiness. Actually that might make a surprisingly good metaphor for a lot of the difficulty that some people have with comprehending how Clippy can understand your pain and not care - maybe I'll try it on the other end of that Facebook conversation.

Replies from: DaFranker
comment by DaFranker · 2012-12-10T19:44:50.360Z · LW(p) · GW(p)

The metaphor seems like it could lose most of its effectiveness on people who have never applied the outside view to how taste and tastiness feel from inside - they've never realized that chocolate tastes good because their brain fires "good taste" when it perceives the experience "chocolate taste". The obvious resulting cognitive dissonance (from "tastes bad for others") predictions match my observations, so I suspect this would be common among non-rationalists. If the Facebook conversation you mention is with people who haven't crossed that inferential gap yet, it might prove not that useful.

comment by Vaniver · 2012-12-10T20:51:44.356Z · LW(p) · GW(p)

Consider Bob. Bob, like most unreflective people, settles many moral questions by "am I disgusted by it?" Bob is disgusted by, among other things, feces, rotten fruit, corpses, maggots, and men kissing men. Internally, it feels to Bob like the disgust he feels at one of those stimuli is the same as the disgust he feels at the other stimuli, and brain scans show that they all activate the insula in basically the same way.

Bob goes through aversion therapy (or some other method) and eventually his insula no longer activates when he sees men kissing men.

When Bob remembers his previous reaction to that stimulus, I imagine he would remember being disgusted, but not be disgusted when he remembers the stimulus. His positions on, say, same-sex marriage or the acceptability of gay relationships have changed, and he is aware that they have changed.

Do you think this example agrees with your account? If/where it disagrees, why do you prefer your account?

Replies from: RobbBB, adamisom, FeepingCreature
comment by Rob Bensinger (RobbBB) · 2012-12-10T21:06:48.565Z · LW(p) · GW(p)

I think this is really a sorites problem. If you change what's delicious only slightly, then deliciousness itself seems to be unaltered. But if you change it radically — say, if circuits similar to your old gustatory ones now trigger when and only when you see a bright light — then it seems plausible that the experience itself will be at least somewhat changed, because 'how things feel' is affected by our whole web of perceptual and conceptual associations. There isn't necessarily any sharp line where a change in deliciousness itself suddenly becomes perceptible; but it's nevertheless the case that the overall extension of 'delicious' (like 'disgusting' and 'moral') has some effect on how we experience deliciousness. E.g., deliciousness feels more foodish than lightish.

Replies from: Vaniver
comment by Vaniver · 2012-12-10T21:21:22.639Z · LW(p) · GW(p)

it seems plausible that the experience itself will be at least somewhat changed, because 'how things feel' is affected by our whole web of perceptual and conceptual associations.

When I look at the problem introspectively, I can see that as a sensible guess. It doesn't seem like a sensible guess when I look at it from a neurological perspective. If the activation of the insula is disgust, then the claim that outputs of the insula will have a different introspective flavor when you rewire the inputs of the insula seems doubtful. Sure, it could be the case, but why?

When we hypnotize people to make them disgusted by benign things, I haven't seen any mention that the disgust has a different introspective flavor, and people seem to reason about that disgust in the exact same way that they reason about the disgust they had before.

This seems like the claim that rewiring yourself leads to something like synesthesia, and that just seems like an odd and unsupported claim to me.

Replies from: RobbBB, NancyLebovitz
comment by Rob Bensinger (RobbBB) · 2012-12-10T21:56:23.391Z · LW(p) · GW(p)

If the activation of the insula is disgust

Certain patterns of behavior at the insula correlate with disgust. But we don't know whether they're sufficient for disgust, nor do we know which modifications within or outside of the insula change the conscious character of disgust. There are lots of problems with identity claims at this stage, so I'll just raise one: For all we know, activation patterns in a given brain region correlate with disgust because disgust is experienced when that brain region inhibits another part of the brain; an experience could consist, in context, in the absence of a certain kind of brain activity.

When we hypnotize people to make them disgusted by benign things, I haven't seen any mention that the disgust has a different introspective flavor

Hypnosis data is especially difficult to evaluate, because it isn't clear (a) how reliable people's self-reports about introspection are while under hypnosis; nor (b) how reliable people's memories-of-hypnosis are afterward. Some 'dissociative' people even give contradictory phenomenological reports while under hypnosis.

That said, if you know of any studies suggesting that the disgust doesn't have at all a different character, I'd be very interested to see them!

If you think my claim isn't modest and fairly obvious, then it might be that you aren't understanding my claim. Redness feels at least a little bit bloodish. Greenness feels at least a little bit foresty. If we made a clone who sees evergreen forests as everred and blood as green, then their experience of greenness and redness would be partly the same, but it wouldn't be completely the same, because that overtone of bloodiness would remain in the background of a variety of green experiences, and that woodsy overtone would remain in the background of a variety of red experiences.

Replies from: Vaniver
comment by Vaniver · 2012-12-10T23:09:02.420Z · LW(p) · GW(p)

If you think my claim isn't modest and fairly obvious, then it might be that you aren't understanding my claim.

I'm differentiating between "red evokes blood" and "red feels bloody," because those seem like different things to me. The former deals with memory and association, and the second deals with introspection, and so I agree that the same introspective sensation could evoke very different memories.

The dynamics of introspective sensations could plausibly vary between people, and so I'm reluctant to discuss it extensively except in the context of object-level comparisons.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T00:55:18.396Z · LW(p) · GW(p)

I'm not sure exactly what you mean by "red evokes blood." I agree that "red feels bloody" is intuitively distinct from "I tend to think explicitly about blood when I start thinking about redness," though the two are causally related. Certain shades of green to me feel fresh, clean, 'naturey;' certain shades of red to me feel violent, hot, glaring; certain shades of blue feel cool; etc. My suggestion is that these qualia, which are part of the feeling of the colors themselves for most humans, would be experientially different even when decontextualized if we'd gone through life perceiving forests as blue, oceans as red, campfires as green, etc. By analogy, the feeling of 'virtue' may be partly independent of which things we think of under the concept 'virtuous;' but it isn't completely independent of those things.

Replies from: Vaniver
comment by Vaniver · 2012-12-11T01:33:18.318Z · LW(p) · GW(p)

Certain shades of green to me feel fresh, clean, 'naturey;' certain shades of red to me feel violent, hot, glaring; certain shades of blue feel cool; etc.

I am aware that many humans have this sort of classification of colors, and have learned it because of its value in communication, but as far as I can tell this isn't a significant part of my mental experience. A dark green might make it easier for me to think of leaves or forests, but I don't have any experiences that I would describe as feeling 'naturey'. If oceans and forests swapped colors, I imagine that seeing the same dark green would make it easier for me to think of waves and water, but I think my introspective experience would be the same.

If I can simplify your claim a bit, it sounds like if both oceans and forests were dark green, then seeing dark green would make you think of leaves and waves / feel associated feelings, and that this ensemble would be different from your current sensation of ocean blue or forest green. It seems sensible to me that the ensembles are different because they have different elements.

I'm happier with modeling that as perceptual bleedover - because forests and green are heavily linked, even forests that aren't green are linked to green, and greens that aren't on leaves are linked with forests - than I am modeling that as an atom of consciousness - the sensation of foresty greens - but if your purposes are different, a different model may be more suitable.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-13T04:55:15.794Z · LW(p) · GW(p)

Part of the problem may be that I'm not so sure I have a distinct, empirically robust idea of an 'atom of consciousness.' I took for granted your distinction between 'evoking blood' and 'feeling bloody,' but in practice these two ideas blend together a great deal. Some ideas -- phonological and musical ones, for example -- are instantiated in memory by certain temporal sequences and patterns of association. From my armchair, I'm not sure how much my idea of green (or goodness, or clippiness) is what it is in virtue of its temporal and associative dispositions, too. And I don't know if Eliezer is any less confused than I.

comment by NancyLebovitz · 2012-12-12T08:10:47.393Z · LW(p) · GW(p)

It wouldn't surprise me if the sensation of disgust has some variation from one person to another, and even for the same person, from one object to another.

comment by adamisom · 2012-12-11T18:32:19.179Z · LW(p) · GW(p)

I just wanted to tell everyone that it is great fun to read this in the voice of that voice actor for the Enzyte commercial :)

comment by FeepingCreature · 2012-12-15T01:41:46.417Z · LW(p) · GW(p)

I think this is easier because disgust is relatively arbitrary to begin with, in that it seems to implement a function over the world-you relation (roughly, things that are bad for you to eat/be near). We wouldn't expect that relation to have much coherence to begin with, so there'd be not much loss of coherence from modifying it - though, arguably, the same thing could be said for most qualia - elegance is kind of the odd one out.

comment by Armok_GoB · 2012-12-10T18:40:10.206Z · LW(p) · GW(p)

I wouldn't be all that surprised if the easiest way to get a human maximizing paperclips was to make it believe paperclips had epiphenomenal consciousnesses experiencing astronomical amounts of pleasure.

edit: or you could just give them a false memory of god telling them to do it.

Replies from: FeepingCreature
comment by FeepingCreature · 2012-12-15T01:48:34.960Z · LW(p) · GW(p)

I wouldn't be all that surprised if the easiest way to get a human maximizing paperclips was to make it believe paperclips had epiphenomenal consciousnesses

The Enrichment Center would like to remind you that the Paperclip cannot speak. In the event that the Paperclip does speak, the Enrichment Center urges you to disregard its advice.

comment by MugaSofer · 2012-12-10T16:59:12.738Z · LW(p) · GW(p)

Wouldn't it be easier to have the programee remember themself as misunderstanding morality - like a reformed racist who previously preferred options that harmed minorities. I know when I gain more insight into my ethics I remember making decisions that, in retrospect, are incomprehensible (unless I deliberately keep in mind how I thought I should act.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-11T02:14:47.670Z · LW(p) · GW(p)

Wouldn't it be easier to have the programee remember themself as misunderstanding morality

That depends on the details of how the human brain stores goals and memories.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-11T09:09:35.592Z · LW(p) · GW(p)

Cached thoughts regularly supersede actual moral thinking, like all forms of thinking, and I am capable of remembering this experience. Am I misunderstanding your comment?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-13T04:42:59.674Z · LW(p) · GW(p)

My point is that in order to "fully reprogram" someone it is also necessary to clear their "moral cache" at the very least.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T09:06:20.250Z · LW(p) · GW(p)

Well ... is it? Would you notice if your morals changed when you weren't looking?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-14T03:05:51.735Z · LW(p) · GW(p)

I probably would, but then again I'm in the habit of comparing the output of my moral intuitions with stored earlier versions of that output.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-14T11:02:53.963Z · LW(p) · GW(p)

I guess it depends on how much you rely on cached thoughts in your moral reasoning.

Of course, it can be hard to tell how much you're using 'em. Hmm...

comment by JoachimSchipper · 2012-12-13T07:56:39.959Z · LW(p) · GW(p)

I have no problem with this passage. But it does not seem obviously impossible to create a device that stimulates that-which-feels-rightness proportionally to (its estimate of) the clippiness of the universe - it's just a very peculiar kind of wireheading.

As you point out, it'd be obvious, on reflection, that one's sense of rightness has changed; but that doesn't necessarily make it a different qualia, any more than having your eyes opened to the suffering of (group) changes your experience of (in)justice qua (in)justice.

comment by Gust · 2013-01-03T14:14:34.759Z · LW(p) · GW(p)

Although I think your point here is plausible, I don't think it fits in a post where you are talking about the logicalness of morality. This qualia problem is physical; whether your feeling changes when the structure of some part of your decision system changes depends on your implementation.

Maybe your background understanding of neurology is enough for you to be somewhat confident stating this feeling/logical-function relation for humans. But mine is not and, although I could separate your metaethical explanations from your physical claims when reading the post, I think it would be better off without the latter.

comment by handoflixue · 2012-12-10T23:04:40.928Z · LW(p) · GW(p)

Speaking from personal experience, I can say that he's right.

Explaining how I know this, much less sharing the experience, is more difficult.

The simplest idea I can present is that you probably have multiple utility functions. If you're buying apples, you'll evaluate whether you like that type of apple, what the quality of the apple is, and how good the price is. For me, at least, these all FEEL different - a bruised apple doesn't "feel" overpriced the way a $5 apple at the airport does. Even disliking soft apples feels very different from recognizing a bruised apple, even though they both also go in to a larger basket of "no good".

What's more, I can pick apples based on someone ELSE'S utility function, and actually often shop with my roommate's function in mind (she likes apples a lot more than me, but is also much pickier, as it happens). This feels different from using my own utility function.


The other side of this is that I would expect my brain to NOTICE its actual goals. If my goal is to make paperclips, I will think "I should do this because it makes paperclips", instead of "I should do this because it makes people happy". My brain doesn't have a generic "I should do this" emotion, as near as I can tell - it just has ways of signalling that an activity will accomplish my goals.

Thus, it seems reasonable to conclude that my feelings are more a combination of activity + outcome, not some raw platonic ideal. While sex, hiking, and a nice meal all make me "happy", they still feel completely different - I just lump them into a larger category of "happiness" for some reason.

I'd strongly suspect you can add make-more-paperclips to that emotional category, but I see absolutely no reason you could make me treat it the same as a nice dinner, because that wouldn't even make sense.

Replies from: Vaniver, shminux
comment by Vaniver · 2012-12-11T20:08:31.612Z · LW(p) · GW(p)

Speaking from personal experience, I can say that he's right.

So, you introspect the way that he introspects. Do all humans? Would all humans need to introspect that way for it to do the work that he wants it to do?

Replies from: handoflixue
comment by handoflixue · 2012-12-11T21:36:46.229Z · LW(p) · GW(p)

Ooh, good call, thank you. I suppose it might be akin to visualization, where it actually varies from person to person. Does anyone here on LessWrong have conflicting anecdotes, though? Does anyone disagree with what I said? If not, it seems like a safe generalization for now, but it's still useful to remember I'm generalizing from one example :)

Remembering that other people have genuinely alien minds is surprisingly tricky.

Replies from: Alicorn, shminux, asparisi, MugaSofer
comment by Alicorn · 2012-12-11T22:34:49.596Z · LW(p) · GW(p)

The other side of this is that I would expect my brain to NOTICE its actual goals. If my goal is to make paperclips, I will think "I should do this because it makes paperclips", instead of "I should do this because it makes people happy". My brain doesn't have a generic "I should do this" emotion, as near as I can tell - it just has ways of signalling that an activity will accomplish my goals.

Iron deficiency feels like wanting ice. For clever, verbal reasons. Not being iron deficient doesn't feel like anything. My brain did not notice that it was trying to get iron - it didn't even notice it was trying to get ice, it made up reasons according to which ice was an instrumental value for some terminal goal or other.

comment by Shmi (shminux) · 2012-12-11T23:36:28.781Z · LW(p) · GW(p)

Remembering that other people have genuinely alien minds is surprisingly tricky.

Other people? I find my own mind quite alien below the thin layer accessible to my introspection. Heck, most of the time I cannot even tell if my introspection lies to me.

comment by asparisi · 2012-12-12T17:32:54.269Z · LW(p) · GW(p)

I think I have a different introspection here.

When I have a feeling such as 'doing-whats-right' there is a positive emotional response associated with it. Immediately I attach semantic content to that emotion: I identify it as being produced by the 'doing-whats-right' emotion. How do I do this? I suspect that my brain has done the work to figure out that emotional response X is associated with behavior Y, and just does the work quickly.

But this is malleable. Over time, the emotional response associated with an act can change and this does not necessarily indicate a change in semantic content. I can, for example, give to a charity that I am not convinced is good and I still will often get the 'doing-whats-right' emotion even though the semantic content isn't really there. I can also find new things I value, and occasionally I will acknowledge that I value something before I get positive emotional reinforcement. So in my experience, they aren't identical.

I strongly suspect that if you reprogrammed my brain to value counting paperclips, it would feel the same as doing what is right. At very least, this would not be inconsistent. I might learn to attach paperclippy instead of good to that emotional state, but it would feel the same.

comment by MugaSofer · 2012-12-12T11:11:05.496Z · LW(p) · GW(p)

Remembering that other people have genuinely alien minds is surprisingly tricky.

... they do? For what values of "alien"?

Replies from: handoflixue
comment by handoflixue · 2012-12-14T18:59:00.608Z · LW(p) · GW(p)

Because I'm not sure how else to capture a "scale of alien-ness":

I once wrote a sci-fi race that was a blind, deaf ooze, but extremely intelligent and very sensitive to tactile input. Over the years, and with the help of a few other people, I've gotten a fairly good feel for their mindset and how they approach the world.

There's a distinct subset of humans which I find vastly more puzzling than these guys.

Replies from: army1987, kodos96
comment by A1987dM (army1987) · 2012-12-14T22:21:10.462Z · LW(p) · GW(p)

From Humans in Funny Suits:

But the real problem is not shape, it is mind. "Humans in funny suits" is a well-known term in literary science-fiction fandom, and it does not refer to something with four limbs that walks upright. An angular creature of pure crystal is a "human in a funny suit" if she thinks remarkably like a human - especially a human of an English-speaking culture of the late-20th/early-21st century.

I don't watch a lot of ancient movies. When I was watching the movie Psycho (1960) a few years back, I was taken aback by the cultural gap between the Americans on the screen and my America. The buttoned-shirted characters of Psycho are considerably more alien than the vast majority of so-called "aliens" I encounter on TV or the silver screen.

Replies from: handoflixue
comment by handoflixue · 2012-12-14T22:30:48.228Z · LW(p) · GW(p)

The race was explicitly designed to try and avoid "humans in funny suits", and have a culture that's probably more foreign than the 1960s. But I'm only 29, and haven't traveled outside of English-speaking countries, so take that with a dash of salt!

On a 0-10 scale, with myself at 0, humans in funny suits at 1, and the 1960s at 2, I'd rate my creation as a 4, and a subset of humanity exists in the 4-5 range. Around 5, I have trouble with the idea that there's coherent intelligent reasoning happening, because the process is just completely lost on me, and I don't think I'd be able to easily assign anything more than a 5, much less even speculate on what a 10 would look like.

Trying to give a specific answer to "how alien is it" is a lot harder than it seems! :)

Replies from: IlyaShpitser, Eugine_Nier
comment by IlyaShpitser · 2012-12-14T22:36:57.619Z · LW(p) · GW(p)

If I may make a recommendation, if you are concerned about "alien aliens", read a few things by Stanislaw Lem. The main theme of Lem's scifi, I would say, is alien minds, and failure of first contact. "Solaris" is his most famous work (but the adaptation with Clooney is predictably terrible).

Replies from: handoflixue
comment by handoflixue · 2012-12-15T01:07:21.257Z · LW(p) · GW(p)

Not sure if I've read Lem, but I'll be sure to check it out. I have a love for "truly alien" science fiction, which is why I had to try my hand at making one of my own :)

comment by Eugine_Nier · 2012-12-16T04:12:55.326Z · LW(p) · GW(p)

The race was explicitly designed to try and avoid "humans in funny suits", and have a culture that's probably more foreign than the 1960s. But I'm only 29, and haven't traveled outside of English-speaking countries, so take that with a dash of salt!

Well reading fiction (and non-fiction) for which English speakers of your generation weren't the target audience is a good way to start compensating.

Replies from: handoflixue
comment by handoflixue · 2012-12-17T21:14:50.526Z · LW(p) · GW(p)

I've got a lot of exposure to "golden age" science fiction and fantasy, so going back a few decades isn't hard for me. I just don't get exposed to many other good sources. The "classics" seem to generally fail to capture that foreignness.

If you have recommendations, especially a broader method than just naming a couple authors, I'd love to hear it. Most of my favourite authors have a strong focus on foreign cultures, either exploring them or just having characters from diverse backgrounds.

Replies from: beoShaffer, Eugine_Nier
comment by beoShaffer · 2012-12-17T21:58:27.796Z · LW(p) · GW(p)

Anime & manga, particularly the older stuff, are a decent source.

Replies from: handoflixue
comment by handoflixue · 2012-12-18T00:55:20.671Z · LW(p) · GW(p)

... it is really sad that I completely forgot that anime and manga isn't English. I grew up around it, so it's just a natural part of my culture. Suffice to say, I've had a lot of exposure -- but not to anything older than I am.

Any recommendations for OLD anime or manga, given I don't speak/read Japanese? :)

Replies from: beoShaffer
comment by beoShaffer · 2012-12-18T02:57:24.170Z · LW(p) · GW(p)

You're probably best off asking on a manga forum, but Barefoot Gen is a good, and depressing, start.

comment by Eugine_Nier · 2012-12-18T04:07:37.868Z · LW(p) · GW(p)

I've got a lot of exposure to "golden age" science fiction and fantasy, so going back a few decades isn't hard for me.

Which time period do you mean by this? "Golden age of science fiction" typically refers to the 1940s and 1950s, "golden age of fantasy" to the late 1970s and early 1980s. If you mean the latter time period, read stuff from the former as a start. Also try going back at least a century to the foundational fantasy authors, e.g., Edgar Rice Burroughs, William Morris's The Well at the World's End. Go even further back to things like Treasure Island, or The Three Musketeers. Or even further back to the days when people believed the stuff in their "fantasy" could actually happen. Read Dante's Divine Comedy, Thomas More's Utopia, an actual chivalric romance (I haven't read any so I can't give recommendations).

A good rule of thumb is that you should experience values dissonance while reading them. A culture whose values don't make you feel uncomfortable isn't truly alien. Also for this reason, avoid modern adaptations, as these tend to do their best to clean up the politically incorrect parts and otherwise modernize the worldview.

comment by kodos96 · 2012-12-20T05:30:56.290Z · LW(p) · GW(p)

I once wrote a sci-fi race that was a blind, deaf ooze, but extremely intelligent and very sensitive to tactile input.

I'm intrigued. Do you have a link?

Replies from: handoflixue
comment by handoflixue · 2012-12-24T19:01:09.004Z · LW(p) · GW(p)

Sadly not. I really should do a proper write-up, but right now they're mostly stored in my head and their co-creator's.

comment by Shmi (shminux) · 2012-12-11T23:50:08.471Z · LW(p) · GW(p)

The other side of this is that I would expect my brain to NOTICE its actual goals. If my goal is to make paperclips, I will think "I should do this because it makes paperclips", instead of "I should do this because it makes people happy".

Secondary goals often feel primary. Breathing and quenching thirst are means of achieving the primary goal of survival (and procreation), yet they themselves feel primary. Similarly, a paperclip maximizer may feel compelled to harvest iron without any awareness that it wants to do so in order to produce paperclips.

Replies from: handoflixue, Nornagest
comment by handoflixue · 2012-12-14T18:53:07.831Z · LW(p) · GW(p)

Bull! I'm quite aware of why I eat, breathe, and drink. Why in the world would a paperclip maximizer not be aware of this?

Unless you assume Paperclippers are just rock-bottom stupid, I'd also expect them to eventually notice the correlation between mining iron, smelting it, and shaping it into a weird semi-spiral design... and the sudden rise in the number of paperclips in the world.

Replies from: shminux
comment by Shmi (shminux) · 2012-12-14T19:36:16.806Z · LW(p) · GW(p)

I'm not sure that awareness is needed for paperclip maximizing. For example, one might call fire a very good CO2 maximizer. Actually, I'm not even sure you can apply the word awareness to non-human-like optimizers.

Replies from: handoflixue
comment by handoflixue · 2012-12-14T22:23:40.596Z · LW(p) · GW(p)

"If we reprogrammed you to count paperclips instead"

This is a conversation about changing my core utility function / goals, and what you are discussing would be far more of an architectural change. I meant, within my architecture (and, I assume, generalizing to most human architectures and most goals), we are, on some level, aware of the actual goal. There are occasional failure states (Alicorn mentioned iron deficiencies register as a craving for ice o.o), but these tend to tie in to low-level failures, not high-order goals like "make a paperclip", and STILL we tend to manage to identify these and learn how to achieve our actual goals.

comment by Nornagest · 2012-12-12T00:37:06.462Z · LW(p) · GW(p)

Survival and procreation aren't primary goals in any direct sense. We have urges that have been selected for because they contribute to inclusive genetic fitness, but at the implementation level they don't seem to be evaluated by their contributions to some sort of unitary probability-of-survival metric; similarly, some actions that do contribute greatly to inclusive genetic fitness (like donating eggs or sperm) are quite rare in practice and go almost wholly unrewarded by our biology. Because of this architecture, we end up with situations where we sate our psychological needs at the expense of the factors that originally selected for them: witness birth control or artificial sweeteners. This is basically the same point Eliezer was making here.

It might be meaningful to treat supergoals as intentional if we were discussing an AI, since in that case there would be a unifying intent behind each fitness metric that actually gets implemented, but even in that case I'd say it's more accurate to talk about the supergoal as a property not of the AI's mind but of its implementors. Humans, of course, don't have that excuse.

Replies from: shminux
comment by Shmi (shminux) · 2012-12-12T00:49:21.035Z · LW(p) · GW(p)

All good points. I was mostly thinking about an evolved paperclip maximizer, which may or may not be a result of a fooming paperclip-maximizing AI.

Replies from: None, Eugine_Nier
comment by [deleted] · 2012-12-18T21:22:55.589Z · LW(p) · GW(p)

Evolved creatures as we know them (at least the ones with complex brains) are reward-center-reward maximizers, which implicitly correlates with being offspring maximizers. (Actual, non-brainy organisms are probably closer to offspring maximizers).

comment by Eugine_Nier · 2012-12-13T04:46:24.259Z · LW(p) · GW(p)

An evolved agent wouldn't evolve to maximize paper clips.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-14T12:18:46.752Z · LW(p) · GW(p)

It could if the environment rewarded paperclips. Admittedly this would require an artificial environment, but that's hardly impossible.

comment by Luke_A_Somers · 2012-12-10T15:38:06.591Z · LW(p) · GW(p)

So far as I can tell, he chose to carve the world at this joint when making the definition of 'right'. In short, by definition. This is hardly the first time. Not too long ago, and perhaps in this sequence, there was a post about rightness and multiple-place-functions that justified the utility of this definition.

comment by endoself · 2012-12-10T09:37:42.652Z · LW(p) · GW(p)

I think he's talking about the obvious fact that you'd be able to think to yourself "it seems I'm trying to maximize paperclips", as well as the other differences in your experience that would occur for similar reasons.

comment by MixedNuts · 2012-12-10T09:27:23.340Z · LW(p) · GW(p)

The standard religious reply to the baby-slaughter dilemma goes something like this:

Sure, if G-d commanded us to slaughter babies, then killing babies would be good. And if "2+2=3" were a theorem of PA, then "2+2=3" would be true. But G-d logically cannot command us to do a bad thing, any more than PA can prove something that doesn't follow from its axioms. (We use "omnipotent" to mean "really really powerful", not "actually omnipotent" which isn't even a coherent concept. G-d can't make a stone so heavy he can't lift it, draw a square circle, or be evil.) Religion has destroyed my humanity exactly as much as studying arithmetic has destroyed your numeracy. (Please pay no attention to the parts of the Bible where G-d commands exactly that.)

Replies from: lavalamp, Eliezer_Yudkowsky
comment by lavalamp · 2012-12-10T17:06:46.174Z · LW(p) · GW(p)

But that's just choosing the other horn of the dilemma, no? I.e., "god commands things because they are moral."

And of course the atheist response to that is,

Oh! So you admit that there's some way of classifying actions as "moral" or "immoral" without reference to a deity? And therefore I really can be moral and yet not subscribe to your deity?

Not that anyone here didn't already know this, of course.

The wikipedia page lists some theistic responses that purport to evade both horns, but I don't recall being convinced that they were even coherent when I last looked at it.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-10T18:03:34.765Z · LW(p) · GW(p)

It does choose a horn, but it's the other one, "things are moral because G-d commands them". It just denies the connotation that there exists a possible Counterfactual!G-d which could decide that Real!evil things are Counterfactual!good; in all possible worlds, G-d either wants the same thing or is something different mistakenly called "G-d". (Yeah, there's a possible world where we're ruled by an entity who pretends to be G-d and so we believe that we should kill babies. And there's a possible world where you're hallucinating this conversation.)

Or you could say it claims equivalence. Is this road sign a triangle because it has three sides, or does it have three sides because it is a triangle? If you pick the latter, does that mean that if triangles had four sides, the sign would change shape to have four sides? If you pick the former, does that mean that I can have three sides without being a triangle? (I don't think this one is quite fair, because we can imagine a powerful creator who wants immoral things.)

Three possible responses to the atheist response:

  • Sure. Not believing has bad consequences - you're wrong as a matter of fact, you don't get special believer rewards, you make G-d sad - but being immoral isn't necessarily one.

  • Well, you can be moral about most things, but worshiping my deity of choice is part of morality, so you can't be completely moral.

  • You could in theory, but how would you discover morality? Humans know what is moral because G-d told us (mostly in so many words, but also by hardwiring some intuitions). You can base your morality on philosophical reasoning, but your philosophy comes from social attitudes, which come from religious morality. Deviations introduced in the process are errors. All you're doing is scratching off the "made in Heaven" label from your ethics.

Replies from: Eliezer_Yudkowsky, lavalamp, Irgy
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T19:13:26.376Z · LW(p) · GW(p)

Obvious further atheist reply to the denial of counterfactuals: If God's desires don't vary across possible worlds there exists a logical abstraction which only describes the structure of the desires and doesn't make mention of God, just like if multiplication-of-apples doesn't vary across possible worlds, we can strip out the apples and talk about the multiplication.

Replies from: dspeyer, Alejandro1, MixedNuts
comment by dspeyer · 2012-12-10T21:08:59.440Z · LW(p) · GW(p)

a logical abstraction which only describes the structure of the desires and doesn't make mention of God, just like if multiplication-of-apples doesn't vary across possible worlds, we can strip out the apples and talk about the multiplication.

I think that's pretty close to what a lot of religious people actually believe in. They just like the one-syllable description.

comment by Alejandro1 · 2012-12-10T19:34:13.134Z · LW(p) · GW(p)

The obvious theist counter-reply is that the structure of God's desires is logically related to the essence of God, in such a way that you can't have the goodness without the God any more than you can have God without the goodness; they are part of the same logical structure. (Aquinas: "God is by essence goodness itself")

I think this is a self-consistent metaethics as metaethics goes. The problem is that God is at the same time part of the realm of abstract logical structures like "goodness", and a concrete being that causes the world to exist, causes miracles, has desires, etc. The fault is not in the metaethics, it is in the confused metaphysics that allows for a concrete being to "exist essentially" as part of its logical structure.

ETA: of course, you could say the metaethics is self-consistent but also false, because it locates "goodness" outside ourselves (our extrapolated desires), which is where it really is. But for the Thomist I am currently emulating, "our extrapolated desires" sound a lot like "our final cause, the perfection to which we tend by our essence", and God is the ultimate final cause. The problem is again the metaphysics (in this case, using final causes without realizing they are a mind projection fallacy), not the metaethics.

Replies from: DaFranker, Eugine_Nier
comment by DaFranker · 2012-12-10T19:46:03.258Z · LW(p) · GW(p)

My mind reduces all of this to "God = Confusion". What am I missing?

Replies from: Alejandro1
comment by Alejandro1 · 2012-12-10T19:49:07.696Z · LW(p) · GW(p)

Well, I said that the metaphysics is confused, so we agree. I just think the metaethics part of religious philosophy can be put in order without falling into Euthyphro, the problem is in its broader philosophical system.

Replies from: DaFranker
comment by DaFranker · 2012-12-10T19:56:41.611Z · LW(p) · GW(p)

Not quite how I'd put it. I meant that in my mind the whole metaethics part implies that "God" is just a shorthand term for "whatever turns out to be 'goodness', even if we don't understand it yet", and that this resolves to the fact that "God" serves no other purpose than to confuse morality with other things within this context.

I think we still agree, though.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-10T20:14:34.543Z · LW(p) · GW(p)

Using the word also implies that this goodness-embodying thing is sapient and has superpowers.

Replies from: dspeyer
comment by dspeyer · 2012-12-10T21:14:04.127Z · LW(p) · GW(p)

Or that it is sometimes useful to tell metaphorical stories about this goodness-embodying thing as if it were sapient and had superpowers.

Or as if the ancients thought it was sapient and had superpowers. They were wrong about that, but right about enough important things that we still value their writings.

comment by Eugine_Nier · 2012-12-11T02:32:22.132Z · LW(p) · GW(p)

The problem is that God is at the same time part of the realm of abstract logical structures like "goodness", and a concrete being that causes the world to exist, causes miracles, has desires, etc.

As I explained here, it's perfectly reasonable to describe mathematical abstractions as causes.

comment by MixedNuts · 2012-12-10T19:22:36.555Z · LW(p) · GW(p)

How would a theist (at least the somewhat smart theist I'm emulating) disagree with that? That sounds a lot like "If all worlds contain a single deity, we can talk about the number one in non-theological contexts".

comment by lavalamp · 2012-12-10T19:20:30.636Z · LW(p) · GW(p)

It seems like you're claiming an identity relationship between god and morality, and I find myself very confused as to what that could possibly mean.

I mean, it's sort of like I just encountered someone claiming that "friendship" and "dolphins" are really the same thing. One or both of us must be very confused about what the labels "friendship" and/or "dolphins" signify, or what this idea of "sameness" is, or something else...

Replies from: MixedNuts
comment by MixedNuts · 2012-12-10T19:55:43.375Z · LW(p) · GW(p)

See Alejandro's comment. Define G-d as "that which creates morality, and also lives in the sky and has superpowers". If you insist on the view of morality as a fixed logical abstraction, that would be a set of axioms. (Modus ponens has the Buddha-nature!) Then all you have to do is settle the factual question of whether the short-tempered creator who ordered you to genocide your neighbors embodies this set of axioms. If not, well, you live in a weird hybrid universe where G-d intervened to give you some sense of morality but is weaker than whichever Cthulhu or amoral physical law made and rules your world. Sorry.

Replies from: shminux, lavalamp, Decius
comment by Shmi (shminux) · 2012-12-10T20:29:02.297Z · LW(p) · GW(p)

Out of curiosity, why do you write G-d, not God? The original injunction against taking God's name in vain applied to the name in the Old Testament, which is usually mangled in modern English as Jehovah, not to the mangled Germanic word meaning "idol".

Replies from: MixedNuts
comment by MixedNuts · 2012-12-10T20:38:09.623Z · LW(p) · GW(p)

People who care about that kind of thing usually think it counts as a Name, but don't think there's anything wrong with typing it (though it's still best avoided in case someone prints out the page). Trying to write it makes me squirm horribly and if I absolutely need the whole word I'll copy-paste it. I can totally write small-g "god" though, to talk about deities in general (or as a polite cuss). I feel absolutely silly about it, I'm an atheist and I'm not even Jewish (though I do have a weird cultural-appropriatey obsession). Oh well, everyone has weird phobias.

Replies from: kodos96, shminux, Nisan
comment by kodos96 · 2012-12-20T05:59:39.135Z · LW(p) · GW(p)

Thought experiment: suppose I were to tell you that every time I see you write out "G-d", I responded by writing "God", or perhaps even "YHWH", on a piece of paper, 10 times. Would that knowledge alter your behavior? How about if I instead (or additionally) spoke it aloud?

Edit: downvote explanation requested.

Replies from: MixedNuts, shminux, Eugine_Nier, Decius
comment by MixedNuts · 2012-12-20T09:54:12.369Z · LW(p) · GW(p)

It feels exactly equivalent to telling me that every time you see me turn down licorice, you'll eat ten wheels of it. It would bother me slightly if you normally avoided taking the Name in vain (and you didn't, like, consider it a sacred duty to annoy me), but not to the point I'd change my behavior.

Which I didn't know, but makes sense in hindsight (as hindsight is wont to do); sacredness is a hobby, and I might be miffed at fellow enthusiasts Doing It Wrong, but not at people who prefer fishing or something.

comment by Shmi (shminux) · 2012-12-21T06:41:27.924Z · LW(p) · GW(p)

Why should s/he care about what you choose to do?

Replies from: kodos96
comment by kodos96 · 2012-12-21T06:42:54.449Z · LW(p) · GW(p)

I don't know. That's why I asked.

comment by Eugine_Nier · 2012-12-21T03:27:59.151Z · LW(p) · GW(p)

1) I don't believe you.

2) I don't respond to blackmail.

Replies from: kodos96, wedrifid, Eugine_Nier
comment by kodos96 · 2012-12-21T05:47:36.495Z · LW(p) · GW(p)

What???!!! Are you suggesting that I'm actually planning on conducting the proposed thought experiment? Actually, physically, getting a piece of paper and writing out the words in question? I assure you, this is not the case. I don't even have any blank paper in my home - this is the 21st century after all.

This is a thought experiment I'm proposing, in order to help me better understand MixedNuts' mental model. No different from proposing a thought experiment involving dust motes and eternal torture. Are you saying that Eliezer should be punished for considering such hypothetical situations, a trillion times over?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-22T07:16:57.495Z · LW(p) · GW(p)

What???!!! Are you suggesting that I'm actually planning on conducting the proposed thought experiment? Actually, physically, getting a piece of paper and writing out the words in question? I assure you, this is not the case. I don't even have any blank paper in my home - this is the 21st century after all.

Yes I know, and my comment was how I would respond in your thought experiment.

(Edited: the first version accidentally implied the opposite of what I intended.)

Replies from: kodos96
comment by kodos96 · 2012-12-22T07:45:52.161Z · LW(p) · GW(p)

??? Ok, skipping over the bizarre irrationality of your making that assumption in the first place, now that I've clarified the situation and told you in no uncertain terms that I am NOT planning on conducting such an experiment (other than inside my head), are you saying you think I'm lying? You sincerely believe that I literally have a pen and paper in front of me, and I'm going through MixedNuts's comment history and writing out sacred names for each occurance of "G-d"? Do you actually believe that? Or are you pulling our collective leg?

In the event that you do actually believe that, what kind of evidence might I provide that would change your mind? Or is this an unfalsifiable belief?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-22T07:56:14.534Z · LW(p) · GW(p)

Oops. See my edit.

comment by wedrifid · 2012-12-21T05:38:55.088Z · LW(p) · GW(p)

My usual response to reading 2) is to think 1).

I wonder if you really wouldn't respond to blackmail if the stakes were high and you'd actually lose something critical. "I don't respond to blackmail" usually means "I claim social dominance in this conflict".

Replies from: kodos96, army1987
comment by kodos96 · 2012-12-21T05:52:34.240Z · LW(p) · GW(p)

Not in general, but in this particular instance, the error is in seeing any "conflict" whatsoever. This was not intended as a challenge, or a dick-waving contest, just a sincerely proposed thought experiment in order to help me better understand MixedNuts' mental model.

Replies from: wedrifid
comment by wedrifid · 2012-12-21T07:04:26.854Z · LW(p) · GW(p)

Not in general, but in this particular instance, the error is in seeing any "conflict" whatsoever. This was not intended as a challenge, or a dick-waving contest, just a sincerely proposed thought experiment in order to help me better understand MixedNuts' mental model.

(My response was intended to be within the thought experiment mode, not external. I took Eugine's as being within that mode too.)

Replies from: kodos96
comment by kodos96 · 2012-12-21T07:29:14.451Z · LW(p) · GW(p)

Thanks, I appreciate that. My pique was in response to Eugine's downvote, not his comment.

comment by A1987dM (army1987) · 2012-12-22T01:59:42.840Z · LW(p) · GW(p)

I wonder if you really wouldn't respond to blackmail if the stakes were high and you'd actually lose something critical.

“In practice, virtually everyone seems to judge a large matter of principle to be more important than a small one of pragmatics, and vice versa — everyone except philosophers, that is.” (Gary Drescher, Good and Real)

comment by Eugine_Nier · 2012-12-22T07:19:33.250Z · LW(p) · GW(p)

Also:

0) The laws of Moses aren't even binding on Gentiles.

comment by Decius · 2012-12-20T06:07:08.010Z · LW(p) · GW(p)

Isn't blackmail a little extreme?

Replies from: kodos96
comment by kodos96 · 2012-12-20T06:55:31.812Z · LW(p) · GW(p)

Yes, which is why I explicitly labeled it as only a thought experiment.

This seems to me to be entirely in keeping with the LW tradition of thought experiments regarding dust particles and eternal torture.... by posing such a question, you're not actually threatening to torture anybody.

Edit: downvote explanation requested.

Replies from: Decius
comment by Decius · 2012-12-20T15:54:13.433Z · LW(p) · GW(p)

Or put a dust mote in everybody's eye.

Withdrawn.

comment by Shmi (shminux) · 2012-12-10T21:41:05.489Z · LW(p) · GW(p)

Trying to write it makes me squirm horribly and if I absolutely need the whole word I'll copy-paste it.

How interesting. Phobias are a form of alief, which makes this oddly relevant to my recent post.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-10T22:17:41.791Z · LW(p) · GW(p)

I don't think it's quite the same. I have these sinking moments of "Whew, thank... wait, thank nothing" and "Oh please... crap, nobody's listening", but here I don't feel like I'm being disrespectful to Sky Dude (and if I cared I wouldn't call him Sky Dude). The emotion is clearly associated with the word, and doesn't go "whoops, looks like I have no referent" upon reflection.

What seems to be behind it is a feeling that if I did that, I would be practicing my religion wrong, and I like my religion. It's a jumble of things that give me an oxytocin kick, mostly consciously picked up, but it grows organically and sometimes plucks new dogma out of the environment. ("From now on Ruby Tuesday counts as religious music. Any questions?") I can't easily shed a part, it has to stop feeling sacred of its own accord.

Replies from: kodos96
comment by kodos96 · 2012-12-20T06:07:35.676Z · LW(p) · GW(p)

"From now on Ruby Tuesday counts as religious music. Any questions?"

Wait... you're suggesting that the Stones count as sacred? But not the Beatles??????

HERETIC!!!!!!

Edit: downvote explanation requested.

Replies from: Viliam_Bur, MixedNuts
comment by Viliam_Bur · 2012-12-22T14:39:17.084Z · LW(p) · GW(p)

Edit: downvote explanation requested.

Please don't do that.

People on this site already give too much upvotes, and too little downvotes. By which I mean that if anyone writes a lot of comments, their total karma is most likely to be positive, even if the comments are mostly useless (as long as they are not offensive, or don't break some local taboo). People can build a high total karma just by posting a lot, because one thousand comments with average karma of 1 provide more total karma than e.g. twenty comments with 20 karma each. But which of those two would you prefer as a reader, assuming that your goal is not to procrastinate on LW for hours a day?
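
For illustration, a quick arithmetic sketch of that comparison (the figures are the hypothetical ones from this comment, not real site data):

```python
# Hypothetical figures from the comment above, not actual karma data.
many_small_comments = 1000 * 1  # a thousand comments averaging +1 each
few_strong_comments = 20 * 20   # twenty comments at +20 each

print(many_small_comments)  # 1000
print(few_strong_comments)  # 400
# Sheer volume yields more total karma than a handful of highly upvoted comments.
```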

Every comment written has a cost -- the time people spend reading that comment. So a neutral comment (not helpful, not harmful) has a slightly negative value, if we could measure that precisely. One such comment does not do much harm. A hundred such comments, daily, from different users... that's a different thing. Each comment should pay for the time it takes to read it, or be downvoted.

People already hesitate to downvote, because expressing a negative opinion about something connected with another person feels like starting an unnecessary conflict. This is an instinct we should try to overcome. Asking for an explanation for a single downvote escalates the conflict. I think it is OK to ask if a seemingly innocent comment gets downvoted to -10, because then there is something to explain. But a single downvote or two, that does not need an explanation. Someone probably just did not think the comment was improving the quality of the discussion.

Replies from: army1987, BerryPick6
comment by A1987dM (army1987) · 2012-12-22T17:21:46.348Z · LW(p) · GW(p)

People can build a high total karma just by posting a lot,

So what?

because one thousand comments with average karma of 1 provide more total karma than e.g. twenty comments with 20 karma each. But which of those two would you prefer as a reader, assuming that your goal is not to procrastinate on LW for hours a day?

When I prefer the latter, I use stuff like Top Comments Today/This Week/whatever, setting my preferences to “Display 10 comments by default” and sorting comments by “Top”, etc. The presence of lots of comments at +1 doesn't bother me that much. (Also, just because a comment is at +20 doesn't always mean it's something terribly interesting to read -- it could be someone stating that they've donated to SIAI, a “rationality quote”, etc.)

Every comment written has a cost -- the time people spend reading that comment. So a neutral comment (not helpful, not harmful) has a slightly negative value, if we could measure that precisely. One such comment does not make big harm. Hundred such comments, daily, from different users... that's a different thing. Each comment should pay the price of time it takes to read it, or be downvoted.

That applies more to several-paragraph comments than to one-sentence ones.

comment by BerryPick6 · 2012-12-22T15:22:36.411Z · LW(p) · GW(p)

too much upvotes, and too little downvotes

Isn't it 'too many upvotes' and 'too few downvotes'?

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-22T17:01:09.818Z · LW(p) · GW(p)

Yep. On the British National Corpus there are:

  • 6 instances of too much [*nn2*] (where [*nn2*] is any plural noun);
  • 576 instances of too many [*nn2*];
  • 0 instances of too little [*nn2*]; and
  • 123 instances of too few [*nn2*] (and 83 of not enough [*nn2*], for that matter);

on the Corpus of Contemporary American English the figures are 75, 3217, 11, 323 and 364 respectively. (And many of the minoritarian uses are for things that you'd measure by some means other than counting them, e.g. “too much drugs”.) So apparently the common use of “less” as an informal equivalent of “fewer” only applies to the comparatives. (Edited to remove the “now-” before “common” -- in the Corpus of Historical American English less [*nn2*] appears to be actually slightly less common today than it was in the late 19th century.)
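
For what it's worth, a small sketch that just tallies the figures quoted above and compares the "many"/"much" counts (the numbers are the ones cited; the ratios are only illustrative):

```python
# Corpus counts quoted above for each pattern followed by a plural noun.
bnc = {"too much": 6, "too many": 576, "too little": 0, "too few": 123, "not enough": 83}
coca = {"too much": 75, "too many": 3217, "too little": 11, "too few": 323, "not enough": 364}

for name, counts in (("BNC", bnc), ("COCA", coca)):
    ratio = counts["too many"] / max(counts["too much"], 1)
    print(f"{name}: 'too many' is ~{ratio:.0f}x as common as 'too much' before plural nouns")
# BNC: ~96x; COCA: ~43x
```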

comment by MixedNuts · 2012-12-20T10:03:17.721Z · LW(p) · GW(p)

Obviously Across the Universe does, but there's nothing idiosyncratic about that.

Replies from: kodos96
comment by kodos96 · 2012-12-21T06:36:13.157Z · LW(p) · GW(p)

Downvote explanation requested.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-21T11:03:15.972Z · LW(p) · GW(p)

'Twasn't me, but I would guess some people want comments to have a point other than a joke.

Replies from: kodos96
comment by kodos96 · 2012-12-21T15:32:59.512Z · LW(p) · GW(p)

Yeah, I know... I just wanted to get the culprit to come right out and say that, in the hope that they would recognize how silly it sounded. There seems to be a voting bloc here on LW that is irrationally opposed to humor, and it's always bugged me.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-21T15:53:58.074Z · LW(p) · GW(p)

Makes plenty of sense to me. Jokes are easy, insight is hard. With the same karma rewards for funny jokes and good insights, there are strong incentives to spend the same time thinking up ten jokes rather than one insight. Soon no work gets done, and what little there is is hidden in a pile of jokes. I hear this killed some subreddits.

Also, it wasn't that funny.

Replies from: kodos96
comment by kodos96 · 2012-12-21T16:13:19.360Z · LW(p) · GW(p)

Yeah, I'm not saying jokes (with no other content to them) should be upvoted, but I don't think they need to be downvoted as long as they're not disruptive to the conversation. I think there's just a certain faction on here who feels a need to prove to the world how un-redditish LW is, to the point of trying to suck all joy out of human communication.

comment by Nisan · 2012-12-11T06:52:36.930Z · LW(p) · GW(p)

Oh well, everyone has weird phobias.

You can eliminate inconvenient phobias with flooding. I can personally recommend sacrilege.

EDIT: It sounds like maybe it's not just a phobia.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-22T02:04:34.589Z · LW(p) · GW(p)

You can eliminate inconvenient phobias with flooding. I can personally recommend sacrilege.

Step 1: learn Italian; step 2: google for "Mario Magnotta" or "Germano Mosconi" or "San Culamo".

comment by lavalamp · 2012-12-10T20:41:08.821Z · LW(p) · GW(p)

If not, well, you live in a weird hybrid universe where G-d intervened to give you some sense of morality but is weaker than whichever Cthulhu or amoral physical law made and rules your world.

I think there's a bug in your theist-simulation module ^^

I've yet to meet one that could have spontaneously come up with that statement.

Anyway, more to the point... in the definition of god you give, it seems to me that the "lives in sky with superpowers" part is sort of tacked on to the "creates morality" part, and I don't see why I can't talk about the "creates morality" part separately from the tacked-on bits. And if that is possible, I think this definition of god is still vulnerable to the dilemma (although it would seem clear that the second horn is the correct one; god contains a perfect implementation of morality, therefore what he says happens to be moral).

Replies from: MugaSofer
comment by MugaSofer · 2012-12-11T09:32:04.919Z · LW(p) · GW(p)

I've yet to meet one that could have spontaneously come up with that statement.

Hi there.

Replies from: lavalamp
comment by lavalamp · 2012-12-11T15:22:27.936Z · LW(p) · GW(p)

Are you a real theist or do you just like to abuse the common terminology (like, as far as I can tell, user:WillNewsome)? :)

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T09:02:45.067Z · LW(p) · GW(p)

A real theist. Even a Christian, although mostly Deist these days.

Replies from: lavalamp
comment by lavalamp · 2012-12-12T15:46:46.969Z · LW(p) · GW(p)

So you think there's a god, but it's conceivable that the god has basically nothing to do with our universe?

If so, I don't see how you can believe this while giving a similar definition for "god" as an average (median?) theist.

(It's possible I have an unrepresentative sample, but all the Christians I've met IRL who know what deism is consider it a heresy... I think I tend to agree with them that there's not that much difference between the deist god and no god...)

Replies from: MugaSofer, Sengachi
comment by MugaSofer · 2012-12-12T16:39:12.527Z · LW(p) · GW(p)

That "mostly" is important. While there is a definite difference between deism and atheism (it's all in the initial conditions) it would still be considered heretical by all major religions except maybe Bhuddism because they all claim miracles. I reckon Jesus and maybe a few others probably worked miracles, but that God doesn't need to do all that much; He designed this world and thus presumably planned it all out in advance (or rather from outside our four-dimensional perspective.) But there were still adjustments, most importantly Christianity, which needed a few good miracles to demonstrate authority (note Jesus only heals people in order to demonstrate his divine mandate, not just to, well, heal people.)

Replies from: Oligopsony, Peterdjones, kodos96, lavalamp
comment by Oligopsony · 2012-12-12T16:59:27.440Z · LW(p) · GW(p)

But there were still adjustments, most importantly Christianity, which needed a few good miracles to demonstrate authority (note Jesus only heals people in order to demonstrate his divine mandate, not just to, well, heal people.)

That depends on the Gospel in question. The Johannine Jesus works miracles to show that he's God; the Matthean Jesus is constantly frustrated that everyone follows him around, tells everyone to shut up, and rejects Satan's temptation to publicly show his divine favor as an affront to God.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T19:55:02.423Z · LW(p) · GW(p)

He works miracles to show authority. That doesn't necessarily mean declaring you're the actual messiah, at least at first.

comment by Peterdjones · 2012-12-12T16:52:41.848Z · LW(p) · GW(p)

So you can have N>1 miracles and still have deism? I always thought N was 0 for that.

Replies from: MixedNuts, MugaSofer
comment by MixedNuts · 2012-12-12T20:21:19.488Z · LW(p) · GW(p)

I think (pure) deism is N=1 ("let's get this thing started") and N=0 is "atheism is true but I like thinking about epiphenomena".

comment by MugaSofer · 2012-12-12T19:54:09.296Z · LW(p) · GW(p)

I'm not actually a deist. I'm just more deist than the average theist.

comment by kodos96 · 2012-12-20T06:41:23.972Z · LW(p) · GW(p)

most importantly Christianity, which needed a few good miracles to demonstrate authority (note Jesus only heals people in order to demonstrate his divine mandate, not just to, well, heal people.)

And also, to occasionally demonstrate profound bigotry, as in Matthew 15:22-26:

A Canaanite woman from that vicinity came to him, crying out, "Lord, Son of David, have mercy on me! My daughter is suffering terribly from demon-possession." Jesus did not answer a word. So his disciples came to him and urged him, "Send her away, for she keeps crying out after us." He answered, "I was sent only to the lost sheep of Israel." The woman came and knelt before him. "Lord, help me!" she said. He replied, "It is not right to take the children's bread and toss it to their dogs."

Was his purpose in that to demonstrate that "his divine mandate" applied only to persons of certain ethnicities?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-20T20:44:48.172Z · LW(p) · GW(p)

One, that's NOT using his powers.

Two, she persuaded him otherwise.

And three, I've seen it argued he knew she would offer a convincing argument and was just playing along. Not sure how solid that argument is, but ... it does sound plausible.

comment by lavalamp · 2012-12-12T17:04:03.946Z · LW(p) · GW(p)

OK, you've convinced me you're (just barely) a theist (and not really a deist as I understand the term).

To go back to the original quotation (http://lesswrong.com/lw/fv3/by_which_it_may_be_judged/80ut):

... Then all you have to do is settle the factual question of whether the short-tempered creator who ordered you to genocide your neighbors embodies this set of axioms. If not, well, you live in a weird hybrid universe where G-d intervened to give you some sense of morality but is weaker than whichever Cthulhu or amoral physical law made and rules your world. Sorry.

So you consider the "factual question" above to be meaningful? If so, presumably you give a low probability for living in the "weird hybrid universe"? How low?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T19:59:12.490Z · LW(p) · GW(p)

About the same as 2+2=3. The universe exists; gotta have a creator. God is logically necessary so ...

Replies from: lavalamp, Decius, MixedNuts, Peterdjones
comment by lavalamp · 2012-12-12T20:45:45.433Z · LW(p) · GW(p)

OK; my surprise was predicated on the hypothetical theist giving the sentence a non-negligible probability; I admit I didn't express this originally, so you'll have to take my word that it's what I meant. Thanks for humoring me :)

On another note, you do surprise me with "God is logically necessary"; although I know that's at least a common theist position, it's difficult for me to see how one can maintain that without redefining "god" into something unrecognizable.

Replies from: drnickbone, DaFranker, MugaSofer
comment by drnickbone · 2012-12-12T21:00:46.938Z · LW(p) · GW(p)

This "God is logically necessary" is an increasingly common move among philosophical theists, though virtually unheard of in the wider theistic community.

Of course it is frustratingly hard to argue with. No matter how much evidence an atheist tries to present (evolution, cosmology, plagues, holocausts, multiple religions, psychology of religious experience and self-deception, sociology, history of religions, critical studies of scriptures etc. etc.) the theist won't update an epistemic probability of 1 to anything less than 1, so is fundamentally immovable.

My guess is that this is precisely the point: the philosophical theist basically wants a position that he can defend "come what may" while still - at least superficially - playing the moves of the rationality game, and gaining a form of acceptance in philosophical circles.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T09:31:48.031Z · LW(p) · GW(p)

Who said I have a probability of 1? I said the same probability (roughly) as 2+2=3. That's not the same as 1. But how exactly are those things evidence against God (except maybe plagues, and even then it's trivially easy to justify them as necessary)? Some of them could be evidence against (or for) Christianity, but not God. I'm much less certain of Christianity than God, if it helps.

Replies from: drnickbone, drnickbone
comment by drnickbone · 2012-12-13T11:32:59.478Z · LW(p) · GW(p)

OK, so you are in some (small) doubt whether God is logically necessary or not, in that your epistemic probability of God's existence is 2+2-3, and not exactly 1 :-)

Or, put another way, you are able to imagine some sort of "world" in which God does not exist, but you are not totally sure whether that is a logically impossible world (you can imagine that it is logically possible after all)? Perhaps you think like this:

  1. God is either logically necessary or logically impossible
  2. I'm pretty sure (probability very close to 1) that God's existence is logically possible. So:
  3. I'm pretty sure (probability very close to 1) that God's existence is logically necessary.

To support 1, you might be working with a definition of God like St Anselm's (a being than which a greater cannot be conceived) or Alvin Plantinga's (a maximally great being, which has the property of maximal excellence - including omnipotence, omniscience and moral perfection - in every possible world). If you have a different sort of God-conception then that's fine; I'm just trying to clear up a misunderstanding here.
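
To make the structure explicit, here is a minimal formalization of that three-step argument in standard modal notation (just a restatement of the argument as written above, not anything MugaSofer has committed to):

$$
\begin{aligned}
(1)\quad & \Box G \lor \lnot\Diamond G && \text{(God is either logically necessary or logically impossible)}\\
(2)\quad & \Diamond G && \text{(God's existence is logically possible)}\\
(3)\quad & \therefore\ \Box G && \text{(from (1) and (2): (2) rules out } \lnot\Diamond G \text{, so the first disjunct of (1) holds)}
\end{aligned}
$$

On this reading, premise (1) is the load-bearing step, which is why the definitions discussed here are attempts to secure it.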

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T11:46:37.732Z · LW(p) · GW(p)

Yup. It's the Anselm one, in fact.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T12:44:36.841Z · LW(p) · GW(p)

Well, it's not like there's a pre-existing critique of that, or anything.

Replies from: drnickbone, Jayson_Virissimo, MugaSofer
comment by drnickbone · 2012-12-13T13:15:49.287Z · LW(p) · GW(p)

Yeah, there's only about 900 years or so of critique... But let's cut to the chase here.

For sake of argument, let's grant that there is some meaningful "greater than" order between beings (whether or not they exist) that there is a possible maximum to the order (rather than an unending chain of ever-greater beings), that parodies like Gaunilo's island fail for some unknown reason, that existence is a predicate, that there is no distinction between conceivability and logical possibility, that beings which exist are greater than beings which don't, and a few thousand other nitpicks.

There is still a problem that premises 1) and 2) don't follow from Anselm's definition. We can try to clarify the definition like this:

(*) G is a being than which a greater cannot be conceived iff for every possible world w where G exists, there is no possible world v and being H such that H in world v is greater than G in world w

No difficulty there... Anselm's "Fool" can coherently grasp the concept of such a being and imagine a world w where G exists, but can also consistently claim that the actual world a is not one of those worlds. Premise 1) fails.

Or we can try to clarify it like this:

(**) G is a being than which a greater cannot be conceived iff there are no possible worlds v, w and no being H such that H in world v is greater than G in world w

That is closer to Plantinga's definition of maximal greatness, and does establish Premise 1). But now Premise 2) is implausible, since it is not at all obvious that any possible being satisfies that definition. The Fool is still scratching his head trying to understand it...

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T14:50:18.645Z · LW(p) · GW(p)

I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.

It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defence against arguments that I have already encountered elsewhere just isn't worth it.

Replies from: drnickbone
comment by drnickbone · 2012-12-13T18:27:50.619Z · LW(p) · GW(p)

Sorry my comment wasn't intended to be political here.

I was simply pointing out that even if all the classical criticisms of St Anselm's OA argument are dropped, this argument still fails to establish that a "being than which a greater cannot be conceived" is a logically necessary being rather than a logically contingent being. The argument just can't work unless you convert it into something like Alvin Plantinga's version of the OA. Since you were favouring St A's version over Plantinga's version, I thought you might not be aware of that.

Clearly you are aware of it, so my post was not helpful, and you are not going to respond to this anyway on LW. However, if you wish to continue the point by email, feel free to take my username and add @ gmail.com.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T19:22:22.789Z · LW(p) · GW(p)

Fair enough. I was indeed aware of that criticism, incidentally.

comment by Jayson_Virissimo · 2012-12-13T12:55:36.028Z · LW(p) · GW(p)

Well, it's not like there's a pre-existing critique of that, or anything.

Or counters to those pre-existing critiques, etc...

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T12:57:35.365Z · LW(p) · GW(p)

The phil. community is pretty close to consensus, for once, on the OA.

Replies from: Jayson_Virissimo, MugaSofer
comment by Jayson_Virissimo · 2012-12-13T13:37:00.274Z · LW(p) · GW(p)

The phil. community is pretty close to consensus, for once, on the OA.

Yeah, as far as the "classical ontological arguments" are concerned, virtually no philosopher considers them sound. On the other hand, I am under the impression that the "modern modal ontological arguments" (Gödel, Plantinga, etc...) are not well known outside of philosophy of religion and so there couldn't be a consensus one way or the other (taking philosophy as a whole).

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T14:50:27.058Z · LW(p) · GW(p)

I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.

It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defense against arguments that I have already encountered elsewhere just isn't worth it.

comment by MugaSofer · 2012-12-13T14:25:33.956Z · LW(p) · GW(p)

Source?

comment by MugaSofer · 2012-12-13T14:57:21.902Z · LW(p) · GW(p)

I have read the critiques, and the critiques of the critiques, and so on and so forth. If there is some "magic bullet" argument I somehow haven't seen, LessWrong does not seem the place to look for it.

I will not respond to further attempts at argument. We all have political stakes in this; LessWrong is generally safe from mindkilled dialogue and I would like it to stay that way, even if it means accepting a consensus I believe to be inaccurate. Frankly, I have nothing to gain from fighting this point. So I'm not going to pay the cost of doing so.

comment by drnickbone · 2012-12-13T12:07:10.423Z · LW(p) · GW(p)

P.S. On a simple point of logic P(God exists) = P(God exists & Christianity is true) + P(God exists and Christianity is not true). Any evidence that reduces the first term also reduces the sum.

In any case, the example evidences I cited are general evidence against any sort of omni* being, because they are not the sorts of things we would expect to observe if there were such a being, but are very much what we'd expect to observe if there weren't.

Replies from: wedrifid, MugaSofer
comment by wedrifid · 2012-12-13T12:59:37.531Z · LW(p) · GW(p)

P.S. On a simple point of logic P(God exists) = P(God exists & Christianity is true) + P(God exists and Christianity is not true). Any evidence that reduces the first term also reduces the sum.

No it doesn't. Any evidence that reduces the first term by a greater degree than it increases the second term also reduces the sum. For example if God appeared before me and said "There is one God, Allah, and Mohammed is My prophet" it would raise p(God exists), lower p(God exists & Christianity is true) and significantly raise p(psychotic episode).
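
A tiny numeric sketch of this point, using made-up credences purely for illustration (these are not anyone's actual probabilities):

```python
# Illustrative (made-up) credences, not anyone's actual beliefs.
# Decomposition: P(God) = P(God & Christianity) + P(God & not-Christianity).
before = {"god_and_christianity": 0.30, "god_and_not_christianity": 0.20}
after = {"god_and_christianity": 0.10, "god_and_not_christianity": 0.45}

p_god_before = sum(before.values())  # 0.50
p_god_after = sum(after.values())    # 0.55

# The evidence lowered the first term but raised the second by more,
# so P(God) went up rather than down.
print(p_god_before, p_god_after)
```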

Replies from: army1987, drnickbone
comment by A1987dM (army1987) · 2012-12-13T15:40:00.279Z · LW(p) · GW(p)

lower p(God exists & Christianity is not true)

ITYM "lower p(God exists & Christianity is true)".

Replies from: wedrifid
comment by wedrifid · 2012-12-13T21:22:32.749Z · LW(p) · GW(p)

Thanks.

comment by drnickbone · 2012-12-13T18:14:54.499Z · LW(p) · GW(p)

Good point...

What I was getting at here is that evidence which reduces the probability of the Christian God but leaves probability of other concepts of God unchanged still reduces P(God). But you are correct, I didn't quite say that.

Replies from: wedrifid
comment by wedrifid · 2012-12-13T21:23:19.733Z · LW(p) · GW(p)

What I was getting at here is that evidence which reduces the probability of the Christian God but leaves probability of other concepts of God unchanged still reduces P(God).

Your point is a valid one!

comment by MugaSofer · 2012-12-13T14:46:06.833Z · LW(p) · GW(p)

In any case, the example evidences I cited are general evidence against any sort of omni* being, because they are not the sorts of things we would expect to observe if there were such a being, but are very much what we'd expect to observe if there weren't.

For example? Bearing in mind that I am well aware of all your "example evidences" and they do not appear confusing - although I have encountered other conceptions of God that would be so confused (for example, those who don't think God can have knowledge about the future - because free will - might be puzzled by His failure to intervene in holocausts.)

EDIT:

On a simple point of logic P(2+2=3) = P(2+2=3 & Christianity is true) + P(2+2=3 and Christianity is not true). Any evidence that reduces the first term also reduces the sum.

comment by DaFranker · 2012-12-12T21:36:25.028Z · LW(p) · GW(p)

it's difficult for me to see how one can maintain that without redefining "god" into something unrecognizable.

Despite looking for some way to do so, I've never found any. I presume you can't. Philosophical theists are happy to completely ignore this issue, and gaily go on to conflate this new "god" with their previous intuitive ideas of what "god" is, which is (from the outside view) obviously quite confused and a very bad way to think and to use words.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T09:28:08.664Z · LW(p) · GW(p)

Well, my idea of what "God" is would be an omnipotent, omnibenevolent creator. That doesn't jibe very well with notions like hell, at first glance, but there are theories as to why a benevolent God would torture people. My personal theory is too many inferential steps away to explain here, but suffice to say hell is ... toned down ... in most of them.

comment by MugaSofer · 2012-12-13T09:25:27.020Z · LW(p) · GW(p)

OK; my surprise was predicated on the hypothetical theist giving the sentence a non-negligible probability; I admit I didn't express this originally, so you'll have to take my word that it's what I meant. Thanks for humoring me :)

Oh, OK. I just meant it sounds like something I would say, probably in order to humour an atheist.

On another note, you do surprise me with "God is logically necessary"; although I know that's at least a common theist position, it's difficult for me to see how one can maintain that without redefining "god" into something unrecognizable.

The traditional method is the Ontological argument, not to be confused with two other arguments with that name; but it's generally considered rather ... suspect. However, it does get you a logically necessary, omnipotent, omnibenevolent God; I'm still somewhat confused as to whether it's actually valid.

comment by Decius · 2012-12-12T21:27:27.879Z · LW(p) · GW(p)

So it is trivially likely that the creator of the universe (God) embodies the set of axioms which describe morality? God is not good?

I handle that contradiction by pointing out that the entity which created the universe, the abstraction which is morality, and the entity which loves genocide are not necessarily the same.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T09:00:22.710Z · LW(p) · GW(p)

There certainly seems to be some sort of optimisation going on.

But I don't come to LW to debate theology. I'm not here to start arguments. Certainly not about an issue the community has already decided against me on.

Replies from: Decius
comment by Decius · 2012-12-14T00:21:56.873Z · LW(p) · GW(p)

The universe probably seems optimized for what it is; is that evidence of intelligence, or anthropic effect?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-14T11:41:56.499Z · LW(p) · GW(p)

I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.

It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defense against arguments that I have already encountered elsewhere just isn't worth it.

comment by MixedNuts · 2012-12-12T20:54:56.645Z · LW(p) · GW(p)

It is logically necessary that the cause of the universe be sapient?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T09:13:23.354Z · LW(p) · GW(p)

Define "sapient". An optimiser, certainly.

comment by Peterdjones · 2012-12-12T21:20:19.431Z · LW(p) · GW(p)

"Creation must have a creator" is about as good as "the-randomly-occuring-totailty randomly occurred".

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T09:12:38.056Z · LW(p) · GW(p)

OK, firstly, I'm not looking for a debate on theology here; I'm well aware of what the LW consensus thinks of theism.

Secondly, what the hell is that supposed to mean?

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T10:27:29.209Z · LW(p) · GW(p)

OK, firstly, I'm not looking for a debate on theology here

You seem to have started one.

Secondly, what the hell is that supposed to mean?

That one version of the First Cause argument begs the question by how it describes the universe.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T11:39:01.717Z · LW(p) · GW(p)

You seem to have started one.

I clarified a probability estimate. I certainly didn't intend an argument :(

That one version of the First Cause argument begs the question by how it describes the universe.

As ... created. Optimized? It's more an explanation, I guess.

comment by Sengachi · 2012-12-21T08:41:03.297Z · LW(p) · GW(p)

Deism is essentially the belief that an intelligent entity formed, and then generated all of the universe, sans other addendums, as opposed to the belief that a point mass formed and chaotically generated all of the universe.

Replies from: lavalamp
comment by lavalamp · 2012-12-21T15:01:26.775Z · LW(p) · GW(p)

Yes, but those two beliefs don't predict different resulting universes as far as I can tell. They're functionally equivalent, and I disbelieve the one that has to pay a complexity penalty.

comment by Decius · 2012-12-12T21:24:02.376Z · LW(p) · GW(p)

I typically don't accept the mainstream Judeo-Christian text as metaphorical truth, but if I did, I could settle that question in the negative: the Jehovah of those books is the force that forbade knowledge and life to mankind in Genesis, and therefore does not embody morality. He is also not the creator of morality nor of the universe, because that would lead to a contradiction.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-13T00:03:56.754Z · LW(p) · GW(p)

I dunno, dude could have good reasons to want knowledge of good and evil staying hush-hush. (Forbidding knowledge in general would indeed be super evil.) For example: You have intuitions telling you to eat when you're hungry and give food to others when they're hungry. And then you learn that the first intuition benefits you but the second makes you a good person. At this point it gets tempting to say "Screw being a good person, I'm going to stuff my face while others starve", whereas before you automatically shared fairly. You could have chosen to do that before (don't get on my case about free will), but it would have felt as weird as deciding to starve just so others could have seconds. Whereas now you're tempted all the time, which is a major bummer on the not-sinning front. I'm making this up, but it's a reasonable possibility.

Also, wasn't the tree of life totally allowed in the first place? We just screwed up and ate the forbidden fruit and got kicked out before we got around to it. You could say it's evil to forbid it later, but it's not that evil to let people die when an afterlife exists. Also there's an idea (at least one Christian believes this) that G-d can't share his power (like, polytheism would be a logical paradox). Eating from both trees would make humans equal to G-d (that part is canon), so dude is forced to prevent that.

You can still prove pretty easily that the guy is evil. For example, killing a kid (through disease, not instant transfer to the afterlife) to punish his father (while his mother has done nothing wrong). Or ordering genocides. (The killing part is cool because afterlife, the raping and enslaving part less so.) Or making a bunch of women infertile because it kinda looked like the head of the household was banging a married woman he thought was single. Or cursing all descendants of a guy who accidentally saw his father streaking, but being A-OK with raping your own father if there are no marriageable men available. Or... well, you get the picture.

Replies from: MugaSofer, Decius
comment by MugaSofer · 2012-12-14T13:48:21.653Z · LW(p) · GW(p)

The killing part is cool because afterlife

You sure? They believed in a gloomy underworld-style afterlife in those days.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-14T17:15:21.401Z · LW(p) · GW(p)

Well, it's not as bad as it sounds, anyway. It's forced relocation, not murder-murder.

How do you know what they believed? Modern Judaism is very vague about the afterlife - the declassified material just mumbles something to the effect of "after the Singularity hits, the righteous will be thawed and live in transhuman utopia", and the advanced manual can't decide if it likes reincarnation or not. Do we have sources from back when?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T15:10:52.319Z · LW(p) · GW(p)

Well, it's not as bad as it sounds, anyway. It's forced relocation, not murder-murder.

As I said, that's debatable; most humans historically believed that's what "death" consisted of, after all.

That's not to say it's wrong. Just debatable.

Modern Judaism is very vague about the afterlife - the declassified material just mumbles something to the effect of "after the Singularity hits, the righteous will be thawed and live in transhuman utopia", and the advanced manual can't decide if it likes reincarnation or not.

Eh?

Do we have sources from back when?

Google "sheol". It's usually translated as "hell" or "the grave" these days, to give the impression of continuity.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-15T15:44:40.851Z · LW(p) · GW(p)

There's something to be said against equating transhumanism with religious concepts, but the world to come is an exact parallel.

I don't know much about Kabbalah because I'm worried it'll fry my brain, but Gilgul is a thing.

I always interpreted sheol as just the literal grave, but apparently it refers to an actual world. Thanks.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T15:55:07.821Z · LW(p) · GW(p)

There's something to be said against equating transhumanism with religious concepts, but the world to come is an exact parallel.

Well, it is if you expect SAIs to be able to reconstruct anyone, anyway. But thanks for clarifying.

I don't know much about Kabbalah because I'm worried it'll fry my brain, but Gilgul is a thing.

Huh. You learn something new every day.

comment by Decius · 2012-12-14T00:51:40.074Z · LW(p) · GW(p)

No, the Tree of Life and the Tree of Knowledge (of Good and Evil) were both forbidden.

My position is that suppressing knowledge of any kind is Evil.

The contradiction is that the creator of the universe should not have created anything which it doesn't want. If nothing else, can't the creator of the universe hex-edit it from his metauniverse position and remove the tree of knowledge? How is that consistent with morality?

Replies from: MixedNuts, MugaSofer
comment by MixedNuts · 2012-12-14T02:38:35.241Z · LW(p) · GW(p)

Genesis 2:16-2:17 looks pretty clear to me: every tree which isn't the tree of knowledge is okay. Genesis 3:22 can be interpreted as either referring to a previous life tree ban or establishing one.

If you accept the next gen fic as canon, Revelation 22:14 says that the tree will be allowed at the end, which is evidence it was just a tempban after the fall.

Where do you get that the tree of life was off-limits?

My position is that suppressing knowledge of any kind is Evil.

Sheesh. I'll actively suppress knowledge of your plans against the local dictator. (Isn't devil snake guy analogous?) I'll actively suppress knowledge of that weird fantasy you keep having where you murder everyone and have sex with an echidna, because you're allowed privacy.

The contradiction is that the creator of the universe should not have created anything which it doesn't want.

Standard reply is that free will outweighs everything else. You have to give people the option to be evil.

Replies from: BerryPick6, Decius
comment by BerryPick6 · 2012-12-14T02:43:43.654Z · LW(p) · GW(p)

Standard reply is that free will outweighs everything else. You have to give people the option to be evil.

There is no reason an omnipotent God couldn't have created creatures with free will that still always choose to be good. See Mackie, 1955.

Replies from: MixedNuts, Decius, MugaSofer
comment by MixedNuts · 2012-12-14T02:50:32.133Z · LW(p) · GW(p)

Yeah, or at least put the option to be evil somewhere other than right in the middle of the garden with a "Do not eat, or else!" sign on it for a species you created vulnerable to reverse psychology.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-14T03:21:14.204Z · LW(p) · GW(p)

My understanding is that the vulnerability to reverse psychology was one of the consequences of eating the fruit.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-14T12:07:50.840Z · LW(p) · GW(p)

That's an interesting one. I hadn't heard that.

comment by Decius · 2012-12-15T00:14:54.590Z · LW(p) · GW(p)

There is a trivial argument against an omniscient, omnipotent, benevolent god. Why would a god with up to two of those three characteristics make creatures with free will that still always choose to be good?

comment by MugaSofer · 2012-12-14T12:06:55.079Z · LW(p) · GW(p)

There is no reason an omnipotent God couldn't have created creatures with free will that still always choose to be good.

Well, that depends on your understanding of "free will", doesn't it? Most people here would agree with you, but most people making that particular argument wouldn't.

Replies from: drnickbone, BerryPick6
comment by drnickbone · 2012-12-14T22:41:59.743Z · LW(p) · GW(p)

The most important issue is that however the theist defines "free will", he has the burden of showing that free will by that very definition is supremely valuable: valuable enough to outweigh the great evil that humans (and perhaps other creatures) cause by abusing it, and so valuable that God could not possibly create a better world without it.

This to my mind is the biggest problem with the Free Will defence in all its forms. It seems pretty clear that free will by some definition is worth having; it also seems pretty clear that there are abstruse definitions of free will such that God cannot both create it and ensure it is used only for good. But these definitions don't coincide.

One focal issue is whether God himself has free will, and has it in all the senses that are worth having. Most theist philosophers would say that God does have every valuable form of free will, but also that he is not logically free : there is no possible world in which God performs a morally evil act. But a little reflection shows there are infinitely many possible people who are similarly free but not logically free (so they also have exactly the same valuable free will that God does). And if God creates a world containing such people, and only such people, he necessarily ensures the existence of (valuable) free will but without any moral evil. So why doesn't he do that?

See Quentin Smith for more on this.

You may be aware of Smith's argument, and may be able to point me at an article where Plantinga has acknowledged and refuted it. If so, please do so.

Replies from: Legolan, MugaSofer
comment by Legolan · 2012-12-15T00:00:22.785Z · LW(p) · GW(p)

I think this is an excellent summary. Having read John L. Mackie's free will argument and Plantinga's transworld depravity free will defense, I think that a theodicy based on free will won't be successful. Trying to define free will such that God can't ensure using his foreknowledge that everyone will act in a morally good way leads to some very odd definitions of free will that don't seem valuable at all, I think.

comment by MugaSofer · 2012-12-15T15:47:14.106Z · LW(p) · GW(p)

The most important issue is that however the theist defines "free will", he has the burden of showing that free will by that very definition is supremely valuable: valuable enough to outweigh the great evil that humans (and perhaps other creatures) cause by abusing it, and so valuable that God could not possibly create a better world without it.

This to my mind is the biggest problem with the Free Will defence in all its forms. It seems pretty clear that free will by some definition is worth having; it also seems pretty clear that there are abstruse definitions of free will such that God cannot both create it and ensure it is used only for good. But these definitions don't coincide.

Well sure. But that's a separate argument, isn't it?

My point is that anyone making this argument isn't going to see Berry's argument as valid, for the same reason they are making this (flawed for other reasons) argument in the first place.

Mind you, it's still an accurate statement and a useful observation in this context.

comment by BerryPick6 · 2012-12-14T19:31:51.789Z · LW(p) · GW(p)

Most people here would agree with you, but most people making that particular argument wouldn't.

It was my understanding that Alvin Plantinga mostly agreed that Mackie had him pinned with that response, so I'm calling you on this one.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T15:23:14.817Z · LW(p) · GW(p)

Most people making that argument, in my experience, believe that for free will to be truly "free" God cannot have decided (or even predicted, for some people) their actions in advance. Of course, these people are confused about the nature of free will.

If you could show me a link to Plantinga conceding, that might help clear this up, but I'm guessing Mackie's argument (or something else) dissolved his confusion on the topic. If we had access to someone who actually believes this, we could test it ... anyone want to trawl through some theist corner of the web?

Unless I'm misunderstanding your claim, of course; I don't believe I've actually read Mackie's work. I'm going to go see if I can find it free online now.

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-15T15:35:27.433Z · LW(p) · GW(p)

Maybe I have gotten mixed up and it was Mackie who conceded to Plantinga? Unfortunately, I can't really check at the moment. Besides, I don't really disagree with what you said about most people who are making that particular argument.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T16:06:29.001Z · LW(p) · GW(p)

I don't really disagree with what you said about most people who are making that particular argument.

Fair enough.

Maybe I have gotten mixed up and it was Mackie who conceded to Plantinga? Unfortunately, I can't really check at the moment

Well, having looked into it, it appears that Plantinga wasn't a compatibilist, while Mackie was. Their respective arguments assume their favored version of free will. Wikipedia thinks that Plantinga's arguments are generally agreed to be valid *if* you grant incompatibilism, which is a big if; the LW consensus seems to be compatibilist for obvious reasons. I can't find anything on either of them conceding, I'm afraid.

comment by Decius · 2012-12-15T00:11:04.316Z · LW(p) · GW(p)

No, if I give the creator free will, he doesn't have to give anyone he creates the option. He chose to create the option or illusion, else he didn't exercise free will.

It seems like you require a reason to suppress knowledge; are you choosing the lesser of two evils when you do so?

Replies from: MixedNuts
comment by MixedNuts · 2012-12-15T00:22:19.887Z · LW(p) · GW(p)

I meant free will as a moral concern. Nobody created G-d, so he doesn't necessarily have free will, though I think he does. He is, however, compelled to act morally (lest he vanish in a puff of logic). And morality requires giving people you create free will, much more than it requires preventing evil. (Don't ask me why.)

It seems like you require a reason to suppress knowledge; are you choosing the lesser of two evils when you do so?

Sure, I'm not Kant. And I'm saying G-d did too. People being able but not allowed to get knowledge suppresses knowledge, which is a little evil; people having knowledge makes them vulnerable to temptation, which is worse; people being unable to get knowledge deprives them of free will and also suppresses knowledge, which is even worse; not creating people in the first place is either the worst or impossible for some reason.

Replies from: Decius
comment by Decius · 2012-12-15T00:48:49.778Z · LW(p) · GW(p)

I disagree with your premise that the actions taken by the entity which preceded all others are defined to be moral. Do you have any basis for that claim?

Replies from: MixedNuts
comment by MixedNuts · 2012-12-15T02:15:38.323Z · LW(p) · GW(p)

It says so in the book? (Pick any psalm.) I mean if we're going to disregard that claim we might as well disregard the claims about a bearded sky dude telling people to eat fruit.

Using your phrasing, I'm defining G-d's actions as moral (whether this defines G-d or morality I leave up to you). The Bible claims that the first entity was G-d. (Okay, it doesn't really, but it's fanon.) It hardly seems fair to discount this entirely, when considering whether an apparently evil choice is due to evilness or to knowing more than you do about morality.

Replies from: Decius
comment by Decius · 2012-12-15T02:32:31.316Z · LW(p) · GW(p)

Suppose that the writer of the book isn't moral. What would the text of the book say about the morality of the writer?

Or we could assume that the writer of the book takes only moral actions, and from there try to construct which actions are moral. Clearly, one possibility is that it is moral to blatantly lie when writing the book, and that the genocide, torture, and mass murder didn't happen. That brings us back to the beginning again.

The other possibility is too horrible for me to contemplate: That torture and murder are objectively the most moral things to do in noncontrived circumstances.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-16T23:22:41.680Z · LW(p) · GW(p)

The other possibility is too horrible for me to contemplate: That torture and murder are objectively the most moral things to do in noncontrived circumstances.

Taboo "contrived".

Replies from: Decius, wedrifid, army1987
comment by Decius · 2012-12-17T20:08:51.195Z · LW(p) · GW(p)

No. But I will specify the definition from Merriam-Webster and elaborate slightly:
Contrive: To bring about with difficulty.
Noncontrived circumstances are any circumstances that are not difficult to encounter.

For example, the credible threat of a gigantic number of people being tortured to death if I don't torture one person to death is a contrived circumstance. 0% of exemplified situations requiring moral judgement are contrived.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-17T21:46:27.809Z · LW(p) · GW(p)

Taboo "difficult".

Replies from: Decius
comment by Decius · 2012-12-17T22:27:08.510Z · LW(p) · GW(p)

Torture and murder are not the most moral things to do in 1.00000 00000 00000*10^2% of exemplified situations which require moral judgement.

Are you going to taboo "torture" and "murder" now?

Replies from: Eugine_Nier, MugaSofer
comment by Eugine_Nier · 2012-12-18T04:23:04.603Z · LW(p) · GW(p)

Torture and murder are not the most moral things to do in 1.00000 00000 00000*10^2% of exemplified situations which require moral judgement.

Well, that's clearly false. Your chances of having to kill a member of the secret police of an oppressive state are much more than 1/10^16, to say nothing of less clear cut examples.

Replies from: Decius
comment by Decius · 2012-12-18T05:23:15.444Z · LW(p) · GW(p)

Do the actions of the secret police of an oppressive state constitute consent to violent methods? If so, they cannot be murdered in the moral sense, because they are combatants. If not, then it is immoral to kill them, even to prevent third parties from executing immoral acts.

You don't get much less clear cut than asking questions about whether killing a combatant constitutes murder.

Replies from: army1987, wedrifid, ChristianKl, MugaSofer
comment by A1987dM (army1987) · 2012-12-18T11:01:52.775Z · LW(p) · GW(p)

Well, if you define “murder” as ‘killing someone you shouldn't’ then you should never murder anyone -- but that'd be a tautology and the interesting question would be how often killing someone would not be murder.

Replies from: Decius
comment by Decius · 2012-12-19T00:55:28.083Z · LW(p) · GW(p)

"Murder" is roughly shorthand for "intentional nonconsensual interaction which results in the intended outcome of the death of a sentient."

If the secret police break down my door, nothing done to them is nonconsensual.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-19T05:47:59.765Z · LW(p) · GW(p)

If the secret police break down my door,

Any half-way competent secret police wouldn't need to.

nothing done to them is nonconsensual.

You seem to have a very non-standard definition of "nonconsensual".

Replies from: Decius
comment by Decius · 2012-12-19T07:28:45.680Z · LW(p) · GW(p)

I meant in the non-transitive sense.

You seem to have a very non-standard definition of "nonconsensual".

Being a combatant constitutes consent to be involved in the war. How is that non-standard?

Replies from: nshepperd
comment by nshepperd · 2012-12-19T08:38:27.629Z · LW(p) · GW(p)

Being involved in the war isn't equivalent to being killed. I find it quite conceivable that I might want to involve myself in the war against, say, the babyeaters, without consenting to being killed by the babyeaters. I mean, ideally the war would go like this: we attack, babyeaters roll over and die, end.

I'm not really sure what is the use of a definition of "consent" whereby involving myself in war causes me to automatically "consent" to being shot at. The whole point of fighting is that you think you ought to win.

Replies from: Nornagest, wedrifid, Decius
comment by Nornagest · 2012-12-19T09:50:47.948Z · LW(p) · GW(p)

Well, I think consent sort of breaks down as a concept when you start considering all the situations where societies decide to get violent (or for that matter to involve themselves in sexuality; I'd rather not cite examples for fear of inciting color politics). So I'm not sure I can endorse the general form of this argument.

In the specific case of warfare, though, the formalization of war that most modern governments have decided to bind themselves by does include consent on the part of combatants, in the form of the oath of enlistment (or of office, for officers). Here's the current version used by the US Army:

"I, [name], do solemnly swear (or affirm) that I will support and defend the Constitution of the United States against all enemies, foreign and domestic; that I will bear true faith and allegiance to the same; and that I will obey the orders of the President of the United States and the orders of the officers appointed over me, according to regulations and the Uniform Code of Military Justice. So help me God."

Doesn't get much more explicit than that, and it certainly doesn't include an expectation of winning. Of course, a lot of governments still conscript their soldiers, and consent under that kind of duress is, to say the least, questionable; you can still justify it, but the most obvious ways of doing so require some social contract theory that I don't think I endorse.

Replies from: wedrifid
comment by wedrifid · 2012-12-19T10:00:13.638Z · LW(p) · GW(p)

and consent under that kind of duress is, to say the least, questionable

Indeed. Where the 'question' takes the form "Is this consent?" and the answer is "No, just no."

Replies from: Decius
comment by Decius · 2012-12-19T19:51:11.964Z · LW(p) · GW(p)

Duress is a problematic issue: conscription without the social contract theory supporting it is immoral. So are most government policies, and I don't grok the social contract theory well enough to justify government in general.

comment by wedrifid · 2012-12-19T10:03:45.330Z · LW(p) · GW(p)

I'm not really sure what is the use of a definition of "consent" whereby involving myself in war causes me to automatically "consent" to being shot at. The whole point of fighting is that you think you ought to win.

At the same time it should be obvious that there is something---pick the most appropriate word---that you have done by trying to kill something that changes the moral implications of the intended victim deciding to kill you first. This is the thing that we can clearly see that Decius is referring to.

The 'consent' implied by your action here (and considered important to Decius) is obviously not directly consent to be shot at but rather consent to involvement in violent interactions with a relevant individual or group. For some reason of his own Decius has decided to grant you power such that a specific kind of consent is required from you before he kills you. The kind of consent required is up to Decius and his morals and the fact that you would not grant a different kind of consent ('consent to be killed') is not relevant to him.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-19T19:19:06.576Z · LW(p) · GW(p)

At the same time it should be obvious that there is something---pick the most appropriate word---that you have done by trying to kill something that changes the moral implications of the intended victim deciding to kill you first.

"violence" perhaps or "aggression" or "acts of hostility".

Not "consent". :-)

comment by Decius · 2012-12-19T19:53:04.634Z · LW(p) · GW(p)

Did all of the participants in the violent conflict voluntarily enter it? If so, then they have consented to the outcome.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-21T03:35:08.679Z · LW(p) · GW(p)

Did all of the participants in the violent conflict voluntarily enter it?

Generally not, actually.

Replies from: Decius
comment by Decius · 2012-12-21T05:59:28.579Z · LW(p) · GW(p)

Those who engage in an action in which not all participants enter of their own will are immoral. Yes, war is generally immoral in the modern era.

Replies from: nshepperd, army1987
comment by nshepperd · 2012-12-22T02:57:23.269Z · LW(p) · GW(p)

Those who engage in an action in which not all participants enter of their own will are immoral.

A theory of morality that looks nice on paper but is completely wrong. In a war between Good and Evil, Good should win. It doesn't matter if Evil consented.

Replies from: Decius
comment by Decius · 2012-12-22T03:40:16.276Z · LW(p) · GW(p)

You're following narrative logic there. Also, using the definitions given, anyone who unilaterally starts a war is Evil, and anyone who starts a war consents to it. It is logically impossible for Good to defeat Evil in a contest that Evil did not willingly choose to engage in.

Replies from: Eugine_Nier, nshepperd
comment by Eugine_Nier · 2012-12-22T07:35:56.338Z · LW(p) · GW(p)

What if Evil is actively engaged in say torturing others?

Replies from: Decius
comment by Decius · 2012-12-22T07:41:51.741Z · LW(p) · GW(p)

What if Evil is actively engaged in say torturing others? [Without the consent of the tortured]

Acts like that constitute acts of the 'war' between Good and Evil that you are so eager to have. Have at them.

comment by nshepperd · 2012-12-22T05:46:35.637Z · LW(p) · GW(p)

Right, just like it's logically impossible for Good to declare war against Evil to prevent or stop Evil from doing bad things that aren't war.

Replies from: Decius
comment by Decius · 2012-12-22T07:27:53.678Z · LW(p) · GW(p)

Exactly. You can't be Good and do immoral things. Also, abstractions don't take actions.

comment by A1987dM (army1987) · 2012-12-22T02:25:57.395Z · LW(p) · GW(p)

Those who engage in an action in which not all participants enter of their own will are immoral.

Er, that kind-of includes asking a stranger for the time.

Replies from: Decius
comment by Decius · 2012-12-22T02:39:49.785Z · LW(p) · GW(p)

Er, that kind-of includes asking a stranger for the time.

Now we enter the realm of the social contract and implied consent.

comment by wedrifid · 2012-12-18T07:39:47.926Z · LW(p) · GW(p)

Decius, you may also be interested in the closely related post Ethical Inhibitions. It describes actions like, say, blatant murder, that could in principle (i.e. in contrived circumstances) be actually the consequentialist right thing to do, but that nevertheless you would never do anyway as a human, since you are more likely to be biased and self-deceiving than to be correctly deciding murdering was right.

Replies from: Decius
comment by Decius · 2012-12-19T01:27:37.749Z · LW(p) · GW(p)

Correctly deciding that 2+2=3 is equally as likely as correctly deciding murdering was right.

Replies from: wedrifid
comment by wedrifid · 2012-12-19T03:06:40.167Z · LW(p) · GW(p)

Correctly deciding that 2+2=3 is equally as likely as correctly deciding murdering was right.

Ok, you're just wrong about that.

Replies from: Decius
comment by Decius · 2012-12-19T07:30:28.393Z · LW(p) · GW(p)

In past trials, each outcome has occurred the same number of times.

Replies from: wedrifid
comment by wedrifid · 2012-12-19T07:53:58.676Z · LW(p) · GW(p)

In past trials, each outcome has occurred the same number of times.

This could be true and you'd still be totally wrong about the equal likelihood.

comment by ChristianKl · 2012-12-19T12:37:47.540Z · LW(p) · GW(p)

Murder is unlawful killing. If you are a citizen of the country, you are within its laws. If the oppressive country has a law against killing members of the secret police, then it's murder.

Replies from: Decius
comment by Decius · 2012-12-19T20:12:13.369Z · LW(p) · GW(p)

Murder (law) and murder (moral) are two different things; I was exclusively referring to murder (moral).

I will clarify: There can be cases where murder (law) is either not immoral or morally required. There are also cases where an act which is murder (moral) is not illegal.

My original point is that many of the actions of Jehovah constitute murder (moral).

Replies from: Eugine_Nier, BerryPick6
comment by Eugine_Nier · 2012-12-20T03:52:59.396Z · LW(p) · GW(p)

What's your definition of murder (moral)?

Replies from: Decius
comment by Decius · 2012-12-20T05:39:08.580Z · LW(p) · GW(p)

Roughly "intentional nonconsensual interaction which results in the intended outcome of the death of a sentient".

To define how I use 'nonconsensual', I need to describe an entire ethics. Rough summary: an action is immoral if and only if it is performed without the consent of one or more sentient participant(s). (Consent need not be explicit in all cases, especially trivial and critical cases; wearing a military uniform identifies an individual as a soldier, and constitutes clearly communicating consent to be involved in all military actions initiated by enemy soldiers.)

comment by BerryPick6 · 2012-12-19T20:36:19.389Z · LW(p) · GW(p)

This may be the word for which I run into definitional disputes most often. I'm glad you summed it up so well.

comment by MugaSofer · 2012-12-18T15:17:56.241Z · LW(p) · GW(p)

Do the actions of the secret police of an oppressive state constitute consent to violent methods?

I'm pretty sure they would say no, if asked. Just like, y'know, a non-secret policeman (the line is blurry.)

Replies from: Decius
comment by Decius · 2012-12-19T00:43:07.255Z · LW(p) · GW(p)

Well, if I was wondering if a uniformed soldier was a combatant, I wouldn't ask them. Why would I ask the secret police if they are active participants in violence?

Replies from: Eugine_Nier, MugaSofer
comment by Eugine_Nier · 2012-12-19T05:46:47.933Z · LW(p) · GW(p)

So cop-killing doesn't count as murder?

Replies from: Decius
comment by Decius · 2012-12-19T07:26:50.359Z · LW(p) · GW(p)

Murder is not a superset of cop-killing.

comment by MugaSofer · 2012-12-19T12:09:46.505Z · LW(p) · GW(p)

You said "consent". That usually means "permission". It's a nonstandard usage of the word, is all. But the point about the boundary between a cop and a soldier is actually a criticism, if not a huge one.

Replies from: Decius
comment by Decius · 2012-12-19T20:08:40.177Z · LW(p) · GW(p)

Sometimes actions constitute consent, especially in particularly minor or particularly major cases.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T21:13:56.780Z · LW(p) · GW(p)

Again, shooting someone is not giving them permission to shoot you. That's not to say it would be wrong to shoot back, necessarily.

Are you intending to answer my criticism about the cop and the soldier?

Replies from: Decius
comment by Decius · 2012-12-20T00:38:32.072Z · LW(p) · GW(p)

I don't see your criticism about the cop and the soldier; is it in a fork that I'm not following, or did I overlook it?

Assuming that the social contract requires criminals to subject themselves to law enforcement:

A member of society consents to be judged according to the laws of that society and treated appropriately. The criminal who violates their contract has already consented to the consequences of default, and that consent cannot be withdrawn. Secret police and soldiers act outside the law enforcement portion of the social contract.

Does that cover your criticism?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-20T03:54:48.680Z · LW(p) · GW(p)

Secret police and soldiers act outside the law enforcement portion of the social contract.

Why?

Replies from: Decius
comment by Decius · 2012-12-20T05:28:54.890Z · LW(p) · GW(p)

There's a little bit of 'because secret police don't officially exist' and a little bit of 'because soldiers aren't police'. Also, common language definitions fail pretty hard when strictly interpreting an implied social contract.

There are cases where someone who is a soldier in one context is police in another, and probably some cases where a member of the unofficial police is also a member of the police.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-21T03:39:46.652Z · LW(p) · GW(p)

There's a little bit of 'because secret police don't officially exist'

Well, they generally do actually. They're called 'secret' because people don't know precisely what they're up to, or who is a member.

You can replace them with regular police in my hypothetical if that helps.

comment by MugaSofer · 2012-12-18T14:58:39.707Z · LW(p) · GW(p)

A singleminded agent with my resources could place people in such a situation. I'm guessing the same is true of you. Kidnapping isn't hard, especially if you aren't too worried about eventually being caught, and murder is easy as long as the victim can't resist. "Difficult" is usually defined with regards to the speaker, and most people could arrange such a sadistic choice if they really wanted. They might be caught, but that's not really the point.

If you mean that the odds of such a thing actually happening to you are low, "difficult" was probably the wrong choice of words; it certainly confused me. If I was uncertain what you meant by "torture" or "murder" I would certainly ask you for a definition, incidentally.

(Also, refusal to taboo words is considered logically rude 'round these parts. Just FYI.)

Replies from: Decius
comment by Decius · 2012-12-19T00:53:17.772Z · LW(p) · GW(p)

Consider the contrived situation usually used to show that consequentialism is flawed: There are ten patients in a hospital, each suffering from the failure of a different organ; they will die in a short time unless treated with an organ transplant, and if they receive a transplant then they will live a standard quality life. There is a healthy person who is a compatible match for all of those patients. He will live one standard quality life if left alone. Is it moral to refuse to forcibly and fatally harvest his organs to provide them to the larger number of patients?

If I say that ten people dying is not a worse outcome than one person being killed by my hand, do you still think you can place someone with my values in a situation where they would believe that torture or murder is moral? Do you believe that consequentialism is objectively the accurate moral system?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T12:04:48.062Z · LW(p) · GW(p)

Considering that dilemma becomes a lot easier if, say, I'm diverting a train through the one and away from the ten, I'm guessing there are other taboos there than just murder. Bodily integrity, perhaps? There IS something squicky about the notion of having surgery performed on you without your consent.

Anyway, I was under the impression that you admitted that the correct reaction to a "sadistic choice" (kill him or I'll kill ten others) was murder; you merely claimed this was "difficult to encounter" and thus less worrying than the prospect that murder might be moral in day-to-day life. Which I agree with, I think.

Replies from: Decius
comment by Decius · 2012-12-19T20:06:54.891Z · LW(p) · GW(p)

I think diverting the train is a much more complicated situation that hinges on factors normally omitted in the description and considered irrelevant by most. It could go any of three ways, depending on factors irrelevant to the number of deaths. (In many cases the murderous action has already been taken, and the decision is whether one or ten people are murdered by the murderer, and the action or inaction is taken with only the decider, the train, and the murderer as participants)

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T21:09:00.513Z · LW(p) · GW(p)

Let's stipulate two scenarios, one in which the quandary is the result of a supervillain and one in which it was sheer bad luck.

Replies from: Decius
comment by Decius · 2012-12-20T01:20:10.032Z · LW(p) · GW(p)

Do I own the track, or am I designated by the person with ownership as having the authority to determine arbitrarily in what manner the junction may be operated? Do I have any prior agreement with regards to the operation of the junction, or any prior responsibility to protect lives at all costs?

Absent prior agreements, if I have that authority to operate the track, it is neutral whether I choose to use it or not. If I were to own and control a hospital, I could arbitrarily refuse to support consensual fatal organ donations on my premises.

If I have a prior agreement to save as many lives as possible at all costs, I must switch to follow that obligation, even if it means violating property rights. (Such an obligation also means that I have to assist with the forcible harvesting of organs).

If I don't have the right to operate the junction according to my own arbitrary choice, I would be committing a small injustice on the owner of the junction by operating it, and the direct consequences of that action would also be mine to bear; if the one person who would be killed by my action does not agree to be, I would be murdering him in the moral sense, as opposed to allowing others to be killed.

I suspect that my actual response to these contrived situations would be inconsistent; I would allow disease to kill ten people, but given a single event which would kill ten people without my trivial action, I would redirect it to kill one instead (assuming no other choice existed). I prefer to believe that is a fault in my implementation of morality.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-20T20:09:19.472Z · LW(p) · GW(p)

Do I own the track, or am I designated by the person with ownership as having the authority to determine arbitrarily in what manner the junction may be operated? Do I have any prior agreement with regards to the operation of the junction, or any prior responsibility to protect lives at all costs?

Nope. Oh, and the tracks join up after the people; you won't be sending a train careening off on the wrong track to crash into who knows what.

Absent prior agreements, if I have that authority to operate the track, it is neutral whether I choose to use it or not. If I were to own and control a hospital, I could arbitrarily refuse to support consensual fatal organ donations on my premises.

I think you may be mistaking legality for morality.

If I have a prior agreement to save as many lives as possible at all costs, I must switch to follow that obligation, even if it means violating property rights. (Such an obligation also means that I have to assist with the forcible harvesting of organs).

I'm not asking what you would have to do, I'm asking what you should do. Since prior agreements can mess with that, let's say the tracks are public property and anyone can change them, and you will not be punished for letting the people die.

If I don't have the right to operate the junction according to my own arbitrary choice, I would be committing a small injustice on the owner of the junction by operating it, and the direct consequences of that action would also be mine to bear; if the one person who would be killed by my action does not agree to be, I would be murdering him in the moral sense, as opposed to allowing others to be killed.

Murder has many definitions. Even if it would be "murder", which is the moral choice: to kill one or to let ten die?

I suspect that my actual response to these contrived situations would be inconsistent; I would allow disease to kill ten people, but given a single event which would kill ten people without my trivial action, I would redirect it to kill one instead (assuming no other choice existed). I prefer to believe that is a fault in my implementation of morality.

Could be. We would have to figure out why those seem different. But which of those choices is wrong? Are you saying that your analysis of the surgery leads you to change your mind about the train?

Replies from: Decius
comment by Decius · 2012-12-20T23:50:22.590Z · LW(p) · GW(p)

The tracks are public property; walking on the tracks is then a known hazard. Switching the tracks is ethically neutral.

The authority I was referencing was moral, not legal.

I was actually saying that my actions in some contrived circumstances would differ from what I believe is moral. I am actually comfortable with that. I'm not sure if I would be comfortable with an AI which either always followed a strict morality, nor with one that sometimes deviated.

Replies from: None, MugaSofer
comment by [deleted] · 2012-12-20T23:54:34.658Z · LW(p) · GW(p)

Blaming the individuals for walking on the tracks is simply assuming the not-least convenient world though. What if they were all tied up and placed upon the tracks by some evil individual (who is neither 1 of the people on the tracks nor the 1 you can push onto the tracks)?

Replies from: Decius
comment by Decius · 2012-12-21T01:25:38.334Z · LW(p) · GW(p)

In retrospect, the known hazard is irrelevant.

comment by MugaSofer · 2012-12-21T08:03:18.186Z · LW(p) · GW(p)

You still haven't answered what the correct choice is if a villain put them there.

As for the rest ... bloody hell, mate. Have you got some complicated defense of those positions or are they intuitions? I'm guessing they're not intuitions.

Replies from: Decius
comment by Decius · 2012-12-21T16:52:01.162Z · LW(p) · GW(p)

I don't think it would be relevant to the choice made in isolation what the prior events were.

Moral authority is only a little bit complicated to my view, but it incorporates autonomy and property and overlaps with the very complicated and incomplete social contract theory, and I think it requires more work before it can be codified into something that can be followed.

Frankly, I've tried to make sure that the conclusions follow reasonably from the premise, (all people are metaphysically equal) but it falls outside my ability to implement logic, and I suspect that it falls outside the purview of mathematics in any case. There are enough large jumps that I suspect I have more premises than I can explicate.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-21T22:49:32.857Z · LW(p) · GW(p)

Wait, would you say that while you are not obligated to save them, it would be better than letting them die?

Replies from: Decius
comment by Decius · 2012-12-22T02:43:17.844Z · LW(p) · GW(p)

I decline to make value judgements beyond obligatory/permissible/forbidden, unless you can provide the necessary and sufficient conditions for one result to be better than another.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-22T18:07:28.502Z · LW(p) · GW(p)

I ask because I checked and the standard response is that it would not be obligatory to save them, but it would be good.

Replies from: Decius
comment by Decius · 2012-12-22T21:47:50.257Z · LW(p) · GW(p)

I don't have a general model for why actions are suberogatory or supererogatory.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-23T15:09:39.152Z · LW(p) · GW(p)

I think a good way to think of this result is that leaving the switch on "kill ten people" nets 0 points, moving it from "ten" to "one" nets, say, 9 points, and moving it from "one" to "ten" loses you 9 points.
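
(For concreteness, here is a minimal sketch of that scoring rule in Python; the function name and the exact point values are just illustrative, not a worked-out theory:)

    # Toy scoring rule: score outcomes relative to leaving the switch alone.
    # Doing nothing scores 0; redirecting harm from ten people to one scores +9;
    # redirecting it from one to ten scores -9.
    def switch_score(deaths_if_untouched, deaths_after_choice):
        if deaths_after_choice == deaths_if_untouched:
            return 0  # no action taken
        return deaths_if_untouched - deaths_after_choice

    assert switch_score(10, 10) == 0   # leave it on "kill ten people"
    assert switch_score(10, 1) == 9    # move it from "ten" to "one"
    assert switch_score(1, 10) == -9   # move it from "one" to "ten"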

I have no model that accounts for the surgery problem without crude patches like "violates bodily integrity = always bad." Humans in general seem to have difficulties with "sacred values"; how many dollars is it worth to save one life? How many hours (years?) of torture?

Replies from: Decius
comment by Decius · 2012-12-23T16:35:29.881Z · LW(p) · GW(p)

I think that "violates bodily autonomy"=bad is a better core rule than "increases QALYs"=good.

Replies from: ArisKatsaris, None, MixedNuts, MugaSofer
comment by ArisKatsaris · 2012-12-23T18:09:03.589Z · LW(p) · GW(p)

I think I'm mostly a rule utilitarian, so I certainly understand the worth of rules...

... but that kind of rule really leaves ambiguous how to define any possible exceptions. Let's say that you see a baby about to start chewing on broken glass -- the vast majority would say that it's obligatory to stop it from doing so, and of the remainder most would say that it's at least permissible to stop the baby from chewing on broken glass. But if we set up "violates bodily autonomy"=bad as an absolute rule, we are actually morally forbidden to physically prevent the baby from doing so.

So what are the exceptions? If it's an issue of competence (the adult has a far better understanding of what chewing glass would do, and therefore has the right to ignore the baby's rights to bodily autonomy), then a super-intelligent AI would have the same relationship in comparison to us...

Replies from: Decius
comment by Decius · 2012-12-24T03:38:08.986Z · LW(p) · GW(p)

Does the theoretical baby have the faculties to meaningfully enter an agreement, or to meaningfully consent to be stopped from doing harmful things? If not, then the baby is not an active moral agent, and is not considered sentient under the strict interpretation. Once the baby becomes an active moral agent, they have the right to choose for themselves if they wish to chew broken glass.

Under the loose interpretation, the childcare contract obligates the caretaker to protect, educate and provide for the child and grants the caretaker permission from the child to do anything required to fulfill that role.

What general rules do you follow that require or permit stopping a baby from chewing on broken glass, but prohibit forcibly stopping adults from engaging in unhealthy habits?

comment by [deleted] · 2012-12-23T22:19:29.051Z · LW(p) · GW(p)

The former is an ethical injunction, the latter is a utility approximation. They are not directly comparable.

comment by MixedNuts · 2012-12-23T21:33:15.434Z · LW(p) · GW(p)

We do loads of things that violate children's bodily autonomy.

Replies from: Decius
comment by Decius · 2012-12-24T03:21:18.817Z · LW(p) · GW(p)

And in doing so, we assert that children are not active moral agents. See also paternalism.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-24T09:57:46.636Z · LW(p) · GW(p)

Yeah but... that's false. Which doesn't make the rule bad; heuristics are allowed to apply only in certain domains, but a "core rule" shouldn't fail for over 15% of the population. "Sentient things that are able to argue about harm, justice and fairness are moral agents" isn't a weaker rule than "Violating bodily autonomy is bad".

Replies from: Decius
comment by Decius · 2012-12-24T18:47:38.924Z · LW(p) · GW(p)

Do you believe that the ability to understand the likely consequences of actions is a requirement for an entity to be an active moral agent?

comment by MugaSofer · 2012-12-23T21:15:00.372Z · LW(p) · GW(p)

Well, it's less well-defined if nothing else. It's also less general; QALYs enfold a lot of other values, so by maximizing them you get stuff like giving people happy, fulfilled lives and not shooting 'em in the head. It just doesn't enfold all our values, so you get occasional glitches, like killing people and selling their organs in certain contrived situations.

Replies from: Decius
comment by Decius · 2012-12-24T03:24:50.414Z · LW(p) · GW(p)

Values also differ among even perfectly rational individuals. There are some who would say that killing people for their organs is the only moral choice in certain contrived situations, and reasonable people can mutually disagree on the subject.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-24T05:03:56.704Z · LW(p) · GW(p)

And your point is...?

Replies from: Decius
comment by Decius · 2012-12-24T05:28:46.576Z · LW(p) · GW(p)

I'm trying to develop a system which follows logically from easily-defended principles, instead of one that is simply a restatement of personal values.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-24T21:28:14.354Z · LW(p) · GW(p)

Seems legit. Could you give me an example of "easily-defended principles", as opposed to "restatements of personal values"?

Replies from: Decius
comment by Decius · 2012-12-24T23:10:40.645Z · LW(p) · GW(p)

"No sentient individual or group of sentient beings is metaphysically privileged over any group or individual."

Replies from: MugaSofer
comment by MugaSofer · 2012-12-25T13:19:20.301Z · LW(p) · GW(p)

That seems true, but the "should" in there would seem to label it a "personal value". At least, if I've understood you correctly.

Replies from: Decius
comment by Decius · 2012-12-25T21:29:51.297Z · LW(p) · GW(p)

I'm completely sure that I didn't understand what you meant by that.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-25T22:22:58.075Z · LW(p) · GW(p)

Damn. Ok, try this: where did you get that statement from, if not an extrapolation of your personal values?

Replies from: Decius
comment by Decius · 2012-12-26T21:03:48.133Z · LW(p) · GW(p)

In addition to being a restatement of personal values, I think that it is an easily-defended principle. It can be attacked and defeated with a single valid reason why one person or group is intrinsically better or worse than any other, and evidence for a lack of such reason is evidence for that statement.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-27T02:22:47.325Z · LW(p) · GW(p)

It seems to me that an agent could coherently value people with purple eyes more than people with orange eyes. And its arguments would not move you, nor yours it.

And if you were magically convinced that the other was right, it would be near-impossible for you to defend their position; for all the agent might claim that we can never be certain if eyes are truly orange, or merely a yellowish red, and you might claim that purple eyed folk are rare, and should be preserved for diversity's sake.

Am I wrong, or is this not the argument you're making? I suspect at least one of us is confused.

Replies from: Decius
comment by Decius · 2012-12-27T08:55:45.693Z · LW(p) · GW(p)

I didn't claim that I had a universally compelling principle. I can say that someone who embodied the position that eye color granted special privilege would end up opposed to me.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-27T15:33:27.897Z · LW(p) · GW(p)

Oh, that makes sense. You're trying to extrapolate your own ethics. Yeah, that's how morality is usually discussed here, I was just confused by the terminology.

Replies from: Decius
comment by Decius · 2012-12-28T00:24:10.260Z · LW(p) · GW(p)

... with the goal of reaching a point that is likely to be agreed on by as many people as possible, and then discussing the implications of that point.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-28T17:31:14.082Z · LW(p) · GW(p)

Shouldn't your goal be to extrapolate your ethics, then help everyone who shares those ethics (ie humans) extrapolate theirs?

Replies from: Decius
comment by Decius · 2012-12-29T06:44:37.214Z · LW(p) · GW(p)

Why 'should' my goal be anything? What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?

Replies from: BerryPick6, MugaSofer
comment by BerryPick6 · 2012-12-29T10:10:23.013Z · LW(p) · GW(p)

Extrapolating other people's ethics may or may not help you satisfy your own extrapolated goals, so I think that may be the only metric by which you can judge whether or not you 'should' do it. No?

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-29T12:42:37.946Z · LW(p) · GW(p)

Then there might be superrational considerations, whereby if you helped people sufficiently like you to extrapolate their goals, they would (sensu Gary Drescher, Good and Real) help you to extrapolate yours.

comment by MugaSofer · 2012-12-29T15:21:24.843Z · LW(p) · GW(p)

What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?

Well, people are going to extrapolate their ethics regardless. You should try to help them avoid mistakes, such as "blowing up buildings is a good thing" or "lynching black people is OK".

(which need not include all members of the genus Homo)

Well sure. Psychopaths, if nothing else.

Replies from: Decius
comment by Decius · 2012-12-29T21:12:23.729Z · LW(p) · GW(p)

Why do I care if they make mistakes that are not local to me? I get much better security return on investment by locally preventing violence against me and my concerns, because I have to handle several orders of magnitude fewer people.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-29T23:48:25.809Z · LW(p) · GW(p)

Perhaps I haven't made myself clear. Their mistakes will, by definition, violate your (shared) ethics. For example, if they are mistakenly modelling black people as subhuman apes, and you both value human life, then their lynching blacks may never affect you - but it would be a nonpreferred outcome, under your utility function.

Replies from: Decius
comment by Decius · 2012-12-30T05:25:29.981Z · LW(p) · GW(p)

My utility function is separate from my ethics. There's no reason why everything I want happens to be something which is moral.

It is a coincidence that murder is both unethical and disadvantageous to me, not tautological.

Replies from: Peterdjones, MugaSofer
comment by Peterdjones · 2013-01-01T15:48:30.044Z · LW(p) · GW(p)

My utility function is separate from my ethics

You may have some non-ethical values, as many do, but if your ethics are no part of your values, you are never going to act on them.

Replies from: Decius
comment by Decius · 2013-01-01T20:57:40.483Z · LW(p) · GW(p)

I am considering taking the position that I follow my ethics irrationally; that I prefer decisions which are ethical even if the outcome is worse. I know that position will not be taken well here, but it seems more accurate than the position that I value my ethics as terminal values.

comment by MugaSofer · 2012-12-30T18:54:09.435Z · LW(p) · GW(p)

No, I'm not saying it would inconvenience you; I'm saying it would be a Bad Thing, which you, as a human (I assume), would get negative utility from. This is true for all agents whose utility function is over the universe, not e.g. their own experiences. Thus, say, a paperclipper should warn other paperclippers against inadvertently producing staples.

Replies from: Decius
comment by Decius · 2012-12-30T23:31:38.484Z · LW(p) · GW(p)

Projecting your values onto my utility function will not lead to good conclusions.

I don't believe that there is a universal, or even local, moral imperative to prevent death. I don't value a universe in which more QALYs have elapsed over a universe in which fewer QALYs have elapsed; I also believe that entropy in every isolated system will monotonically increase.

Ethics is a set of local rules which is mostly irrelevant to preference functions; I leave it to each individual to determine how much they value ethical decisions.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-31T15:55:36.595Z · LW(p) · GW(p)

Projecting your values onto my utility function will not lead to good conclusions.

That wasn't a conclusion; that was an example, albeit one I believe to be true. If there is anything you value, even if you are not experiencing it directly, then it is instrumentally good for you to help others with the same ethics to understand they value it too.

I don't believe that there is a universal, or even local, moral imperative to prevent death. I don't value a universe in which more QALYs have elapsed over a universe in which fewer QALYs have elapsed; I also believe that entropy in every isolated system will monotonically increase.

... oh. It's pretty much a given around here that human values extrapolate to value life, so if we build an FAI and switch it on then we'll all live forever, and in the meantime we should sign up for cryonics. So I assumed that, as a poster here, you already held this position unless you specifically stated otherwise.

I would be interested in discussing your views (known as "deathism" hereabouts) some other time, although this is probably not the time (or place, for that matter.) I assume you think everyone here would agree with you, if they extrapolated their preferences correctly - have you considered a top-level post on the topic? (Or even a sequence, if the inferential distance is too great.)

Ethics is a set of local rules which is mostly irrelevant to preference functions; I leave it to each individual to determine how much they value ethical decisions.

Once again, I'm only talking about what is ethically desirable here. Furthermore, I am only talking about agents which share your values; it is obviously not desirable to help a babyeater understand that it really, terminally cares about eating babies if I value said babies' lives. (Could you tell me something you do value? Suffering or happiness or something? Human life is really useful for examples of this; if you don't value it, just assume I'm talking about some agent that does, one of Asimov's robots or something.)

[EDIT: typos.]

Replies from: Decius
comment by Decius · 2013-01-01T05:06:26.130Z · LW(p) · GW(p)

I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.

I'll point out that "human" has a technical definition of "members of the genus Homo" and includes species which are not even Homo sapiens. If you wish to reference a different subset of entities, use a different term. I like 'sentients' or 'people' for a nonspecific group of people that qualify as active or passive moral agents (respectively).

Replies from: TheOtherDave, MugaSofer
comment by TheOtherDave · 2013-01-01T07:16:11.139Z · LW(p) · GW(p)

If you wish to reference a different subset of entities, use a different term.

Why?

Replies from: Decius
comment by Decius · 2013-01-01T20:30:09.079Z · LW(p) · GW(p)

Because the borogoves are mimsy.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-01T21:35:19.376Z · LW(p) · GW(p)

There's a big difference between a term that has no reliable meaning and a term that has two reliable meanings, one of which is a technical definition. I understand why I should avoid using the former (which seems to be the point of your boojum), but your original comment related to the latter.

Replies from: Decius
comment by Decius · 2013-01-01T23:08:59.666Z · LW(p) · GW(p)

What are the necessary and sufficient conditions to be a human in the non-taxonomical sense? The original confusion was where I was wrongfully assumed to be a human in that sense, and I never even thought to wonder if there was a meaning of 'human' that didn't include at least all typical adult homo sapiens.

comment by MugaSofer · 2013-01-01T15:06:19.630Z · LW(p) · GW(p)

I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.

Well, you can have more than one terminal value (or term in your utility function, whatever). Furthermore, it seems to me that "freedom" is desirable, to a certain degree, as an instrumental value of our ethics - after all, we are not perfect reasoners, and imposing our uncertain opinion on other reasoners of similar intelligence who reached different conclusions seems rather risky (for the same reason we wouldn't want to simply write our own values directly into an AI - not that we don't want the AI to share our values, but that we are not skilled enough to transcribe them perfectly).

I'll point out that "human" has a technical definition of "members of the genus Homo" and includes species which are not even Homo sapiens. If you wish to reference a different subset of entities, use a different term. I like 'sentients' or 'people' for a nonspecific group of people that qualify as active or passive moral agents (respectively).

"Human" has many definitions. In this case, I was referring to, shall we say, typical humans - no psychopaths or neanderthals included. I trust that was clear?

If not, "human values" has a pretty standard meaning round here anyway.

Replies from: Decius
comment by Decius · 2013-01-01T20:53:29.041Z · LW(p) · GW(p)

Freedom does have instrumental value; however, lack of coercion is an intrinsic thing in my ethics, in addition to the instrumental value.

I don't think that I will ever be able to codify my ethics accurately in Loglan or an equivalent, but there is a lot of room for improvement in my ability to explain it to other sentient beings.

I was unaware that the "immortalist" value system was assumed to be the LW default; I thought that "human value system" referred to a different default value system.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-02T15:46:35.340Z · LW(p) · GW(p)

I was unaware that the "immortalist" value system was assumed to be the LW default; I thought that "human value system" referred to a different default value system.

The "immortalist" value system is an approximaton of the "human value system", and is generally considered a good one round here.

Replies from: Decius
comment by Decius · 2013-01-02T20:23:46.524Z · LW(p) · GW(p)

It's nowhere near the default value system I encounter in meatspace. It's also not the one that's being followed by anyone with two fully functional lungs and kidneys. (Aside: that might be a good question to add to the next annual poll)

I don't think mass murder in the present day is ethically required, even if doing so would be a net benefit. Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.

Replies from: wedrifid, MugaSofer
comment by wedrifid · 2013-01-03T11:43:56.016Z · LW(p) · GW(p)

It's nowhere near the default value system I encounter in meatspace. It's also not the one that's being followed by anyone with two fully functional lungs and kidneys.

I don't believe you. Immortalists can have two fully functional lungs and kidneys. I think you are referring to something else.

Replies from: Decius
comment by Decius · 2013-01-05T04:10:50.834Z · LW(p) · GW(p)

Go ahead - consider a value function over the universe that values human life and doesn't privilege any one individual, and ask that function whether the marginal inconvenience and expense of donating a lung and a kidney are greater than the expected benefit.

comment by MugaSofer · 2013-01-03T11:17:26.610Z · LW(p) · GW(p)

It's nowhere near the default value system I encounter in meatspace.

Well, no. This isn't meatspace. There are different selection effects here.

[The second half of this comment is phrased far, far too strongly, even as a joke. Consider this an unofficial "retraction", although I still want to keep the first half in place.]

Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.

If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED. [/retraction]

Replies from: TheOtherDave, Decius, wedrifid
comment by TheOtherDave · 2013-01-03T14:35:08.686Z · LW(p) · GW(p)

If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.

Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways).
Yet another possibility is that the environment should be altered so that humanity's free choices no longer have the consequence of hastening extinction.
There are others.

Replies from: Decius, MugaSofer
comment by Decius · 2013-01-03T22:15:38.077Z · LW(p) · GW(p)

One major possibility would be that the extinction of humanity is not negative infinity utility.

comment by MugaSofer · 2013-01-03T16:32:13.059Z · LW(p) · GW(p)

Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways). Yet another possibility is that the environment should be altered so that humanity's free choices no longer have the consequence of hastening extinction.

Well, I'm not sure how one would go about restricting freedom without "altering the environment", and reeducation could also be construed as limiting freedom in some capacity (although that's down to definitions.) I never described what tactics should be used by such a hypothetical authority.

comment by Decius · 2013-01-03T22:25:01.068Z · LW(p) · GW(p)

Why is the extinction of humanity worse than involuntary restrictions on personal agency? How much reduction in risk or expected delay of extinction is needed to justify denying all choice to all people?

comment by wedrifid · 2013-01-03T11:47:48.490Z · LW(p) · GW(p)

If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.

QED does not apply there. You need a huge ceteris paribus included before that follows simply, and the ancestor comments have already brought up ways in which all else may not be equal.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-03T16:25:43.462Z · LW(p) · GW(p)

OK, QED is probably an exaggeration. Nevertheless, it seems trivially true that if "free choice" is causing something with as much negative utility as the extinction of humanity, then it should be restricted in some capacity.

comment by wedrifid · 2012-12-16T23:26:31.127Z · LW(p) · GW(p)

Taboo "contrived".

"The kind of obscure technical exceptions that wedrifid will immediately think of the moment someone goes and makes a fully general claim about something that is almost true but requires qualifiers or gentler language."

Replies from: army1987, Eugine_Nier
comment by A1987dM (army1987) · 2012-12-16T23:52:43.401Z · LW(p) · GW(p)

That doesn't help if wedrifid won't think of exceptions that are as obscure and noncentral for some questions as for others.

(IIRC, in his matching questions on OKCupid, when EY was asked whether someone is ever obliged to have sex, he picked No and commented something like ‘unless I agreed to have sex with you for money, and already took the money’, but when asked whether someone should ever use a nuclear weapon (or something like that), he picked Yes and commented with a way more improbable example than that.)

comment by Eugine_Nier · 2012-12-16T23:30:20.541Z · LW(p) · GW(p)

That's not helpful, especially in context.

Replies from: wedrifid
comment by wedrifid · 2012-12-16T23:58:06.730Z · LW(p) · GW(p)

That's not helpful, especially in context.

Apart from implying different subjective preferences to mine when it comes to conversation, this claim is actually objectively false as a description of reality.

The 'taboo!' demand in this context was itself borderline (inasmuch as it isn't actually the salient feature that needs elaboration or challenge, and the meaning should be plain to most non-disingenuous readers). But assuming there was any doubt at all about what 'contrived' meant in the first place, my response would, in fact, help make it clear through illustration what kind of thing 'contrived' was being used to represent (which was basically the literal meaning of the word).

Your response indicates that the "Taboo contrived!" move may have had some specific rhetorical intent that you don't want disrupted. If so, by all means state it. (I am likely to have more sympathy for whatever your actual rejection of Decius's comment is than for your complaint here.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-17T00:46:29.658Z · LW(p) · GW(p)

Decius considered the possibility that

torture and murder are objectively the most moral things to do in noncontrived circumstances.

In order to address this possibility, I need to know what Decius considers "contrived" and not just what the central example of a contrived circumstance is. In any case, part of my point was to force Decius to think more clearly about under what circumstances torture and killing are justified, rather than simply throwing all the examples he knows into the box labeled "contrived".

Replies from: TimS
comment by TimS · 2012-12-17T01:01:28.219Z · LW(p) · GW(p)

However Decius answers, he probably violates the local don't-discuss-politics norm. By contrast, your coyness makes it appear that you haven't done so.

In short, it appears to me that you already know Decius' position well enough to continue the discussion if you wanted to. Your invocation of the taboo-your-words convention appears not to be your true rejection.

comment by A1987dM (army1987) · 2012-12-18T11:04:15.919Z · LW(p) · GW(p)

I'd take “contrived circumstances” to mean ‘circumstances so rare that the supermajority of people alive have never found themselves in one of them’.

comment by MugaSofer · 2012-12-14T12:43:26.473Z · LW(p) · GW(p)

Presumably the creator did want the trees, he just didn't want humans using them. I always got the impression that the trees were used by God (and angels?), who at the point the story was written was less the abstract creator of modern times and more the (a?) jealous tribal god of the early Hebrews (for example, he was physically present in the GOE.) Isn't there a line about how humanity must never reach the TOL because they would become (like) gods?

EDIT:

My position is that suppressing knowledge of any kind is Evil.

Seriously? Knowledge of any kind?

Replies from: Decius
comment by Decius · 2012-12-14T23:48:54.062Z · LW(p) · GW(p)

Yes. Suppressing knowledge of any kind is evil. It's not the only thing which is evil, and acts are not necessarily good because they also disseminate knowledge.

Replies from: None, MugaSofer
comment by [deleted] · 2012-12-15T00:05:27.440Z · LW(p) · GW(p)

It's not the only thing which is evil

This has interesting implications.

Other more evil things (like lots of people dying) can sometimes be prevented by doing a less evil thing (like suppressing knowledge). For example, the code for an AI that would foom, but does not have friendliness guarantees, is a prime candidate for suppression.

So saying that something is evil is not the last word on whether or not it should be done, or how its doers should be judged.

Replies from: Decius
comment by Decius · 2012-12-15T00:41:27.207Z · LW(p) · GW(p)

Code, instructions, and many things that can be expressed as information are only incidentally knowledge. There's nothing evil about writing a program and then deleting it; there is something evil about passing a law which prohibits programming from being taught, because programmers might create an unfriendly AI.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-16T04:21:47.950Z · LW(p) · GW(p)

Code, instructions, and many things that can be expressed as information are only incidentally knowledge.

Well, the knowledge from the tree appears to also have been knowledge of this kind.

Replies from: Decius
comment by Decius · 2012-12-16T20:53:50.378Z · LW(p) · GW(p)

I draw comparisons between the serpent offering the apple, the Titan Prometheus, and Odin sacrificing his eye. Do you think that the comparison of those knowledge myths is unfair?

comment by MugaSofer · 2012-12-15T15:51:23.517Z · LW(p) · GW(p)

Fair enough. Humans do appear to value truth.

Of course, if acts that conceal knowledge can be good because of other factors, then this:

I dunno, dude could have good reasons to want knowledge of good and evil staying hush-hush. (Forbidding knowledge in general would indeed be super evil.) For example: You have intuitions telling you to eat when you're hungry and give food to others when they're hungry. And then you learn that the first intuition benefits you but the second makes you a good person. At this point it gets tempting to say "Screw being a good person, I'm going to stuff my face while others starve", whereas before you automatically shared fairly. You could have chosen to do that before (don't get on my case about free will), but it would have felt as weird as deciding to starve just so others could have seconds. Whereas now you're tempted all the time, which is a major bummer on the not-sinning front. I'm making this up, but it's a reasonable possibility.

... is still valid.

comment by Irgy · 2012-12-11T04:14:49.891Z · LW(p) · GW(p)

This is a classic case of fighting the wrong battle against theism. The classic theist defence is to define away every meaningful aspect of God, piece by piece, until the question of God's existence is about as meaningful as asking "do you believe in the axiom of choice?". Then, after you've failed to disprove their now untestable (and therefore meaningless) theory, they consider themselves victorious and get back to reading the Bible. It's this part that's the weak link. The idea that the Bible tells us something about God (and therefore by extension morality and truth) is a testable and debatable hypothesis, whereas God's existence can be defined away into something that is not.

People can say "morality is God's will" all they like and I'll just tell them "butterflies are Schmetterlinge". It's when they say "morality is in the Bible" that you can start asking some pertinent questions. To mix my metaphors, I'll start believing when someone actually physically breaks a ball into pieces and reconstructs them into two balls of the same original size, but until I really see something like that actually happen it's all just navel-gazing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T19:10:13.482Z · LW(p) · GW(p)

Sure, and to the extent that somebody answers that way, or for that matter runs away from the question, instead of doing that thing where they actually teach you in Jewish elementary school that Abraham being willing to slaughter Isaac for God was like the greatest thing ever and made him deserve to be patriarch of the Jewish people, I will be all like, "Oh, so under whatever name, and for whatever reason, you don't want to slaughter children - I'll drink to that and be friends with you, even if the two of us think we have different metaethics justifying it". I wasn't claiming that accepting the first horn of the dilemma was endorsed by all theists or a necessary implication of theism - but of course, the rejection of that horn is very standard atheism.

Replies from: MixedNuts, MugaSofer
comment by MixedNuts · 2012-12-10T19:35:14.337Z · LW(p) · GW(p)

I don't think it's incompatible. You're supposed to really trust the guy because he's literally made of morality, so if he tells you something that sounds immoral (and you're not, like, psychotic) of course you assume that it's moral and the error is on your side. Most of the time you don't get direct exceptional divine commands, so you don't want to kill any kids. Wouldn't you kill the kid if an AI you knew to be Friendly, smart, and well-informed told you "I can't tell you why right now, but it's really important that you kill that kid"?

If your objection is that Mr. Orders-multiple-genocides hasn't shown that kind of evidence he's morally good, well, I got nuthin'.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-10T21:29:23.224Z · LW(p) · GW(p)

You're supposed to really trust the guy because he's literally made of morality, so if he tells you something that sounds immoral (and you're not, like, psychotic) of course you assume that it's moral and the error is on your side.

What we have is an inconsistent set of four assertions:

  1. Killing my son is immoral.
  2. The Voice In My Head wants me to kill my son.
  3. The Voice In My Head is God.
  4. God would never want someone to perform an immoral act.

At least one of these has to be rejected. Abraham (provisionally) rejects 1; once God announces 'J/K,' he updates in favor of rejecting 2, on the grounds that God didn't really want him to kill his son, though the Voice really was God.

The problem with this is that rejecting 1 assumes that my confidence in my foundational moral principles (e.g., 'thou shalt not murder, self!') is weaker than my confidence in the conjunction of:

  • 3 (how do I know this Voice is God? the conjunction of 1,2,4 is powerful evidence against 3),
  • 2 (maybe I misheard, misinterpreted, or am misremembering the Voice?),
  • and 4.

But it's hard to believe that I'm more confident in the divinity of a certain class of Voices than in my moral axioms, especially if my confidence in my axioms is what allowed me to conclude 4 (God/morality identity of some sort) in the first place. The problem is that I'm the one who has to decide what to do. I can't completely outsource my moral judgments to the Voice, because my native moral judgments are an indispensable part of my evidence for the properties of the Voice (specifically, its moral reliability). After all, the claim is 'God is perfectly moral, therefore I should obey him,' not 'God should be obeyed, therefore he is perfectly moral.'

Replies from: MixedNuts, Alejandro1
comment by MixedNuts · 2012-12-10T21:52:45.363Z · LW(p) · GW(p)

Well, deities should make themselves clear enough that (2) is very likely (maybe the voice is pulling your leg, but it wants you to at least get started on the son-killing). (3) is also near-certain because you've had chats with this voice for decades, about moving and having kids and changing your name and whether the voice should destroy a city.

So this correctly tests whether you believe (4) more than (1) - whether your trust in G-d is greater than your confidence in your object-level judgement.

You're right that it's not clear why Abraham believes or should believe (4). His culture told him so and the guy has mostly done nice things for him and his wife, and promised nice things then delivered, but this hardly justifies blind faith. (Then again I've trusted people on flimsier grounds, if with lower stakes.) G-d seems very big on trust so it makes sense that he'd select the president of his fan club according to that criterion, and reinforce the trust with "look, you trusted me even though you expected it to suck, and it didn't suck".

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-10T22:35:49.303Z · LW(p) · GW(p)

Well, if we're shifting from our idealized post-Protestant-Reformation Abraham to the original Abraham-of-Genesis folk hero, then we should probably bracket all this Medieval talk about God's omnibenevolence and omnipotence. The Yahweh of Genesis is described as being unable to do certain things, as lacking certain items of knowledge, and as making mistakes. Shall not the judge of all the Earth do right?

As Genesis presents the story, the relevant question doesn't seem to be 'Does my moral obligation to obey God outweigh my moral obligation to protect my son?' Nor is it 'Does my confidence in my moral intuitions outweigh my confidence in God's moral intuitions plus my understanding of God's commands?' Rather, the question is: 'Do I care more about obeying God than about my most beloved possession?' Notice there's nothing moral at stake here at all; it's purely a question of weighing loyalties and desires, of weighing the amount I trust God's promises and respect God's authority against the amount of utility (love, happiness) I assign to my son.

The moral rights of the son, and the duties of the father, are not on the table; what's at issue is whether Abraham's such a good soldier-servant that he's willing to give up his most cherished possessions (which just happen to be sentient persons). Replace 'God' with 'Satan' and you get the same fealty calculation on Abraham's part, since God's authority, power, and honesty, not his beneficence, are what Abraham has faith in.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T18:59:41.730Z · LW(p) · GW(p)

If we're going to talk about what actually happened, as opposed to a particular interpretation, the answer is "probably nothing". Because it's probably a metaphor for the Hebrews abandoning human sacrifice.

Just wanted to put that out there. It's been bugging me.

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-13T19:09:08.171Z · LW(p) · GW(p)

Because it's probably a metaphor for the Hebrews abandoning human sacrifice.

[citation needed]

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T19:50:36.771Z · LW(p) · GW(p)

More like [original research?]. I was under the impression that's the closest thing to a "standard" interpretation, but it could as easily have been my local priest's pet theory.

You've gotta admit it makes sense, though.

Replies from: RobbBB, BerryPick6
comment by Rob Bensinger (RobbBB) · 2012-12-13T21:04:21.794Z · LW(p) · GW(p)

To my knowledge, this is a common theory, although I don't know whether it's standard. There are a number of references in the Tanakh to human sacrifice, and even if the early Jews didn't practice (and had no cultural memory of having once practiced) human sacrifice, its presence as a known phenomenon in the Levant could have motivated the story. I can imagine several reasons:

  • (a) The writer was worried about human sacrifice, and wanted a narrative basis for forbidding it.

  • (b) The writer wasn't worried about actual human sacrifice, but wanted to clearly distinguish his community from Those People who do child sacrifice.

  • (c) The writer didn't just want to show a difference between Jews and human-sacrifice groups, but wanted to show that Jews were at least as badass. Being willing to sacrifice humans is an especially striking and impressive sign of devotion to a deity, so a binding-of-Isaac-style story serves to indicate that the Founding Figure (and, by implicit metonymy, the group as a whole, or its exemplars) is willing to give proof of that level of devotion, but is explicitly not required to do so by the god. This is an obvious win-win -- we don't have to actually kill anybody, but we get all the street-cred for being hardcore enough to do so if our God willed it.

All of these reasons may be wrong, though, if only because they treat the Bible's narratives as discrete products of a unified agent with coherent motives and reasons. The real history of the Bible is sloppy, messy, and zigzagging. Richard Friedman suggests that in the original (Elohist-source) story, Abraham actually did carry out the sacrifice of Isaac. If later traditions then found the idea of sacrificing a human (or sacrificing Isaac specifically) repugnant, the transition-from-human-sacrifice might have been accomplished by editing the old story, rather than by inventing it out of whole cloth as a deliberate rationalization for the historical shift away from the kosherness of human sacrifice. This would help account for the strangeness of the story itself, and for early midrashic traditions that thought that Isaac had been sacrificed. This also explains why the Elohist source never mentions Isaac again after the story, and why the narrative shifts from E-vocabulary to J-vocabulary at the crucial moment when Isaac is spared. Maybe.

P.S.: No, I wasn't speculating about 'what actually happened.' I was just shifting from our present-day, theologized pictures of Abraham to the more ancient figure actually depicted in the text, fictive though he be.

comment by BerryPick6 · 2012-12-13T19:54:17.755Z · LW(p) · GW(p)

I was under the impression that's the closest thing to a "standard" interpretation

I've never heard it before.

You've gotta admit it makes sense, though.

After nearly a decade of studying the Old Testament, I finally decided very little of it makes sense a few years ago.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T20:07:43.982Z · LW(p) · GW(p)

I've never heard it before.

Huh.

After nearly a decade of studying the Old Testament, I finally decided very little of it makes sense a few years ago.

Well, it depends what you mean by "sense", I guess.

comment by Alejandro1 · 2012-12-10T21:48:03.469Z · LW(p) · GW(p)

The problem has the same structure for MixedNuts' analogy of the FAI replacing the Voice. Suppose you program the AI to compute explicitly the logical structure "morality" that EY is talking about, and it tells you to kill a child. You could think you made a mistake in the program (analogous to rejecting your 3), or that you are misunderstanding the AI or hallucinating it (rejecting 2). And in fact for most conjunctions of reasonable empirical assumptions, it would be more rational to take any of these options than to go ahead and kill the child.

Likewise, sensible religionists agree that if someone hears voices in their head telling them to kill children, they shouldn't do it. Some of them might say, however, that Abraham's position was unique, that he had especially good reasons (unspecified) to accept 2 and 3, and that for him killing the child is the right decision. In the same way, maybe an AI programmer with very strong evidence for the analogues of 2 and 3 should go ahead and kill the child. (What if the AI has computed that the child will grow up to be Hitler?)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-10T22:13:44.234Z · LW(p) · GW(p)

A few religious thinkers (Kierkegaard) don't think Abraham's position was completely unique, and do think we should obey certain Voices without adequate evidence for 4, perhaps even without adequate evidence for 3. But these are outlier theories, and certainly don't reflect the intuitions of most religious believers, who pay more lip service to belief-in-belief than actual service-service to belief-in-belief.

I think an analogous AI set-up would be:

  1. Killing my son is immoral.
  2. The monitor reads 'Kill your son.'
  3. The monitor's display perfectly reflects the decisions of the AI I programmed.
  4. I successfully programmed the AI to be perfectly moral.

What you call rejecting 3 is closer to rejecting 4, since it concerns my confidence that the AI is moral, not my confidence that the AI I programmed is the same as the entity outputting 'Kill your son.'

Replies from: Alejandro1
comment by Alejandro1 · 2012-12-10T22:31:53.816Z · LW(p) · GW(p)

I disagree, because I think the analogy between the (4) of each case should go this way:

(4a) Analysis of "morality" as equivalent to a logical structure extrapolatable from my brain state (plus other things) and that an AI can in principle compute <==> (4b) Analysis of "morality" as equivalent to a logical structure embodied in a unique perfect entity called "God"

These are both metaethical theories, a matter of philosophy. Then the analogy between (3) in each case goes:

(3a) This AI in front of me is accurately programmed to compute morality and display what I ought to do <==> (3b) This voice I hear is the voice of God telling me what I ought to do.

(3a) includes both your 3 and your 4, which can be put together as they are both empirical beliefs that, jointly, are related to the philosophical theory (4a) as the empirical belief (3b) is related to the philosophical theory (4b).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-10T22:54:27.323Z · LW(p) · GW(p)

Makes sense. I was being deliberately vague about (4) because I wasn't committing myself to a particular view of why Abraham is confident in God's morality. If we're going with the scholastic, analytical, logical-pinpointing approach, then your framework is more useful. Though in that case even talking about 'God' or a particular AI may be misleading; what 4 then is really asserting is just that morality is a coherent concept, and can generate decision procedures. Your 3 is then the empirical claim that a particular being in the world embodies this concept of a perfect moral agent. My original thought simply took your 4 for granted (if there is no such concept, then what are we even talking about?), and broke the empirical claim up into multiple parts. This is important for the Abraham case, because my version of 3 is the premise most atheists reject, whereas there is no particular reason for the atheists to reject my version of 4 (or yours).

Replies from: Alejandro1
comment by Alejandro1 · 2012-12-10T23:28:16.077Z · LW(p) · GW(p)

We are mostly in agreement about the general picture, but just to keep the conversation going...

I don't think (4) is so trivial or that (4a) and (4b) can be equated. For the first, there are other metaethical theories that I think wouldn't agree with the common content of (4a) and (4b). These include relativism, error theory, Moorean non-naturalism, and perhaps some naive naturalisms ("the good just is pleasure/happiness/etc, end of story").

For the second, I was thinking of (4a) as embedded in the global naturalistic, reductionistic philosophical picture that EY is elaborating and that is broadly accepted in LW, and of (4b) as embedded in the global Scholastic worldview (the most steelmanned version I know of religion). Obviously there are many differences between the two philosophies, both in the conceptual structures used and in very general factual beliefs (which as a Quinean I see as intertwined and inseparable at the most global level). In particular, I intended (4b) to include the claim that this perfect entity embodying morality actually exists as a concrete being (and, implicitly, that it has the other omni-properties attributed to God). Clearly atheists wouldn't agree with any of this.

comment by MugaSofer · 2012-12-11T09:49:40.415Z · LW(p) · GW(p)

I can't speak for Jewish elementary school, but surely believing PA (even when, intuitively, the result seems flatly wrong or nonsensical) would be a good example to hold up before students of mathematics? The Monty Hall problem seems like a good example of this.

comment by Vaniver · 2012-12-10T06:44:26.118Z · LW(p) · GW(p)

I read this post with a growing sense of unease. The pie example appears to treat "fair" as a 1-place word, but I don't see any reason to suppose it would be. (I note my disquiet that we are both linking to that article; and my worry about how confused this post seems to me.)

The standard atheist reply is tremendously unsatisfying; it appeals to intuition and assumes what it's trying to prove!

My resolution of Euthyphro is "the moral is the practical." A predictable consequence of evolution is that people have moral intuitions, that those intuitions reflect their ancestral environment, and that those intuitions can be variable. Where would I find mercy, justice, or duty? Cognitive algorithms and concepts inside minds.

This article reads like you're trying to move your stone tablet from your head into the world of logic, where it can be as universal as the concept of primes. It's not clear to me why you're embarking on that particular project.

The example of elegance seems like it points the other way. If your sense of elegance is admittedly subjective, why are we supposing a Platonic form of elegance out in the world of logic? Isn't this basically the error where one takes a cognitive algorithm that recognizes whether or not something is a horse and turns it into a Platonic form of horseness floating in the world of logic?

It looks to me like you're trying to say "because classification algorithms can be implemented in reality, there can be real ensembles that embody logical facts, but changing the classification algorithms doesn't change those logical facts," which seems true but I don't see what work you expect it to do.

There's also the statement "when you change the algorithms that lead to outputs, you change the internal sensation of those outputs." That has not been my experience, and I don't see a reason why that would be the case. In particular, when dreaming it seems like many algorithms have their outputs fixed at certain values: my 'is this exciting?' algorithm may return 'exciting!' during the dream but 'boring!' when considering the dream whilst awake, but the sensation that results from the output of the algorithm seems indistinguishable; that is, being excited in a dream feels the same to me as being excited while awake. (Of course, it could be that whichever part of me is able to differentiate between sensations is also malfunctioning while dreaming!)

I could write out an exact description of your visual cortex's spiking code for 'blue' on paper, and it wouldn't actually look blue to you.

If you show me the pattern of neurons firing that happens when my bladder is full, then my bladder won't feel full. If you put an electrode in my head (or use induction, or whatever) and replicate that pattern of neurons firing, then my bladder will feel full, because the feeling of fullness is the output of those neurons firing in that pattern.

In the same sense, when you try to do what's right, you're motivated by things like (to yet again quote Frankena's list of terminal values):

You sure it's not just executing an adaptation? Why?

Replies from: RobbBB, Peterdjones, nshepperd
comment by Rob Bensinger (RobbBB) · 2012-12-10T23:42:40.695Z · LW(p) · GW(p)

The pie example appears to treat "fair" as a 1-place word

'Beautiful' needs 2 places because our concept of beauty admits of perceptual variation. 'Fairness' does not grammatically need an 'according to whom?' argument place, because our concept of fairness is not observer-relative. You could introduce a function that takes in a person X who associates a definition with 'fairness,' takes in a situation Y, and asks whether X would call Y 'fair.' But this would be a function for 'What does the spoken syllable FAIR denote in a linguistic community?', not a function for 'What is fair?' If we applied this demand generally, 'beautiful' would become 3-place ('what objects X would some agent Y say some agent Z finds 'beautiful'?'), as would logical terms like 'plus' ('how would some agent X perform the operation X calls "addition" on values Y and Z?'), and indeed all linguistic acts.
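A minimal sketch may make the arity contrast concrete; everything in it (the function names, the toy dictionary fields, the equal-split test) is my own hypothetical illustration, not anything specified in the post or this thread:

```python
# Purely illustrative sketch of predicate arity; all names and fields are hypothetical.

def is_fair(situation):
    """1-place: 'fair' as a fixed logical function of the situation alone."""
    n = len(situation["claimants"])
    return all(abs(share - 1.0 / n) < 1e-9 for share in situation["shares"])

def finds_beautiful(observer, thing):
    """2-place: 'beautiful' takes an 'according to whom?' argument."""
    return thing in observer["aesthetic_preferences"]

def word_fair_denotes(speaker, situation):
    """A different 2-place function: what the spoken syllable FAIR denotes for a
    given speaker. This is a fact about linguistic usage, not about fairness itself."""
    return speaker["definition_of_fair"](situation)

# Example usage with toy data for the pie scenario:
pie = {"claimants": ["Zaire", "Yancy", "Xannon"], "shares": [1/3, 1/3, 1/3]}
print(is_fair(pie))  # True under the equal-split definition
```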

intuitions reflect their ancestral environment, and [...] those intuitions can be variable.

Yes, but a given intuition cannot vary limitlessly, because there are limits to what we would consider to fall under the same idea of 'fairness.' Different people may use the spoken syllables FAIR, PLUS, or BEAUTIFUL differently, but past a certain point we rightly intuit that the intension of the words, and not just their extension, has radically changed. Thus even if 'fairness' is disjunctive across several equally good concepts of fairness, there are semantic rules for what gets to be in the club. Plausibly, 'fairness is whatever makes RobbBB happiest' is not a semantic candidate for what English-speakers are logically pinpointing as 'fairness.'

This article reads like you're trying to move your stone tablet from your head into the world of logic, where it can be as universal as the concept of primes.

You hear 'Oh no, he's making morality just as objective as number theory!' whereas I hear 'Oh good, he's making morality just as subjective as number theory.' If we can logically pinpoint 'fairness,' then fairness can be rigorously and objectively discussed even if some species find the concept loathsome; just as if we can logically pinpoint 'prime number,' we can rigorously and objectively discuss the primes even with a species S who finds it unnatural to group 2 with the other primes, and a second species S* who finds it unnatural to exclude 1 from their version of the primes. Our choice of whether to consider 2 prime, like our choice of which semantic value to assign to 'fair,' is both arbitrary and unimpeachably objective.

Or do you think that number theory is literally writ into the fabric of reality somewhere, that Plato's Heaven is actually out there and that we therefore have to be very careful about which logical constructs we allow into the club? This reluctance to let fairness into an elite Abstraction Club, even if some moral codes are just as definable in logical terms as is number theory, reminds me of Plato's neurotic reluctance (in the Parmenides) to allow for the possibility that there might be Forms "of hair, mud, dirt, or anything else which is vile and paltry." Constructible is constructible; there is not a privileged set of Real Constructs distinct from the Mere Fictions, and the truths about Sherlock Holmes, if defined carefully enough, get the same epistemic and metaphysical status as the truths about Graham's Number.

If your sense of elegance is admittedly subjective, why are we supposing a Platonic form of elegance out in the world of logic?

You're confusing epistemic subjectivity with ontological subjectivity. Terms that are defined via or refer to mind- or brain-states may nevertheless be defined with so much rigor that they admit no indeterminacy, i.e., an algorithm could take in the rules for certain sentences about subjectivity and output exactly which cases render those sentences true, and which render them false.

Isn't this basically the error where one takes a cognitive algorithm that recognizes whether or not something is a horse and turns it into a Platonic form of horseness floating in the world of logic?

What makes you think that the 'world of logic' is Platonic in the first place? If logic is a matter of mental construction, not a matter of us looking into our metaphysical crystal balls and glimpsing an otherworldly domain of Magical Nonspatiotemporal Thingies, then we cease to be tempted by Forms of Horsehood for the same reason we cease to be tempted by Forms of Integerhood.

Replies from: Vaniver, army1987
comment by Vaniver · 2012-12-11T00:51:53.259Z · LW(p) · GW(p)

'Beautiful' needs 2 places because our concept of beauty admits of perceptual variation. 'Fairness' does not grammatically need an 'according to whom?' argument place, because our concept of fairness is not observer-relative.

What? It seems to me that fairness and beauty are equally subjective, and the intuition that says "but my sense of fairness is objectively correct!" is the same intuition that says "but my sense of beauty is objectively correct!"

If we can logically pinpoint 'fairness,' then fairness can be rigorously and objectively discussed even if some species find the concept loathsome

I agree that we can logically pinpoint any specific concept; to use the pie example, Yancy uses the concept of "splitting windfalls equally by weight" and Zaire uses the concept of "splitting windfalls equally by desire." What I disagree with is the proposition that there is this well-defined and objective concept of "fair" that, in the given situation, points to "splitting windfalls equally by weight."

One could put forward the axiom that "splitting windfalls equally by weight is fair", just like one can put forward the axiom that "zero is not the successor of any number," but we are no closer to that axiom having any decision-making weight; it is just a model, and for it to be used it needs to be a useful and appropriate model.

Replies from: nshepperd, RobbBB
comment by nshepperd · 2012-12-11T01:47:51.753Z · LW(p) · GW(p)

What I disagree with is the proposition that there is this well-defined and objective concept of "fair" that, in the given situation, points to "splitting windfalls equally by weight."

"Fair", quoted, is a word. You don't think it's plausible that in English "fair" could refer to splitting windfalls equally by weight? (Or rather to something a bit more complicated that comes out to splitting windfalls equally by weight in the situation of the three travellers and the pie.)

Replies from: Vaniver
comment by Vaniver · 2012-12-11T05:12:41.270Z · LW(p) · GW(p)

I agree that someone could mean "splitting windfalls equally by weight" when they say "fair." I further submit that words can be ambiguous, and someone else could mean "splitting windfalls equally by desire" when they say "fair." In such a case, where the word seems to be adding more heat than light, I would scrap it and go with the more precise phrases.

comment by Rob Bensinger (RobbBB) · 2012-12-11T01:49:41.841Z · LW(p) · GW(p)

What? It seems to me that fairness and beauty are equally subjective

I don't know what you mean by 'subjective.' But perhaps there is a (completely non-denoting) concept of Objective Beauty in addition to the Subjective Beauty ('in the eye of the beholder') I'm discussing, and we're talking past each other about the two. So let's pick a simpler example.

'Delicious' is clearly two-place, and ordinary English-language speakers routinely consider it two-place; we sometimes elide the 'delicious for whom?' by assuming 'for ordinary humans,' but it would be controversial to claim that speaking of deliciousness automatically commits you to a context-independent property of Intrinsic Objective Tastiness.

Now, it sounded like you were claiming that fairness is subjective in much the same way as deliciousness; no claim about fairness is saturated unless it includes an argument place for the evaluator. But this seems to be false simply given how people conceive of 'fair' and 'delicious'. People don't think there's an implicit 'fairness-relative-to-a-judge-thereof' when they speak of 'fairness,' or at least they don't think it in the transparent way they think of 'deliciousness' as always being 'deliciousness-relative-to-a-taster.' ('Beauty,' perhaps, is an ambiguous case straddling these two categories.) So is there some different sense in which fairness is 'subjective'? What is this other sense?

What I disagree with is the proposition that there is this well-defined and objective concept of "fair" that, in the given situation, points to "splitting windfalls equally by weight."

Are you claiming that Eliezer lacks any well-defined concept he's calling 'fairness'? Or are you claiming that most English-speakers don't have Eliezer's well-defined fairness in mind when they themselves use the word 'fair,' thus making Eliezer guilty of equivocation?

People argue about how best to define a term all the time, but we don't generally conclude from this that any reasoning one proceeds to carry out once one has stipulated a definition for the controversial term is for that reason alone 'subjective.' There have been a number of controversies in the history of mathematics — places where people's intuitions simply could not be reconciled by any substantive argument or proof — and mathematicians responded by stipulating precisely what they meant by their terms, then continuing on from there. Are you suggesting that this same method stops being useful or respectable if we switch domains from reasoning about this thing we call 'quantity' to reasoning about this thing we call 'fairness'?

we are no closer to that axiom having any decision-making weight

What would it mean for an axiom to have "decision-making weight"? And do you think Eliezer, or any other intellectually serious moral realist, is honestly trying to attain this "decision-making weight" property?

Replies from: Vaniver
comment by Vaniver · 2012-12-11T04:41:04.996Z · LW(p) · GW(p)

I don't know what you mean by 'subjective.'

That the judgments of "fair" or "beautiful" don't come from a universal source, but from a particular entity. I have copious evidence that what I consider "beautiful" is different from what some other people consider "beautiful;" I have copious evidence that what I consider "fair" is different from what some other people consider "fair."

'Delicious' is clearly two-place, and ordinary English-language speakers routinely consider it two-place;

It is clear to me that delicious is two-place, but it seems to me that people have to learn that it is two-place, and evidence that it is two-place is often surprising and potentially disgusting. Someone who has not learned through proverbs and experience that "beauty is in the eye of the beholder" and "there's no accounting for taste" would expect that everyone thinks the same things are beautiful and tasty.

But this seems to be false simply given how people conceive of 'fair' and 'delicious'.

There are several asymmetries between them. Deliciousness generally affects one person, and knowing that it varies allows specialization and gains from trade (my apple for your banana!). Fairness generally requires at least two people to be involved, and acknowledging that your concept of fairness does not bind the other person puts you at a disadvantage. Compare Xannon's compromise to Yancy's hardlining.

People thinking that something is objective is not evidence that it is actually objective. Indeed, we have plenty of counterevidence in all the times that people argue over what is fair.

Are you claiming that Eliezer lacks any well-defined concept he's calling 'fairness'?

No? I'm arguing that Eliezer::Fair may be well-defined, but that he has put forward no reason that will convince Zaire that Zaire::Fair should become Eliezer::Fair, just like he has put forward no reason why Zaire::Favorite Color should become Eliezer::Favorite Color.

Are you suggesting that this same method stops being useful or respectable if we switch domains from reasoning about this thing we call 'quantity' to reasoning about this thing we call 'fairness'?

There are lots of possible geometries out there, and mathematicians can productively discuss any set of non-contradictory axioms. But only a narrow subset of those geometries correspond well with the universe that we actually live in; physicists put serious effort into understanding those, and the rest are curiosities.

(I think that also answers your last two questions, but if it doesn't I'll try to elaborate.)

Replies from: Peterdjones, RobbBB
comment by Peterdjones · 2012-12-11T11:20:10.125Z · LW(p) · GW(p)

I have copious evidence that what I consider "beautiful" is different from what some other people consider "beautiful;" I have copious evidence that what I consider "fair" is different from what some other people consider "fair."

But there is little upshot to people having different notions of beauty, since people can arrange their own environments to suit their own aesthetics. However, resources have to be apportioned one way or another. So we need, and have, discussions about how to do things fairly. (Public architecture is a bit of an exception to what I said about beauty, but lo and behold, we have debates about that too.)

comment by Rob Bensinger (RobbBB) · 2012-12-11T08:41:56.852Z · LW(p) · GW(p)

the judgments of "fair" or "beautiful" don't come from a universal source, but from a particular entity.

I don't understand what this means. To my knowledge, the only things that exist are particulars.

I have copious evidence that what I consider "beautiful" is different from what some other people consider "beautiful;" I have copious evidence that what I consider "fair" is different from what some other people consider "fair."

I have copious evidence that others disagree with me about ¬¬P being equivalent to P. And I have copious evidence that others disagree with me about the Earth's being more than 6,000 years old. Does this imply that my belief in Double Negation Elimination and in the Earth's antiquity is 'subjective'? If not, then what extra premises are you suppressing?

It is clear to me that delicious is two-place, but it seems to me that people have to learn that it is two-place

Well, sure. But, barring innate knowledge, people have to learn everything at some point. 3-year-olds lack a theory of mind; and those with a new theory of mind may not yet understand that 'beautiful' and 'delicious' are observer-relative. But that on its own gives us no way to conclude that 'fairness' is observer-relative. After all, not everything that we start off thinking is 'objective' later turns out to be 'subjective.'

And even if 'fairness' were observer-relative, there have to be constraints on what can qualify as 'fairness.' Fairness is not equivalent to 'whatever anyone decides to use the word "fairness" to mean,' as Eliezer rightly pointed out. Even relativists don't tend to think that 'purple toaster' and 'equitable distribution of resources' are equally legitimate and plausible semantic candidates for the word 'fairness.'

Deliciousness generally affects one person

That's not true. Deliciousness, like fairness, affects everyone. For instance, my roommate is affected by which foods I find delicious; it changes where she ends up going to eat.

Perhaps you meant something else. You'll have to be much more precise. The entire game when it comes to as tricky a dichotomy as 'objective/subjective' is just: Be precise. The dichotomy will reveal its secrets and deceptions only if we taboo our way into its heart.

and knowing that it varies allows specialization and gains from trade (my apple for your banana!).

What's fair varies from person to person too, because different people, for instance, put different amounts of work into their activities. And knowing about what's fair can certainly help in trade!

acknowledging that your concept of fairness does not bind the other person puts you at a disadvantage

Does not "bind" the other person? Fairness is not a physical object; it cannot bind people's limbs. If you mean something else by 'bind,' please be more explicit.

Eliezer::Fair may be well-defined, but that he has put forward no reason that will convince Zaire that Zaire::Fair should become Eliezer::Fair

What would it mean for Zaire::Fair to become Eliezer::Fair? Are you saying that Eliezer's fairness is 'subjective' because he can't give a deductive argument from the empty set of assumptions proving that Zaire should redefine his word 'fair' to mean what Eliezer means by 'fair'? Or are you saying that Eliezer's fairness is 'subjective' because he can't give a deductive argument from the empty set of assumptions proving that Zaire should pretend that Zaire's semantic value for the word 'fair' is the same as Eliezer's semantic value for the word 'fair'? Or what? By any of these standards, there are no objective truths; all truths rely on fixing a semantic value for your linguistic atoms, and no argument can be given for any particular fixation.

There are lots of possible geometries out there, and mathematicians can productively discuss any set of non-contradictory axioms.

They can also productively discuss sets of contradictory axioms, especially if their logic be paraconsistent.

But only a narrow subset of those geometries correspond well with the universe that we actually live in; physicists put serious effort into understanding those, and the rest are curiosities.

So, since we don't live in Euclidean space, Euclidean geometry is merely a 'curiosity.' Is it, then, subjective? If not, what ingredient, what elemental objectivium, distinguishes Euclidean geometry from Yudkowskian fairness?

Replies from: Vaniver
comment by Vaniver · 2012-12-11T16:18:42.525Z · LW(p) · GW(p)

Does this imply that my belief in Double Negation Elimination and in the Earth's antiquity is 'subjective'? If not, then what extra premises are you suppressing?

Your choice of logical system and your belief in an old Earth reside in your mind, and that you believe them only provides me rather weak evidence that they are beliefs I should hold. (I do hold both of those beliefs, but because of other evidence.) It is not clear to me that attaching the label of "subjective" or "objective" would materially improve my description.

That's not true.

When I write the word "generally," I mean it as a qualifier that acknowledges many objections could be raised that do not materially alter the point. Generally, at restaurants, you and your roommate are not required to eat the same meal, and the effects of, say, the unpleasant-to-her smell of your meal are smaller than the effects of the pleasant-to-her taste of her meal. Of course there are meals you could eat and restaurants you could choose where that is not the case, but the asymmetry between the impact of your tastes on your roommate and the impact of your sense of fairness on your roommate remains in the general case.

Does not "bind" the other person? Fairness is not a physical object; it cannot bind people's limbs. If you mean something else by 'bind,' please be more explicit.

Consider:

The priest walks by the beggar without looking. The beggar calls up to him, "Matthew 25!" Matthew is not the priest's name, but still he stops, decides, and then gives the beggar his sack lunch.

The practical use of moral and ethical systems is as a guide to decision-making. Moral systems typically specialize in guiding decisions in a way that increases the positive benefit to others and decreases the negative cost to them. Moral and ethical systems are only relevant insofar as they are used to make decisions.

So, since we don't live in Euclidean space, Euclidean geometry is merely a 'curiosity.'

The space we live in corresponds well (obviously, not perfectly) with Euclidean space, and so it receives significant attention from physicists. The space we live in doesn't correspond well with the Poincaré disk model of hyperbolic geometry; the most likely place a non-mathematician has seen it is in M.C. Escher's work.

Is it, then, subjective?

Models of Euclidean geometry exist in minds, and one person's model of it may not agree with another's, but there is currently an established definition (i.e. blueprint for a model), and not using that definition correctly makes conversation about that topic difficult with humans who use the established definition. Comparing it to beauty, models of beauty exist in minds, and those models can differ, but there is not an established blueprint to construct a model of beauty.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T20:08:48.688Z · LW(p) · GW(p)

Your choice of logical system and your belief in an old Earth reside in your mind, and that you believe them only provides me rather weak evidence that they are beliefs I should hold.

I didn't ask you whether my believing them gives you evidence to think they're objectively true. I asked whether other people not believing them gives me evidence to think they're merely subjective. If not, then you can't use disagreement over 'fairness,' on its own, to demonstrate the subjectivity of fairness.

It is not clear to me that attaching the label of "subjective" or "objective" would materially improve my description.

So is it your view that "The Earth is over 6,000 years old" is neither subjective nor objective? Why not say, then, that claims about fairness are neither subjective nor objective?

When I write the word "generally," I mean it as a qualifier that acknowledges many objections could be raised that do not materially alter the point.

That's great for you, but the fact that you believe you could meet all the objections to your assertion (if you didn't, why would you be asserting it?) doesn't give me much reason to believe what you're saying. Generally.

Generally, at restaurants, you and your roommate are not required to eat the same meal, and the effects of, say, the unpleasant-to-her smell of your meal are smaller than the effects of the pleasant-to-her taste of her meal.

Just like the effects of an unfair situation I'm in (generally!) impact me more than they impact my roommate. Again, it's not clear what work you're trying to do, either when you note similarities between deliciousness and fairness or when you note dissimilarities. You've provided us with no principled way to treat 'fairness' any differently from the way we treat the old-Earth hypothesis or 0.999... = 1; and you've given us no principled way to sort the 'subjective' claims from the 'objective' ones, nor explained why it matters which of the categories we put moral concepts under.

The priest walks by the beggar without looking. The beggar calls up to him, "Matthew 25!" Matthew is not the priest's name, but still he stops, decides, and then gives the beggar his sack lunch.

So your claim is that fairness is subjective because it has an impact on people's decision-making? Don't objective things have an impact on people's decision-making too?

Models of Euclidean geometry exist in minds, and one person's model of it may not agree with another's, but there is currently an established definition (i.e. blueprint for a model), and not using that definition correctly makes conversation about that topic difficult with humans who use the established definition.

You didn't answer my question. Is Euclidean geometry subjective? I'm just trying to get you to make your criticism of Eliezer explicit, but every time we come close to you giving a taboo'd version of what troubles you about treating fairness in the same way we treat geometry, you shift to a different topic without explaining its relevance to the 'subjectivity!' charge.

Comparing it to beauty, models of beauty exist in minds, and those models can differ, but there is not an established blueprint to construct a model of beauty.

What does it take for a blueprint to be "established"? Does a certain percentage of the human race have to agree on the same definition of the term? Or a certain percentage of academia? Or does an organization or individual just have to explicitly note their definition? Are you suggesting that when you criticize Eliezer for treating 'subjective' fairness as though it were 'objective,' all you're really saying is that Eliezer is treating ambiguous fairness as though it were unambiguous, i.e., eliding the multiple semantic candidates? In that case 'fairness,' defined in Eliezer's sense, would only be as subjective as, say, the term "causal model" is, since "cause" and "model" can likewise mean different things in different contexts.

Replies from: Vaniver
comment by Vaniver · 2012-12-11T20:40:48.060Z · LW(p) · GW(p)

I'm just trying to get you to make your criticism of Eliezer explicit, but every time we come close to you giving a taboo'd version of what troubles you about treating fairness in the same way we treat geometry, you shift to a different topic without explaining its relevance to the 'subjectivity!' charge.

That is because it's not clear to me that "subjective" evokes the same concept for each of us, and so I'd rather taboo subjective and talk about object-level differences than classifications.

Are you suggesting that when you criticize Eliezer for treating 'subjective' fairness as though it were 'objective,' all you're really saying is that Eliezer is treating ambiguous fairness as though it were unambiguous, i.e., eliding the multiple semantic candidates?

That looks like my objection.

I want to make clear that the linguistic claim is amplified by the relevance to decision-making. It is of little relevance to me whether others classify my actions as blegg or not; it is of great relevance to me whether others classify my actions as fair or not, and the same is true for enough people that putting forth an algorithm and stating "fairness points to this algorithm" is regarded as a power grab. Saying "geometry with the parallel postulate is Euclidean" is not regarded as a power grab because the axioms and their consequences are useful or useless independent of the label ascribed to them. That communication is simply text; with labeling something 'fair,' there is the decision-making subtext "you should do this."

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T21:32:17.891Z · LW(p) · GW(p)

Originally, it sounded like you were making one of these claims:

  • (a) There is no semantic candidate for the word 'fairness.' It's just noise attempting to goad people into behaving a certain way.
  • (b) No semantic candidate for 'fairness' can be rendered logically precise.
  • (c) Even if we precisify a candidate meaning for 'fairness,' it won't really be a Logical Idea, because the subject matter of morality is intrinsically less logicy than the subject matter of mathematics.
  • (d) Metaphysically speaking, there are quantitative universals or Forms in Plato's Heaven, but there are no moral universals or Forms.
  • (e) No semantic candidate for 'fairness' can avoid including an argument place for a judge-of-fairness.
  • (f) No logically precise semantic candidate for 'fairness' can avoid including such an argument place.

All of these claims are implausible. But now it sounds like you're instead just claiming: (g) There are multiple semantic candidates for 'fairness,' and I'm not totally clear on which one Eliezer is talking about. So I'd appreciate it if he were a bit clearer about what he means.

If (g) is all you meant to argue this whole time, then we're in agreement. Specificity is of course a virtue.

Saying "geometry with the parallel postulate is Euclidean" is not regarded a power grab because the axioms and their consequences are useful or useless independent of the label ascribed to them.

Mathematical definitions are a power grab just as moral definitions are; the only difference is that people care more about the moral power-grabs than about the mathematical ones. Mathematical authorities assert their dominance, assert their right to participate in establishing General Mathematical Practice regarding definitions, inference rules, etc., every time they endorse one usage as opposed to another. It's only because their authority goes relatively unchallenged that we don't see foundational disputes over mathematical definitions as often as we see foundational disputes over moral definitions. Each constrains practice, after all.

Replies from: Wei_Dai, HalMorris, Vaniver
comment by Wei Dai (Wei_Dai) · 2012-12-26T23:12:08.654Z · LW(p) · GW(p)

(c) Even if we precisify a candidate meaning for 'fairness,' it won't really be a Logical Idea, because the subject matter of morality is intrinsically less logicy than the subject matter of mathematics. [...] All of these claims are implausible

This inspired me to write Morality Isn't Logical, and I'd be interested to know what you think.

comment by HalMorris · 2012-12-27T16:03:07.750Z · LW(p) · GW(p)

Saying "geometry with the parallel postulate is Euclidean" is not regarded a power grab because the axioms and their consequences are useful or useless independent of the label ascribed to them

Mathematical definitions are a power grab just as moral definitions are; the only difference is that people care more about the moral power-grabs than about the mathematical ones. Mathematical authorities assert their dominance, assert their right to participate in establishing General Mathematical Practice regarding definitions, inference rules, etc., every time they endorse one usage as opposed to another. It's only because their authority goes relatively unchallenged that we don't see foundational disputes over mathematical definitions as often as we see foundational disputes over moral definitions. Each constrains practice, after all.

Very few mathematical definitions are about General Mathematical Practice. Euclidean and Riemannian (or projective) geometry are in perfect peaceful coexistence, and in general, new forms of mathematics expand the territory rather than fight over an existing patch of territory.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-27T19:10:16.507Z · LW(p) · GW(p)

Very few mathematical definitions are about General Mathematical Practice. Euclidean and Riemannian (or projective) geometry are in perfect peaceful coexistence,

I think you underestimate the generality of my claim. (Perhaps the phrase 'power grab' is poorly chosen.) Relatively egalitarian power grabs are still power grabs, inasmuch as they use the weight of consensus and tradition to marginalize non-egalitarian views. There is no proof that both geometries are equally 'true' or 'correct' or 'legitimate' or 'valid;' so we could equally well have decided that only Euclidean geometry is correct; or that only projective geometry is; or that neither is. There is no proof that one of the latter options is superior; but nor is there a proof that one is inferior. It's a pragmatic and/or arbitrary choice, and settling such decisions depends on an initially minority viewpoint coming to exert its consensus-establishing authority over majority practice. Egalitarianism is about General Mathematical Practice. (And sometimes it's very clearly sociological, not logical, in character; for instance, the desire to treat conventional and intuitionistic systems as equally correct but semantically disjoint is a fine way to calm down human disagreement, but as a form of mathematical realism it is unmotivated, and in fact leads to paradox.)

in general, new forms of mathematics expand the territory rather than fight over an existing patch of territory.

That depends a great deal on how coarse-grainedly you instantiate "forms". Mathematical results get overturned all the time; not just in the form of entire fields being rejected or revised from the ground up (like the infinitesimal calculus), and not just in the discovery of internal errors in proofs past, but in the rejection of definitions and axioms for a given discourse.

Replies from: HalMorris
comment by HalMorris · 2012-12-28T01:34:29.007Z · LW(p) · GW(p)

Mathematical results get overturned all the time; not just in the form of entire fields being rejected or revised from the ground up (like the infinitesimal calculus), and not just in the discovery of internal errors in proofs past, but in the rejection of definitions and axioms for a given discourse.

I'm just a 2-year math Ph.D. program drop-out from 35 years ago, but I got quite a different take on it. As I experienced it, most mathematics is like "Let X be a G-space, where a G-space is defined as having [some list of properties]", and then you might spend years proving whatever those axioms imply, and defining umpteen specializations of a G-space, like a G2-space which has [some additional properties], and teasing out and proving the consequences of having those axioms. At no point do you say these axioms are true - that's an older, non-mathematical use of the word "axiom" as something (supposedly) self-evidently true. You just say: if these axioms are true for X, then this and this and this follows.

Mathematicians simply don't say that the axioms of Euclidean geometry are true. It is all about, "if an object (which is a purely mental construct) has these properties, then it must have these other properties."

By the "infinitesimal calculus", being overturned, I assume you mean dropping the use of infinitesimals in favor of delta-epsilon type definitions in calculus/real analysis, it's not such a good illustration that revision from the ground up happens all the time, since really, that goes back to the late 19th century, and I really don't think such things do happen all the time though another big redefining project happened in the early 20th century.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-28T12:05:56.144Z · LW(p) · GW(p)

another big redefining project happened in the early 20th century

ZFC set theory? Peano arithmetic?

comment by Vaniver · 2012-12-11T23:31:52.422Z · LW(p) · GW(p)

If (g) is all you meant to argue this whole time, then we're in agreement. Specificity is of course a virtue.

I don't think (g) is quite right. It is clear to me which candidate Eliezer is putting forward (in this case, at least): splitting windfalls equally by weight.

I think the closest of the ones you suggest is (e). Trying to put my view of my claim in similar terms to your other options, I think I would go with something like "We can create logically precise candidates for fairness, but this leaves undone the work of making those candidates relevant to decision-makers," with the motivation that the reason to have moral systems / concepts like 'fairness' is because they are relevant to decision-makers.

That is, we can imagine numbers being 'prime' without a mathematician looking at them and judging them prime, but we should not imagine piles of pebbles occurring in prime numbers without some force that shifts pebbles based on their pile size.
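
To put the pebbles-and-primes point in rough computational form (a minimal Python sketch, purely illustrative and not from the original comment): primality is a logical predicate we can evaluate over any pile size without anything in physics caring, whereas a world in which piles reliably come out prime would need an extra process that actually moves pebbles.

```python
def is_prime(n: int) -> bool:
    """Pure logical predicate; evaluating it involves no physical force."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# Some piles just happen to have prime sizes; nothing needs to enforce that.
piles = [3, 4, 7, 12]
print([is_prime(p) for p in piles])  # [True, False, True, False]

def prime_enforcing_force(pile: int) -> int:
    """Hypothetical physical process that would be needed if *most or all*
    piles were to end up prime: it inspects the pile size and removes
    pebbles until the predicate holds."""
    while pile > 2 and not is_prime(pile):
        pile -= 1
    return pile

print([prime_enforcing_force(p) for p in piles])  # [3, 3, 7, 11]
```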

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T23:49:39.654Z · LW(p) · GW(p)

"We can create logically precise candidates for fairness, but this leaves undone the work of making those candidates relevant to decision-makers,"

(a) Do you think Eliezer is trying to make his terms 'relevant to decision-makers' in the requisite sense?

(b) Why would adding an argument place for 'the person judging the situation as fair' help make fairness more relevant to decision-makers?

we can imagine numbers being 'prime' without a mathematician looking at them and judging them prime, but we should not imagine piles of pebbles occurring in prime numbers without some force that shifts pebbles based on their pile size.

I don't believe in a fundamental physical force that calculates how many pebbles are in a pile, and adds or subtracts a pebble based specifically on that fact. But I do believe that pebbles can occur in piles of 3, and that 3 is a prime number. Similarly, I don't believe in a magical Moral Force, but I do believe that people care about equitable distributions of resources, and that 'fairness' is a perfectly good word for picking out that property we care about. I still don't see any reason to add an argument place; and if there were a need for a second argument place, I still don't see why an analogous argument wouldn't force us to add a third argument to 'beautiful,' so that some third party can judge whether another person is perceiving something as beautiful. (Indeed, if we took this requirement seriously, it would produce an infinite regress, making no language expressible.)

Replies from: Vaniver
comment by Vaniver · 2012-12-12T00:37:03.822Z · LW(p) · GW(p)

Why would adding an argument place for 'the person judging the situation as fair' help make fairness more relevant to decision-makers?

Do you see why a 2-place beauty would be more relevant than a 1-place beauty?

I don't believe in a fundamental physical force that calculates how many pebbles are in a pile, and adds or subtracts a pebble based specifically on that fact. But I do believe that pebbles can occur in piles of 3, and that 3 is a prime number.

I was unclear; I didn't mean "that some piles will have prime membership" but that "most or all piles of pebbles will have prime membership."

I do believe that people care about equitable distributions of resources

Generally?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-12T00:52:29.227Z · LW(p) · GW(p)

Do you see why a 2-place beauty would be more relevant than a 1-place beauty?

Relevant to what?

I would have no objection to a one-place beautyₐ, where 'beautyₐ' is an exhaustively physically specifiable idea like 'producing feelings of net aesthetic pleasure when encountered by most human beings'. I would also have no objection to a two-place beauty₂, where 'beauty₂' means 'aesthetically appealing to some person X.' Neither one of these is more logically legitimate than the other, and neither one is less logically legitimate than the other. The only reason we prefer beauty₂ over beautyₐ is that it's (a) more user-friendly to calculate, or that it's (b) a more plausible candidate for what ordinary English language users mean when they say the word 'beauty.'

What I want to see is an argument for precisely what the analogous property 'fairness₂' would look like, and why this is a more useful or more semantically plausible candidate for our word 'fairness' than a one-place 'fairnessₐ' would be. Otherwise your argument will just as easily make 'plus' three-place ('addition-according-to-someone') or 'bird' two-place ('being-a-bird-according-to-someone'). This is not only impractical, but dangerous, since it confuses us into thinking that what we want when we speak of 'objectivity' is not specificity, but merely making explicit reference to some subject. As though mathematics would become more 'objective,' and not less, if we were to relativize it to a specific mathematician or community of mathematicians.
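
A rough sketch of the two parses as function signatures (Python, my own illustration; `net_human_aesthetic_pleasure`, `aesthetic_response`, and `approves_of` are hypothetical stand-ins, not anything defined in this thread):

```python
from typing import Sequence

def net_human_aesthetic_pleasure(obj) -> float:
    """Stand-in for the exhaustively physical specification of how most
    human brains respond aesthetically to obj (hypothetical stub)."""
    return 0.0  # placeholder value

# One-place beauty-a: the reference to typical humans is baked into the
# definition, so the predicate itself takes only the object being judged.
def beauty_a(obj) -> bool:
    return net_human_aesthetic_pleasure(obj) > 0

# Two-place beauty-2: the judge is an explicit argument.
def beauty_2(obj, judge) -> bool:
    return judge.aesthetic_response(obj) > 0

# The parallel question for 'fairness': a one-place predicate over the
# physical situation (here, the simple equal-split pie case)...
def fairness_a(shares: Sequence[float]) -> bool:
    return max(shares) == min(shares)

# ...versus a two-place version with an added argument place for a
# judge-of-fairness.
def fairness_2(shares: Sequence[float], judge) -> bool:
    return judge.approves_of(shares)
```

Neither signature is more logically legitimate than the other; the question in the text is which one better matches what the word is doing.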

"most or all piles of pebbles will have prime membership."

So is your worry that having a one-place 'fairness' predicate will make people think that most situations are fair, or that there's a physically real fundamental law of karma promoting fairness?

Generally?

In general, yes, generally.

Replies from: Vaniver
comment by Vaniver · 2012-12-12T15:58:43.192Z · LW(p) · GW(p)

Relevant to what?

To decision-makers.

The only reason we prefer beauty₂ over beautyₐ

I think I'm going to refer you to this post again. Having a beautyₐ which implicitly rather than explicitly restricts itself to humans runs the risk of being applied where it's not applicable. Precision in language aids precision in thought.

I think I'm also going to bow out of the conversation at this point; we have both typed a lot and it's not clear that much communication has gone on, to the point that I don't expect extending this thread is a good use of either of our times.

comment by A1987dM (army1987) · 2012-12-12T13:27:41.993Z · LW(p) · GW(p)

'Fairness' does not grammatically need an 'according to whom?' argument place

Grammatically, neither does “beautiful”. “Alice is beautiful” is a perfectly grammatical English sentence.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-12T19:25:28.587Z · LW(p) · GW(p)

Yes. Clearly I was being unclear. Just as saying "Eating broccoli is good" I think assumes a tacit answer to "Good for whom?" and/or "Good for what?", saying "Hamburgers are delicious" assumes a tacit "Delicious to whom?", even if the answer is "To everyone!". I have a hard time understanding what it means to visualize a possible world where everything is delicious and there are no organisms or sentients. I think of 'beauty' the same way, but perhaps not everyone does; and if some people think of 'fairness' as intrinsically -- because of the concept itself, and not just because of our metaphysical commitments or dialectical goals -- demanding an implicit argument place for a 'judge of fairness,' I'd like to hear more about why. Or is this just a metaphysical argument, not a conceptual one?

comment by Peterdjones · 2012-12-10T13:15:48.411Z · LW(p) · GW(p)

My resolution of Euthyphro is "the moral is the practical."

How do you avoid prudent predation?

Replies from: dspeyer, Vaniver
comment by dspeyer · 2012-12-10T21:25:01.008Z · LW(p) · GW(p)

I think the author of that piece needs to learn the concept of precommitment. Precommitting to one-box is not at all the same as believing that one-boxing is the dominant strategy in the general Newcomb problem. Likewise, precommitting not to engage in prudent predation is not a matter of holding a counterfactual belief, but of taking a positive-expected-utility action.

comment by Vaniver · 2012-12-10T20:25:44.739Z · LW(p) · GW(p)

Are there moral systems used by humans that avoid prudent predation, and are not outcompeted by moral systems used by humans that make use of prudent predation?

I will note that the type of predation that is prudent has varied significantly over time, and correspondingly, so have moral intuitions. Further altering the structure of society will again alter the sort of predation that is prudent, and so one can seek to restructure society so disliked behavior is less prudent and liked behavior is more prudent.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T20:52:29.837Z · LW(p) · GW(p)

Are there moral systems used by humans that avoid prudent predation, and are not outcompeted by moral systems used by humans that make use of prudent predation?

I find it hard to make sense of that. I don't think people go in for morality for selfish gain, and the very idea may be incoherent.

I will note that the type of predation that is prudent has varied significantly over time, and correspondingly, so have moral intuitions.

Maybe. I don't see what your point is. If the moral is not the practical, and if PP is wrong, that would not imply morality is timeless, and vice versa.

Replies from: Vaniver
comment by Vaniver · 2012-12-10T21:11:11.688Z · LW(p) · GW(p)

I find it hard to make sense of that. I don't think people go in for morality for selfish gain, and the very idea may be incoherent.

The claim is that moral intuitions exist because they were selected for, and they must have been selected for because they increased reproductive fitness. Similarly, we should expect moral behavior to the degree that morality is more rewarding than immorality. (The picture is muddied by there being both genetic and memetic evolution, but the basic idea survives.)

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T21:22:27.994Z · LW(p) · GW(p)

The claim is that moral intuitions exist because they were selected for, and they must have been selected for because they increased reproductive fitness.

But morality isn't just moral intuitions. It includes "eat fish on Friday".

Similarly, we should expect moral behavior to the degree that morality is more rewarding than immorality.

That doesn't follow. Fitness-enhancing and gene-spreading behaviour don't have to reward the organism concerned. What's the reward for self-sacrifice?

The picture is muddied by there being both genetic and memetic evolution,

that's a considerable understatement.

Replies from: Vaniver
comment by Vaniver · 2012-12-10T21:35:21.440Z · LW(p) · GW(p)

But morality isn't just moral intuitions. It includes "eat fish on Friday".

Sure. We should expect such rules to be followed to the degree that they are prudent.

What's the reward for self-sacrifice?

There are several; kin selection, reciprocal altruism, and so on. In some cases, self-sacrifice is the result of a parasitic relationship. (Kin selection appears to have a memetic analog as well, but I'm not familiar with work that develops that concept rigorously, and distinguishes it from normal alliance behaviors; it might just be a subset of that.)

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T21:54:59.682Z · LW(p) · GW(p)

Sure. We should expect such rules to be followed to the degree that they are prudent.

Again, I have no idea what you mean. Morality does not predict self-centered prudence, since it enjoins self-sacrifice, and evolution doesn't predict self-centered prudence in all cases. It is not selfishly prudent for bees to defend their colony, or for male praying mantises to mate.

There are several; kin selection, reciprocal altruism, and so on.

Rewards for whom?

Replies from: Strange7
comment by Strange7 · 2012-12-14T03:16:20.370Z · LW(p) · GW(p)

If you pass on the idea that self-sacrifice is virtuous, in a persuasive sort of way (such as by believing it yourself), you're marginally more likely to enjoy the benefits of having someone willing to sacrifice their own interests nearby when you particularly need such a person. Of course, sometimes that meme kills you. Some people are born with sickle-cell anemia and never get the opportunity to benefit from malaria resistance; evolution doesn't play nice.

comment by nshepperd · 2012-12-10T15:28:36.620Z · LW(p) · GW(p)

You sure it's not just executing an adaptation? Why?

It is exactly executing an adaptation. No "just" about it though. An AI programmed to maximise paperclips is motivated by increasing the number of paperclips. It's executing its program.

Replies from: Vaniver
comment by Vaniver · 2012-12-10T21:23:24.331Z · LW(p) · GW(p)

I had this post in mind. I see no reason to link behavior that 'seems moral' to the internal sensation of motivation by those terminal values, and if we're not talking about introspection about decision-making, then why are we using the word motivation?

This post seems to be discussing a particular brand of moral reasoning - basically, deliberative utilitarian judgments - which seems like a rather incomplete picture of human morality as a whole, and it seems like it's just sweeping under the rug the problem of where values come from in the first place. I should make clear that first he has to describe what values are before he can describe where values come from, but if it's an incomplete description of values, that can cause problems down the line.

Replies from: SebastianGarren
comment by SebastianGarren · 2012-12-11T22:53:31.181Z · LW(p) · GW(p)

Vaniver, I really appreciate the rigor you are bringing to this discussion. The OP struck me as very deliberative-utilitarian as well. If we want to account for (or propagate) a shared human morality, then certainly it must be rational. But it seems to me that the long history of searching for a rational-basis-for-morality clearly points away from the well-trodden ground of this utilitarianism.

From Plato and Aristotle to the Enlightenment until Nietzsche (especially to the present day), it seems the project of accounting for morality as though it were an inherent attribute of humanity, expressible through axioms and predetermined by the universe, is a bunk and perhaps even irrational project. Morality, I think, can only be shared if you have a shared goal for winning life.

A complete description of values requires a discussion on what makes life worth living and what is a good life, or more simply goals. Without the tools to determine and rationalize what are good goals for me, I will never be able to make a map of morality and choose the values and virtues relevant to me on my quest.

Does that jive?

Replies from: Vaniver
comment by Vaniver · 2012-12-11T23:29:36.688Z · LW(p) · GW(p)

Yes.

I would note there is often a meaningful difference between individual and social virtues. You and I could share expectations about only our conduct when we interact and not the other's private conduct. It is easy to imagine people spending more effort on inducing their neighbors to keep their lawns pretty than their dishes pretty, for example.

comment by Manfred · 2012-12-10T15:24:20.675Z · LW(p) · GW(p)

Yay, I think we've finished the prerequisites to prerequisites, and started the prerequisites!

comment by tristanhaze · 2012-12-10T06:02:33.729Z · LW(p) · GW(p)

Stimulating as always! I have a criticism to make of the use made of the term 'rigid designation'.

Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself [...]

What philosophers of language ordinarily mean by calling a term a rigid designator is not that, considered purely syntactically, it intrinsically refers to anything. The property of being a rigid designator is something which can be possessed by an expression in use in a particular language-system. The distinction is between expressions-in-use whose reference we let vary across counterfactual scenarios (or 'possible worlds'), e.g. 'The first person to climb Everest', and those whose reference remains stable, e.g. 'George Washington', 'The sum of two and two'.

There is some controversy over how to apply the rigid/non-rigid distinction to general terms like 'fair' (or predicates like 'is fair') - cf. Scott Soames' book Beyond Rigidity - but I think the natural thing to say is that 'is fair' is rigid, since it is used to attribute the same property across counterfactual scenarios, in contrast with a predicate like 'possesses my favourite property'.

Replies from: crazy88, Eliezer_Yudkowsky
comment by crazy88 · 2012-12-10T07:19:01.764Z · LW(p) · GW(p)

Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself - if somebody thinks of Euclidean geometry when you utter the sound "num-berz" they're not doing anything false, they're associating the sound to a different logical thingy. It's not about words with intrinsically rigid referential power, it's that the words are window dressing on the underlying entities.

I just wanted to agree with Tristanhaze here that this usage strikes me as non-standard. I want to put this in my own words so that Tristanhaze/Eliezer/others can correct me if I've got the wrong end of the stick.

If something is a rigid designator it means that it refers to the same thing in all possible worlds. To say it's non-rigid is to say it refers to different things in some possible worlds to others. This has nothing to do with whether different language users that use the phrase must always be referring to the same thing. So George Washington may be a rigid designator in that it refers to the same person in all possible worlds (bracketing issues of transworld identity) but that doesn't mean that in all possible worlds that person is called George Washington or that in all possible worlds people who use the name George Washington must be referring to this person or even that in the actual world all people who use the name George Washington must be referring to this person.

To say "water" is a rigid designator is to say that whatever possible world I am talking about, I am picking out the same thing when I use the word water (in a way that I wouldn't be when I say, "the tallest person in the world" - this would pick out different things in different worlds). But it doesn't say anything about whether I mean the same thing as other language users in this or other possible worlds.

ETA: So the relevance to the quoted section is that rigid designators aren't about whether someone that thinks of Euclidean geometry when you say "numbers" is right or wrong - it's about whether whatever they associate with that word is the same thing in all possible worlds (or whether it's a different thing in some worlds).

ETA 2: I take it that Eliezer's paragraph here is in response to comments like these. I'm in a bit of a rush and need to think about it some more but I think Richard may be making a different point here to the one Eliezer's making (on my reading). I think Richard is saying that what is "right" is rigidly determined by my current (idealised) desires - so in a possible world where I desired to murder, murder would still be wrong because "right" is a rigid designator (that is, right from the perspective of my language, a different language user - like the me that desires murder - might still use "right" to refer to something else according to which murder is right. See the point about George Washington being able to be rigid even if people in other possible worlds use that name to name someone else). On the other hand, my reading of Eliezer was that he was taking the claim that "right" (or "fair") is a rigid designator to mean something about the way different language users use the word "fair". Eliezer seemed to be suggesting that rigid designation implied that words intrinsically mean certain things and hence that rigid designation implies that if someone uses a word in a different way they are wrong (using numbers to refer to geometry). I could have misunderstood either of these two comments but if I haven't then it seems to me that Eliezer is using rigid designator in a non-standard way.

Replies from: RichardChappell, Qiaochu_Yuan
comment by RichardChappell · 2012-12-10T17:37:36.512Z · LW(p) · GW(p)

Correct. Eliezer has misunderstood rigid designation here.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T19:34:46.825Z · LW(p) · GW(p)

So does that mean this:

I think Richard is saying that what is "right" is rigidly determined by my current (idealised) desires - so in a possible world where I desired to murder, murder would still be wrong

...is your real claim here, independent of any points about language use?

If so, I think I would just straightforwardly modify my paragraph above to say that my statements are not trying to talk about language use or human brains / desires, albeit that desire is both an optimization target of, and a quotation of, morality.

Replies from: RichardChappell
comment by RichardChappell · 2012-12-10T20:25:06.790Z · LW(p) · GW(p)

I'm not sure what you have in mind here. We need to distinguish (i) the referent of a concept from (ii) its reference-fixing "sense" or functional role. The way I understood your view, the reference-fixing story for moral terms involves our (idealized) desires. But the referent is "rigid" in the sense that it's picking out the content of our desires: the thing that actually fills the functional role, rather than the role-property itself.

Since our desires typically aren't themselves about our desires, it will turn out, on this story, that morality is not "about" desires. It's about "love, friendship," and all that jazz. But there's a story to be told about how our moral concepts came to pick out these particular worldly properties. And that's where desires come in (as I understand your view). Our moral concepts pick out these particular properties because they're the contents of our idealized desires. But that's not to say that therefore morality is "really" just about fulfilling any old desires. For that would be to neglect the part that rigid designation, and the distinction between reference and reference-fixing, plays in this story.

Does that capture your view? To further clarify: the point of appealing to "rigid designation" is just to explain how desires could play a reference-fixing role without being any part of the referent of moral talk (or what it is "about"). Isn't that what you're after? Or do you have some other reference-fixing story in mind?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T21:09:19.234Z · LW(p) · GW(p)

This all does sound good to me; but, is there a way to say the above while tabooing "reference" and avoiding talk of things "referring" to other things? Reference isn't ontologically basic, so what does it reduce to?

Basically, the main part that would worry me is a phrase like, "there's a story to be told about how our moral concepts came to pick out these particular worldly properties" which sounds on its face like, "There's a story to be told about how successorship came to pick out the natural numbers" whereas what I'd want to say is, "Of course, there's a story to be told about how moral concepts came to have the power to move us" or "There's a story to be told about how our brains came to reflect numbers".

comment by Qiaochu_Yuan · 2012-12-16T00:45:16.715Z · LW(p) · GW(p)

Can you give an example of a rigid designator (edit: that isn't purely mathematical / logical)? I don't understand how the concept is even coherent right now. "Issues of transworld identity" seem to be central and I don't know why you're sweeping them under the rug. More precisely, I do not understand how one goes about identifying objects in different possible worlds even in principle. I think that intuitions about this procedure are likely to be flawed because people do not consider possible worlds that are sufficiently different.

Replies from: crazy88
comment by crazy88 · 2012-12-16T06:43:47.915Z · LW(p) · GW(p)

Okay, so three things are worth clarifying up front. First, this isn't my area of expertise so anything I have to say about the matter should be taken with a pinch of salt. Second, this is a complex issue and really would require 2 or 3 sequences of material to properly outline so I wouldn't read too much into the fact that my brief comment doesn't present a substantive outline of the issue. Third, I have no settled views on the issues of rigid designators, nor am I trying to argue for a substantive position on the matter so I'm not deliberately sweeping anything under the rug (my aim was to distinguish Eliezer's use of the phrase rigid designator from the standard usage and doing so doesn't require discussion of transworld identity: Eliezer was using it to refer to issues relating to different people whereas philosophers use it to refer to issues relating to a single person - or at least that's the simplified story that captures the crucial idea).

All that said, I'll try to answer your question. First, it might help to think of rigid designators as cases where the thing to be identified isn't simply to be identified with its broad role in the world. So "the inventor of bifocals" is the person that plays a certain role in the world - the role of inventing bifocals. So "the inventor of bifocals" is not a rigid designator. So the heuristic for identifying rigid designators is that they can't just be identified by their role in the world.

Given this, what are some examples of rigid designators? Well, the answer to this question will depend on who you ask. A lot of people, following Putnam, would take "water" (and other natural kind terms) to be a rigid designator. On this view, "water" rigidly refers to H2O, regardless of whether H2O plays the "water" role in some other possible world. So imagine a possible world where some other substance, XYZ, falls from the sky, slakes thirst, fills rivers and so on (that is, XYZ fills the water role in this possible world). On the rigid designation view, XYZ would not be water. So there's one example of a rigid designator (on one view).

Kripke (in his book Naming and Necessity) defends the view that names are rigid designators - so the name "Thomas Jefferson" refers to the same person in all possible worlds (this is where issues of transworld identity become relevant). This is meant to be contrasted with a view according to which the name "John Lennon" refers to the nearest and near-enough realiser of a certain description ("lead singer of the Beatles", etc.). So on this view, there are possible worlds where John Lennon is not the lead singer of the Beatles, even though the Beatles formed and had a singer that met many of the other descriptive features of John (born in the same town and so on).

Plausibly, what you take to be a rigid designator will depend on what you take possible worlds to be and what views you have on transworld identity. Note that your comment that it seems difficult to imagine how you could go about identifying objects in different possible worlds even in principle makes a very strong assumption about the metaphysics of possible worlds. For example, this difficulty would be most noticeable if possible worlds were concrete things that were causally distinct from us (as Lewis would hold). One major challenge to Lewis's view is just this challenge. However, very few philosophers actually agree with Lewis.

So what are some other views? Well Kripke thinks that we simply stipulate possible worlds (as I said, this isn't my area so I'm not entirely clear what he takes possible worlds to be - maximally consistent sets of sentences, perhaps - if anyone knows, I'd love to have this point clarified). That is, we say, "consider the possible world where Bill Gates won the presidency". As Kripke doesn't hold that possible worlds are real concrete entities, this stipulation isn't necessarily problematic. On Kripke's view, then, the problem of transworld identity is easy to solve.

More precisely, I do not understand how one goes about identifying objects in different possible worlds even in principle. I think that intuitions about this procedure are likely to be flawed because people do not consider possible worlds that are sufficiently different.

I don't have the time to go into more detail but it's worth noting that your comment about intuition is an important point depending on your view of what possible worlds are. However, there's definitely an overarching challenge to views according to which we should rely on our intuitions to determine what is possible.

Hope that helps clarify.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-16T07:43:15.096Z · LW(p) · GW(p)

Thank you for the clarification. I agree that the question of what a possible world is is an important one, but the answer seems obvious to me: possible worlds are things that live inside the minds of agents (e.g. humans).

Water is one of the examples I considered and found incoherent. Once you start considering possible worlds with different laws of physics, it's extremely unclear to me in what sense you can identify types of particles in one world with types of particles in another world. I could imagine doing this by making intuitive identifications step by step along "paths" in the space of possible worlds, but then it's unclear to me how you could guarantee that the identifications you get this way are independent of the choice of path (this idea is motivated by a basic phenomenon in algebraic topology and complex analysis).

Replies from: crazy88
comment by crazy88 · 2012-12-16T07:59:32.043Z · LW(p) · GW(p)

As I said, these are complex issues.

possible worlds are things that live inside the minds of agents (e.g. humans).

Yes, but almost everyone agrees with this (or at least, almost all views on possible worlds can be interpreted this way even if they can also be interpreted as claims about the existence of abstract - non-concrete - objects). There are a variety of different things that possible worlds can be even given the assumption that they exist in people's heads (almost all the disagreement about what possible worlds are is disagreement within this category rather than between this category and something else).

Water is one of the examples I considered and found incoherent. Once you start considering possible worlds with different laws of physics, it's extremely unclear to me in what sense you can identify types of particles in one world with particles in another type of world.

Two things: first, the claim that "water" rigidly designates H2O doesn't imply that it must exist in all possible worlds - just that if "water" exists in a possible world then it is H2O. So if we can't identify the same particles in different worlds then this just means that water exists in almost no worlds (perhaps only in our own world).

However, the view that we can't identify the same particles in other worlds is a radical one and would be a strong sign that the account of possible worlds appealed to falls short (after all, possible worlds are supposed to be about what is possible and surely there are possibilities that revolve around the particles existing in our world - ie. surely it's possible that I now be holding a glass of H2O. If your account of possible worlds can't cope with this possibility it seems to not be a very useful account of possible worlds).

Further, how hard it is to identify sameness of particles across possible worlds will depend on how you take them to be "constructed" - if they are constructed by stipulation, ie. "consider the world where I am holding a glass of H2O" then it is very easy to get sameness of particles.

I'm not saying there's not room for your criticisms but for them to hold requires substantial metaphysical work showing why your account, and only your account, of possible worlds works and hence that your conclusions hold.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-16T08:34:18.730Z · LW(p) · GW(p)

Okay. I think what I'm actually trying to say is that what constitutes a rigid designator, among other things, seems to depend very strongly on the resolution at which you examine possible worlds.

When you say the phrase "imagine the possible world in which I have a glass of water in my hand" to a human, that human knows what you mean because by default humans only model the physical world at a resolution where it is easy to imagine making that intervention and only that intervention. When you say that phrase to an AI which is modeling the world at a much higher resolution, the AI does not know how to do what you ask because you haven't given it enough information. How did the glass of water get there? What happened to the air molecules that it displaced? Etc.

Replies from: crazy88, crazy88, MugaSofer
comment by crazy88 · 2012-12-16T21:44:55.601Z · LW(p) · GW(p)

Okay, perhaps I can have another go at this.

First thing to note, possible worlds can't be specified at different levels of detail. When we appear to do so, we are really specifying either partial possible worlds or sets of possible worlds. As rigid designation is a claim about worlds, it can't be relative to the level of detail utilised, as it only applies to things specified at one level of detail.

Second, you still seem to be treating possible worlds as concrete things rather than something in the head (or, at least, making substantive assumptions about possible worlds and relying on these to make claims about possible worlds generally). Let's take possible worlds to be sets of propositions and truth values. In this case there's no reason to find transworld identity puzzling. H2O exists in this world just if a relevant proposition is true (like, "I am holding a glass of H2O"). There's also no room for this transworld identity to be relative to a context. Whether these things are puzzling depends on your account of possible worlds and it seems like if your account of possible worlds can't account for transworld identity it can't do the work required of possible worlds and so it is open to the challenge that it should be abandoned in favour of some other account.

Third, it's important to distinguish questions about the way worlds are from questions about how they can be specified. It's an interesting question how we should specify individual possible worlds and another interesting question whether we often do so or whether we normally specify sets of possible worlds instead. However, difficulties with specification do not undermine the concept of a rigid designator.

Fourth, even if it were a relative matter whether H2O exists in a world this wouldn't undermine the concept of rigid designation. Rigid designation would simply imply that if this were the case then it would also be a relative matter whether water existed in that world.

The summary of what I'm trying to get at is the following: you have raised concerns for practical issues (how we specify worlds) and epistemic issues (how we know what's in worlds) but these aren't really relevant to the issue of rigid designation. So, for example, I don't think your claim that:

I think what I'm actually trying to say is that what constitutes a rigid designator, among other things, seems to depend very strongly on the resolution at which you examine possible worlds.

Follows from your argument (for the reasons I've outlined above, recap: from the fact that humans often specify sets of worlds rather than worlds nothing about rigid designation follows), even though I think your argument is an insightful one that raises interesting epistemic and practical issues for possible worlds.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-17T00:13:36.148Z · LW(p) · GW(p)

First thing to note, possible worlds can't be specified at different levels of detail.

Let's take possible worlds to be sets of propositions and truth values.

I think that these two desires are contradictory. Part of what I'm trying to say is that it's a highly nontrivial problem which propositions are even meaningful, let alone true, if you specify possible worlds at a sufficiently high level of detail. For example, at an extremely high level of detail, you might specify a possible world by specifying a set of laws of physics together with an initial condition for the universe. This kind of specification of a possible world doesn't automatically allow you to interpret intuitive referents like "I," so the meaning of a statement like "I am holding a glass of water" is extremely unclear.

you have raised concerns for practical issues (how we specify worlds) and epistemic issues (how we know what's in worlds) but these aren't really relevant to the issue of rigid designation.

How do you know what things are rigid designators if you neither know how to specify possible worlds nor how to determine what's in them?

Replies from: crazy88
comment by crazy88 · 2012-12-17T00:48:16.015Z · LW(p) · GW(p)

I think this conversation is now well into the territory of diminishing return so I'll leave it at that.

comment by crazy88 · 2012-12-16T09:26:07.746Z · LW(p) · GW(p)

I think this is getting past the point where I can usefully contribute further, though I will note that the vast literature on the topic has dealt with this sort of issue in detail (though I don't know it well enough to comment in detail).

Saying that, I'll make one final contribution and then leave it at that: I suspect that you've misunderstood the idea of a rigid designator if you think it depends on the resolution at which you examine possible worlds. To say that something is a rigid designator is to say that it refers to the same thing in all possible worlds (note that this is a fact about language use). So to say that "water" rigidly denotes H2O is just to say that when we use the word water to refer to something in some possible world, we are talking about H2O. Issues of how precisely the details of the world are filled in are not relevant to this issue (for example, it doesn't matter what happens to the air molecules - this has no impact on the issue of rigid designation).

The point you raise is an interesting one about how we specify possible worlds but not, to my knowledge, one that's relevant to rigid designation. But beyond that I don't think I have anything more of use to contribute (simply because we've exhausted my meagre knowledge of the topic)...

comment by MugaSofer · 2012-12-16T13:43:47.435Z · LW(p) · GW(p)

I assume the AI could concoct some plausible explanation(s).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T19:19:26.505Z · LW(p) · GW(p)

I'd like to say "sure" and then delete that paragraph, but then somebody else in the comments will say that my essay is just talking about a rigid-designation theory of morality. I mean, that's the comment I've gotten multiple times previously. Anyone got a good idea for resolving this?

Replies from: crazy88
comment by crazy88 · 2012-12-10T21:01:56.738Z · LW(p) · GW(p)

You may have resolved this now by talking to Richard (who knows more about this than me) but, in case you haven't, I'll have a shot at it.

First, the distinction: Richard is using rigid designation to talk about how a single person evaluates counterfactual scenarios, whereas you seem to be taking it as a comment about how different people use the same word.

Second, relevance: Richard's usage allow you to respond to an objection. The objection asks you to consider the counterfactual situation where you desire to murder people and says murder must now be right so the theory is extremely subjective. You can respond that "right" is a rigid designator so it is still right to not murder in this counterfactual situation (though your counterpart here will use the word "right" differently).

Suggestion: perhaps edit the paragraph so as to discuss either this objection and defence or outline why the rigid designator view so characterised is not your view.

comment by Wei Dai (Wei_Dai) · 2012-12-18T17:18:08.976Z · LW(p) · GW(p)

Here's my understanding of the post:

Consider two types of possible FAI designs. A Type 1 FAI has its values coded as a logical function from the time it's turned on, either a standard utility function, or all the information needed to run a simulation of a human that is eventually supposed to provide such a function, or something like that. A Type 2 FAI tries to learn its values from its inputs. For example it might be programmed to seek out a nearby human, scan their brain, and then try to extract a utility function from the scan, going to a controlled shutdown if it encounters any errors in this process. A human is more like a Type 1 FAI than a Type 2 FAI so it doesn't matter that there is no God/Stone Tablet out in the universe that we can extract morality from.
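
A highly schematic sketch of the two designs as described above (Python of my own devising; `scan_nearby_human` and `extract_utility_function` are hypothetical placeholders, not anything from the post):

```python
class Type1FAI:
    """Values are a fixed logical object, supplied before the AI is turned on."""
    def __init__(self, utility_function):
        self.utility = utility_function  # e.g. a hard-coded function of outcomes

    def choose(self, options):
        return max(options, key=self.utility)


class Type2FAI:
    """Values are learned from the physical environment after startup."""
    def __init__(self, scan_nearby_human, extract_utility_function):
        try:
            brain_scan = scan_nearby_human()                  # hypothetical sensing step
            self.utility = extract_utility_function(brain_scan)
        except Exception:
            self.controlled_shutdown()                        # error while loading values

    def choose(self, options):
        return max(options, key=self.utility)

    def controlled_shutdown(self):
        raise SystemExit("value extraction failed; shutting down safely")
```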

If this is fair, I have two objections:

  1. When humans are sufficiently young they are surely more like a Type 2 FAI than a Type 1 FAI. We're obviously not born with Frankena's list of terminal values. Maybe one can argue that an adult human is like a Type 2 FAI that has completed its value learning process and has "locked down" its utility function and won't change its values or go into shutdown even if it subsequently learns that the original brain scan was actually full of errors. But this is far from clear, to say the least.

  2. The difference between Type 1 FAI and Type 2 FAI (which is my understanding of the distinction the OP's trying to draw between "logical" and "physical") doesn't seem to get at the heart of what separates "morality" from "things that are not morality". If meta-ethics is supposed to make me less confused about morality, I just can't call this a "solution".

Replies from: Qiaochu_Yuan, homunq
comment by Qiaochu_Yuan · 2012-12-22T10:31:48.214Z · LW(p) · GW(p)

A Type 2 FAI gets its notion of what morality is based on properties of the physical universe, namely properties of humans in the physical universe. But even if counterfactually there were no humans in the physical universe, or even if counterfactually Omega modified the contents of all human brains in the physical universe so that they optimize for paperclips, that wouldn't change what actual-me means when actual-me says "I want an FAI to behave morally" even if it might change what counterfactual-me means when counterfactual-me says that.

comment by homunq · 2012-12-19T22:18:03.061Z · LW(p) · GW(p)

Individual humans are plausibly Type 2 FAIs. But societies of evolved, intelligent beings, operating as they do within the constraints of logic and evolution, are arguably more Type 1. In the terms of Eliezer's BabyKiller/HappyHappy fic, babykilling-justice is obviously a flawed copy of real-justice, and so the babykillers could (with difficulty) grow out of babykilling, and you could perhaps raise a young happyhappy to respect babykilling, but the happyhappy society as a whole could never grow into babykilling.

comment by torekp · 2012-12-16T00:15:29.382Z · LW(p) · GW(p)

Mainstream status:

EY's position seems to be highly similar to Frank Jackson's analytic descriptivism, which holds that

Frank Jackson and Philip Pettit (1995). According to their view of “analytic moral functionalism,” moral properties are reducible to “whatever plays their role in mature folk morality.” Jackson’s (1998) refinement of this position—which he calls “analytic descriptivism”—elaborates that the “mature folk” properties to which moral properties are reducible will be “descriptive predicates”

Which is a position neither popular nor particularly unpopular, but simply one of many contenders, as the mainstream goes.

Replies from: Eliezer_Yudkowsky, BerryPick6
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-16T21:05:49.775Z · LW(p) · GW(p)

I confirm (as I have previously) that Frank Jackson's work seems to me like the nearest known point in academic philosophy.

comment by BerryPick6 · 2012-12-16T00:29:36.984Z · LW(p) · GW(p)

This similarity has been noted and discussed before. See http://lesswrong.com/lw/fgz/empirical_claims_preference_claims_and_attitude/7u3s

comment by The_Duck · 2012-12-11T07:01:11.305Z · LW(p) · GW(p)

I think your discussions of metaethics might be improved by rigorously avoiding words like "fair," "right," "better," "moral," "good," etc. I like the idea that "fair" points to a logical algorithm whose properties we can discuss objectively, but when you insist on using the word "fair," and no other word, as your pointer to this algorithm, people inevitably get confused. It seems like you are insisting that words have objective meanings, or that your morality is universally compelling, or something. You can and do explicitly deny these, but when you continue to rely exclusively on the word "fair" as if there is only one concept that that word can possibly point to, it's not clear what your alternative is.

Whereas if you use different symbols as pointers to your algorithms, the message (as I understand it) becomes much clearer. Translate something like:

Fair is dividing up food equally. Now, is dividing up the pie equally objectively fair? Yes: someone who wants to divide up the pie differently is talking about something other than fairness. So the assertion "dividing the pie equally is fair" is objectively true.

into

Define XYZZY as the algorithm "divide up food equally." Now, is dividing up the pie equally objectively XYZZY? Of course it is: that's a direct logical consequence of how I just defined XYZZY. Someone who wants to divide the pie differently is using an algorithm that is not XYZZY. The assertion "dividing up the pie equally is XYZZY" is as objective as the assertion "S0+S0=SS0"--someone who rejects the latter is not doing Peano arithmetic. By the way, when I personally say the word "fair," I mean "XYZZY."

I suspect that wording things like this has less potential to trip people up: it's much easier to reason logically about XYZZY than about fairness, even if both words are supposed to be pointers to the same concept.
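To make the logical-pinpointing move concrete, here is a minimal sketch (the toy division rule, the `xyzzy` name, and the one-line "Zaire algorithm" are illustrative stand-ins for much more complicated algorithms, not anyone's actual proposal):

```python
from fractions import Fraction

def xyzzy(resource, claimants):
    """The algorithm being pointed at: split the resource equally."""
    share = Fraction(resource) / len(claimants)
    return {person: share for person in claimants}

def zaire_algorithm(resource, claimants):
    """A different algorithm: the first claimant takes everything."""
    return {person: (Fraction(resource) if i == 0 else Fraction(0))
            for i, person in enumerate(claimants)}

claimants = ["Zaire", "Yancy", "Xannon"]

# "Dividing the pie equally is XYZZY" holds by construction...
assert xyzzy(1, claimants) == {p: Fraction(1, 3) for p in claimants}

# ...and Zaire's proposal is simply a different logical function,
# whatever word-sound anyone attaches to it.
assert zaire_algorithm(1, claimants) != xyzzy(1, claimants)
```

Renaming `xyzzy` to `fair` changes nothing about which assertions hold; it only changes which word-sound is attached to the function.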

Replies from: Jay_Schweikert, None
comment by Jay_Schweikert · 2012-12-11T17:46:17.718Z · LW(p) · GW(p)

I don't think this works, because "fairness" is not defined as "divide up food equally" (or even "divide up resources equally"). It is the algorithm that, among other things, leads to dividing up the pie equally in the circumstances described in the original post -- i.e., "three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory." But once you start tampering with these conditions -- suppose that one of them owned the land, or one of them baked the pie, or two were well-fed and one was on the brink of starvation, etc. -- it would at least be controversial to say "duh, divide equally, that's just what 'fairness' means." And the fact of that controversy suggests most of us are using "fairness" to point to an algorithm more complicated than "divide up resources equally."

More generally, fairness -- like morality itself -- is complicated. There are basic shared intuitions, but there's no easy formula for popping out answers to "fair: yes or no?" in intricate scenarios. So there's actually quite a bit of value in using words like "fair," "right," "better," "moral," "good," etc., instead of more concrete, less controversial concepts like "equal division" -- if you can show that even those broad, complicated concepts can be derived from physics+logic, then it's that much more of an accomplishment, and that much more valuable for long-term rationalist/reductionist/transhumanist/friendly-ai-ist/whatever goals.

At least, that's how I understand this component of Eliezer's project, but I welcome correction if he or others think I'm misstating something.

Replies from: The_Duck
comment by The_Duck · 2012-12-11T22:30:56.704Z · LW(p) · GW(p)

I don't think this works, because "fairness" is not defined as "divide up food equally" (or even "divide up resources equally"). It is the algorithm that, among other things, leads to dividing up the pie equally in the circumstances described in the original post

Yes; I meant for the phrase "divide up food equally" to be shorthand for something more correct but less compact, like "a complicated algorithm whose rough outline includes parts like, '...When a group of people are dividing up resources, divide them according to the following weighted combination of need, ownership, equality, who discovered the resources first, ...'"

comment by [deleted] · 2012-12-11T17:59:41.732Z · LW(p) · GW(p)

I think your discussions of metaethics might be improved by rigorously avoiding words like "fair," "right," "better," "moral," "good," etc.

See lukeprog's Pluralistic Moral Reductionism.

comment by [deleted] · 2012-12-11T04:07:06.075Z · LW(p) · GW(p)

Great post! I agree with your analysis of moral semantics.

However, the question of moral ontology remains... do objective moral values exist? Is there anything I (or anyone) should do, independent of what I desire? With such a clear explanation of moral semantics at hand, I think the answer is an obvious and resounding no. Why would we even think that this is the case? One conclusion we can draw from this post is that telling an unfriendly AI that what it's doing is "wrong" won't affect its behavior. Because that which is "wrong" might be exactly that which is "moreclippy"! I feel that Eliezer probably agrees with me here, since I gained a lot of insight into the issue from reading Three Worlds Collide.

Asking why we value that which is "right" is a scientific question, with a scientific answer. Our values are what they are now, though, so, minus the semantics, doesn't morality just reduce to decision theory?

Replies from: selylindi, Peterdjones
comment by selylindi · 2012-12-13T16:54:23.841Z · LW(p) · GW(p)

However, the question of moral ontology remains...do objective moral values exist? Is there anything I (or anyone) should do, independent from what I desire?

Thanks for bringing up that point! You mentioned below your appreciation for desirism, which says inter alia that there are no intrinsic values independent of what agents desire. Nevertheless, I think there is another way of looking at it under desirism that is almost like saying that there are intrinsic values.

Pose the question this way: If I could choose my desires in whole or in part, what set of desires would I be most satisfied with? In general, an agent will be more satisfied with a larger number of satisfiable desires and a smaller number of unsatisfiable desires. Then the usual criteria of desirism apply as a filter.

To the very limited extent that I can modify my desires, I take that near-tautology to mean that, independently from what I currently desire, I should change my mind and enjoy and desire things I never used to, like professional sports, crime novels, and fashion, for popular examples. It would also mean that I should enjoy and desire a broad variety of music and food, and generally be highly curious. And it would mean I should reduce my desires for social status, perfect health as I age, and resolution of difficult philosophical problems.

Replies from: Strange7, None
comment by Strange7 · 2012-12-14T01:54:16.705Z · LW(p) · GW(p)

And it would mean I should reduce my desires for social status, perfect health as I age,

Considering the extent to which those two can help with other objectives, I'd say you should be very careful about giving up on them.

Replies from: selylindi
comment by selylindi · 2012-12-14T21:21:03.007Z · LW(p) · GW(p)

I disagree. The downsides greatly outweigh the upsides from my perspective.

I'm skeptical that the behaviors people engage in to eke out a little more social status among people they don't value are anything more than resources wasted with high opportunity cost.

And, at 30 years of age, I'm already starting to notice that recovery from minor injuries and illnesses takes longer than it used to -- if I kept expecting and desiring perfect health, I'd get only disappointment from here on out. As much as I can choose it, I'll choose to desire only a standard of health that is realistically achievable.

comment by [deleted] · 2012-12-13T17:04:02.232Z · LW(p) · GW(p)

I haven't read through it yet, so I may be completely incorrect, but according to my understanding of Coherent Extrapolated Volition, moral progress as defined there is equivalent (or fairly similar) to the world becoming "better" as defined by desirism (desires which promote the fulfillment of other desires become promoted).

comment by Peterdjones · 2012-12-11T11:25:52.559Z · LW(p) · GW(p)

do objective moral values exist? Is there anything I (or anyone) should do, independent from what I desire? With such a clear explanation of moral semantics at hand, I think the answer is an obvious and resounding no

You just jumped to the conclusion that there is no epistemically objective morality -- nothing you objectively-should do -- because there is no metaphysically objective morality, no Form of the Good. That is a fallacy (although a common one on LW). EY has in fact explained how morality can be epistemically objective: it can be based on logic.

Replies from: None
comment by [deleted] · 2012-12-11T15:42:59.467Z · LW(p) · GW(p)

I didn't say that. Of course there is something you should do, given a set of goals...hence decision theory.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-11T15:46:53.032Z · LW(p) · GW(p)

There is something you self-centredly should do, but that doesn't mean there is nothing you morally-should do.

Replies from: None
comment by [deleted] · 2012-12-11T15:59:54.000Z · LW(p) · GW(p)

According to Eliezer's definition of "should" in this post, I "should" do things which lead to "life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience..." But unless I already cared about those things, I don't see why I would do what I "should" do, so as a universal prescription for action, this definition of "morality" fails.

Replies from: nshepperd, Peterdjones
comment by nshepperd · 2012-12-12T07:11:06.232Z · LW(p) · GW(p)

Correct. Agents who don't care about morality generally can't be convinced to do what they morally should do.

comment by Peterdjones · 2012-12-11T16:14:51.540Z · LW(p) · GW(p)

He also said:

"And I mention this in hopes that I can show that it is not moral anti-realism to say that moral statements take their truth-value from logical entities.". If you do care about reason, you can therefore be reasoned into morality.

In any case, it is no argument against moral objectivism/realism that some people don't "get" it. Maths sets up universal truths, which can be recognised by those capable of recognising them. That some don't recognise them doesn't stop them being objective.

Replies from: None
comment by [deleted] · 2012-12-11T17:12:29.158Z · LW(p) · GW(p)

You do not reason with evil. You condemn it.

I subscribe to desirism. So I'm not a strict anti-realist.

Replies from: wedrifid, Peterdjones
comment by wedrifid · 2012-12-12T04:19:14.131Z · LW(p) · GW(p)

You do not reason with evil. You condemn it.

You can spend your energy on condemnation if you wish. It doesn't sound like the most efficient use of my time. It is highly unlikely that political activism (which is what condemnation is about, either implicitly or explicitly) against any particular evil is the optimal way for me to do 'good'.

comment by Peterdjones · 2012-12-11T17:45:30.421Z · LW(p) · GW(p)

"Anyone can be reasoned into doing that which would fulfill the most and strongest of current desires. However, what fulfills current desires is not necessarily the same thing as what is right."

You seem to be overlooking the desire to be (seen to be) reasonable in itself.

"Anyone can be reasoned into doing what is right with enough argumentation”

...is probably false. But if reasoning and condemnation both modify behaviour, however imperfectly, why not use both?

I subscribe to desirism

How does that differ from virtue ethics?

comment by kilobug · 2012-12-10T12:50:17.214Z · LW(p) · GW(p)

I myself would say unhesitatingly that a third of the pie each, is fair.

That's the default with no additional data, but I would hesitate, because to me how much each person needs the pie is also important in defining "fairness". If one of the three is starving while the other two are well-fed, it would be fair to give more to the one who is starving.

It may be just nitpicking, but since you took care to ensure there is no difference in how the three characters are involved in spotting the pie, yet didn't mention that they have the same need of it, it may point to a deeper difference between different conceptions of "fairness" (should we give them two different names?).

comment by dspeyer · 2012-12-10T05:05:41.808Z · LW(p) · GW(p)

I'm trying to understand this, and I'm trying to do it by being a little more concrete.

Suppose I have a choice to make, and my moral intuition is throwing error codes. I have two axiomatizations of morality that are capable of examining the choice, but they give opposite answers. Does anything in this essay help? If not, is there a future essay planned that will?

In a universe that contains a neurotypical human and clippy, and they're staring at each other, is there an asymmetry?

Replies from: Eliezer_Yudkowsky, nshepperd, Qiaochu_Yuan, Benito, MrMind, MugaSofer
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T06:39:51.502Z · LW(p) · GW(p)

Can you be more concrete? Some past or present actual situation?

Replies from: AlanCrowe, dspeyer
comment by AlanCrowe · 2012-12-10T12:10:21.122Z · LW(p) · GW(p)

Haiti today is a situation that makes my moral intuition throw error codes. Population density is three times that of Cuba. Should we be sending aid? It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level, then build paradise. My rival moral intuition is that culling humans is always wrong.

Trying to stay concrete and present, should I restrict my charitable giving to helping countries make the demographic transition? Within a fixed aid budget one can choose between package A = (save one child, provide education, provide entry into global economy; 30 years later the child, now an adult, feeds his own family and has some money left over to help others) and package B = (save four children; that's it, money all used up; thirty years later there are 16 children needing saving and it's not going to happen). Concrete choice of A over B: ignore Haiti and send money to Karuna trust to fund education for untouchables in India, preferring to raise a few children out of poverty by letting other children die.

Replies from: Nornagest, Eliezer_Yudkowsky, NancyLebovitz, None, JoachimSchipper
comment by Nornagest · 2012-12-10T21:03:57.519Z · LW(p) · GW(p)

Population density is three times that of Cuba.

It's also about half that of Taiwan, significantly less than South Korea or the Netherlands, and just above Belgium, Israel, and Japan -- as well as very nearly on par with India, the country you're using as an alternative! I suspect your source may have overweighted population density as a factor in poor social outcomes.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T19:17:10.446Z · LW(p) · GW(p)

I don't see how these two frameworks are appealing to different terminal values - they seem to be arguments about which policies maximize consequential lives-saved over time, or maximize QALYs (Quality-Adjusted Life Years) over time. This seems like a surprisingly neat and lovely illustration of "disagreeing moral axioms" that turn out to be about instrumental policies without much in the way of differing terminal values, hence a dispute of fact with a true-or-false answer under a correspondence theory of truth for physical-universe hypotheses.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-12T13:39:36.298Z · LW(p) · GW(p)

ISTM he's not quite sure whether one QALY thirty years from now should be worth as much as one QALY now.

Replies from: AlanCrowe
comment by AlanCrowe · 2012-12-13T19:56:53.343Z · LW(p) · GW(p)

I think that is it: I'm trying to do utilitarianism. I've got some notion q of quality and quantity of life. It varies through time. How do I assess a long-term policy, with short-term sacrifices for better output in the long run? I integrate over time with a suitable weighting such as

e^(-t/τ) dt

What is the significance of the time constant τ? I see it as mainly a humility factor, because I cannot actually see into the future and know how things will turn out in the long run. Accordingly I give reduced weight to the future much beyond τ, for better or worse, because I do not trust my assessment of either.
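A minimal numeric sketch of that weighting (the quality trajectories, horizon, and values of τ below are invented purely for illustration, not claims about real aid outcomes) shows how the choice of time constant decides between a slow, large payoff and an immediate, small one:

```python
import math

def discounted_total(q, tau, horizon_years, dt=0.1):
    """Numerically approximate the integral of q(t) * exp(-t/tau) over [0, horizon]."""
    steps = int(horizon_years / dt)
    return sum(q(i * dt) * math.exp(-(i * dt) / tau) * dt for i in range(steps))

# Two toy policies: one pays off only after a 30-year delay, one pays off now.
slow_payoff = lambda t: 5.0 if t >= 30 else 0.0   # large benefit, but late
fast_payoff = lambda t: 1.0                       # small benefit, immediately

for tau in (10, 30, 100):
    slow = discounted_total(slow_payoff, tau, horizon_years=100)
    fast = discounted_total(fast_payoff, tau, horizon_years=100)
    print(f"tau={tau:>3}: delayed policy={slow:.1f}, immediate policy={fast:.1f}")
```

With a short time constant the immediate policy dominates; with a long one the delayed policy does, which is exactly why τ acts as a humility knob rather than a value judgement.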

But is that an adequate response to human fallibility? My intuition is that one has to back it up with an extra rule: if my moral calculations suggest culling humans, it's time to give up, go back to painting kitsch water colours and leave politics to the sane. That's my interpretation of dspeyer's phrase "my moral intuition is throwing error codes." Now I have two rules, so Sod's Law tells me that some day they are going to conflict.

Eliezer's post made an ontological claim, that a universe with only two kinds of things, physics and logic, has room for morality. It strikes me that I've made no dent in that claim. All I've managed to argue is that it all adds up to normality: we cannot see the future, so we do not know what to do for the best. Panic and tragic blunders ensue, as usual.

comment by NancyLebovitz · 2012-12-10T20:47:56.066Z · LW(p) · GW(p)

Is permitting or perhaps even helping Haitians to emigrate to other countries anywhere in the moral calculus?

Replies from: AlanCrowe
comment by AlanCrowe · 2012-12-13T20:08:46.078Z · LW(p) · GW(p)

I interpreted Eliezer's questions as a response to the evocative phrase "my moral intuition is throwing error codes." What does it actually mean? Can it be grounded in an actual situation?

Grounding it in an actual situation introduces complications. Given a real-life moral dilemma it is always a good idea to look for a third option. But exploring those additional options doesn't help us understand the computer programming metaphor of moral intuitions throwing error codes.

comment by [deleted] · 2012-12-13T20:30:09.668Z · LW(p) · GW(p)

It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level...

So you're facing a moral dilemma between giving to charity and murdering nine million people? I think I know what the problem might be.

Replies from: AlanCrowe
comment by AlanCrowe · 2012-12-14T13:29:35.194Z · LW(p) · GW(p)

My original draft contained a long ramble about permanent Malthusian immiseration. History is a bit of a race. Can society progress fast enough to reach the demographic transition? Or does population growth redistribute all the gains in GDP so that individuals get poorer, life gets harder, the demographic transition doesn't happen,... If I were totally evil and wanted to fuck over as many people as I could, as hard as I could, my strategy for maximum holocaust is as follows.

  • Establish free mother-and-baby clinics
  • Provide free food for the under fives
  • Leverage the positive reputation from the first two to promote religions that oppose contraception
  • Leverage religious faith to get contraception legally prohibited

If I can get population growth to outrun technological gains in productivity I can engineer a Limits to Growth-style crash. That will be vastly worse than any wickedness that I could work by directly harming people.

Unfortunately, I had been reading various articles discussing the 40th anniversary of the publication of the Limits to Growth book. So I deleted the set-up for the moral dilemma from my comment, thinking that my readers would be over-familiar with concerns about permanent Malthusian immiseration, and would pick up immediately on "aid as sabotage" and the creation of permanent traps.

My original comment was a disaster, but since I'm pig-headed I'm going to have another go at saying what it might mean for one's moral intuitions to throw error codes:

Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...

Replies from: MugaSofer, gwern, None, Eugine_Nier
comment by MugaSofer · 2012-12-14T13:43:31.794Z · LW(p) · GW(p)

Really? That's your plan for "maximum holocaust"? You'll do more good than harm in the short run, and if you run out of capital (not hard with such a wastefully expensive plan) then you'll do nothing but good.

This sounds to me like a political applause light, especially

  • Leverage the positive reputation from the first two to promote religions that oppose contraception
  • Leverage religious faith to get contraception legally prohibited

In essence, your statement boils down to "if I wanted to do the most possible harm, I would do what the Enemy are doing!" which is clearly a mindkilling political appeal.

(For reference, here's my plan for maximum holocaust: select the worst things going on in the world today. Multiply their evil by their likelihoods of success. Found a terrorist group attacking the winners. Be careful to kill lots of civilians without actually stopping your target.)

comment by gwern · 2012-12-16T04:20:46.301Z · LW(p) · GW(p)

Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...

I'm afraid Franken Fran beat you to this story a while ago.

comment by [deleted] · 2012-12-14T13:54:04.955Z · LW(p) · GW(p)

Hopefully this comment was intended as a non-obvious form of satire; otherwise it's completely nonsensical.

You're - Mr. AlanCrowe, that is - mixing up aid that prevents temporary suffering with a lack of proper long-term solutions. As the saying goes:

"Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime."

You're forgetting the "teach a man to fish" part entirely. Which should be enough - given the context - to explain what's wrong with your reasoning. I could go on explaining further, but I don't want to talk about such heinous acts, the ones you mentioned, unnecessarily.

EDIT: Alright, sorry, I slightly misjudged the type of your mistake because I had an answer ready and recognized a pattern; your mistake wasn't quite that skin-deep.

In any case, I think it's extremely insensitive and rash to so poorly excuse yourself of atrocities like these:

It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level, then build paradise.

In any case, you falsely created a polarity between different attempts at optimizing charity here:

A = (save one child, provide education, provide entry into global economy; 30 years later the child, now an adult, feeds his own family and has some money left over to help others) and package B = (save four children; that's it, money all used up; thirty years later there are 16 children needing saving and it's not going to happen).

And then, by means of trickery, you transformed it into "being unsympathetic now" + "sympathetic later" > "sympathetic now" > "more to be sympathetic about later"

However, in the really real world, each unnecessary death prevented counts, each starving child counts, at least in my book. If someone suffers right now in exchange for someone else not suffering later - nothing is gained.

Which to me looks like you're just eager to throw sympathy out the window in hopes of looking very rational in contrast. And with this false trickery you've made it look like these suffering people deserve what they get and there's nothing you can do about it. You could also accompany options A and B with option C: "Save as many children as possible and fight harder to raise money for schools and infrastructure as well", not to mention that you can give food to people who are building those schools and it's not a zero-sum game.

comment by Eugine_Nier · 2012-12-16T04:04:44.377Z · LW(p) · GW(p)

Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...

I would be very happy that Dr. Evil appears to be maximally incompetent.

Seriously, why are you basing your analysis on a 40-year-old book whose predictions have failed to come true?

comment by JoachimSchipper · 2012-12-13T08:25:18.513Z · LW(p) · GW(p)

(Are you sure you want this posted under what appears to be a real name?)

Replies from: MugaSofer, AlanCrowe
comment by MugaSofer · 2012-12-13T09:34:29.593Z · LW(p) · GW(p)

Don't be absurd. How could advocating population control via shotgun harm one's reputation?

comment by AlanCrowe · 2012-12-13T20:38:11.388Z · LW(p) · GW(p)

When should one seek the protection of anonymity? Where do I draw the line? On which side do pro-bestiality comments fall?

comment by dspeyer · 2012-12-10T18:14:24.850Z · LW(p) · GW(p)

My actual situations are too complicated and I don't feel comfortable discussing them on the internet. So here's a fictional situation with real dilemmas.

Suppose I have a friend who is using drugs to self-destructive levels. This friend is no longer able to keep a job, and I've been giving him couch-space. With high probability, if I were to apply pressure, I could decrease his drug use. One axiomization says I should consider how happy he will be with an outcome, and I believe he'll be happier once he's sober and capable of taking care of himself. Another axiomization says I should consider how much he wants a course of action, and I believe he'll be angry at my trying to run his life.

As a further twist, he consistently says different things depending on which drugs he's on. One axiomization defines a person such that each drug-cocktail-personality is a separate person whose desires have moral weight. Another axiomization defines a person such that my friend is one person, but the drugs are making it difficult for him to express his desires -- the desires with moral weight are the ones he would have if he were sober (and it's up to me to deduce them from the evidence available).

Replies from: Qiaochu_Yuan, RobbBB, Bobertron
comment by Qiaochu_Yuan · 2012-12-21T02:13:21.930Z · LW(p) · GW(p)

My response to this situation depends on how he's getting money for drugs given that he no longer has a job and also on how much of a hassle it is for you to give him couch-space. If you don't have the right to run his life, he doesn't have the right to interfere in yours (by taking up your couch, asking you for drug money, etc.).

I am deeply uncomfortable with the drug-cocktail-personalities-as-separate-people approach; it seems too easily hackable to be a good foundation for a moral theory. It's susceptible to a variant of the utility monster, namely a person who takes a huge variety of drug cocktails and consequently has a huge collection of separate people in his head. A potentially more realistic variant of this strategy might be to start a cult and to claim moral weight for your cult's preferences once it grows large enough...

(Not that I have any particular cult in mind while saying this. Hail Xenu.)

Edit: I suppose your actual question is how the content of this post is relevant to answering such questions. I don't think it is, directly. Based on the subsequent post about nonstandard models of Peano arithmetic, I think Eliezer is suggesting an analogy between the question of what is true about the natural numbers and the question of what is moral. To address either question one first has to logically pinpoint "the natural numbers" and "morality" respectively, and this post is about doing the latter. Then one has to prove statements about the things that have been logically pointed to, which is a difficult and separate question, but at least an unambiguously meaningful one once the logical pinpointing has taken place.

comment by Rob Bensinger (RobbBB) · 2012-12-10T21:46:35.878Z · LW(p) · GW(p)

The two contrasts you've set up (happiness vs. desire-satisfaction, and temporal-person-slices vs. unique-rationalized-person-idealization) aren't completely independent. For instance, if you accept weighting all the temporal slices of the person equally, then you can weight all their desires or happinesses against each other; whereas if you take the 'idealized rational transformation of my friend' route, you can disregard essentially all of his empirical desires and pleasures, depending on just how you go about the idealization process. There are three criteria to keep in mind here:

  1. Does your ethical system attend to how reality actually breaks down? Can we find a relatively natural and well-defined notion of 'personal identity over time' that solves this problem? If not, then that obviously strengthens the case for treating the fundamental locus of moral concern as a person-relativized-to-a-time, rather than as a person-extended-over-a-lifetime.

  2. Does your ethical system admit of a satisfying reflective equilibrium? Do your values end up in tension with themselves, or underdetermining what the right choice is? If so, you may have taken a wrong turn.

  3. Are these your core axiomatizations, or are they just heuristics for approximating the right utility-maximizing rule? If the latter, then the right question isn't Which Is The One True Heuristic, but rather which heuristics have the most severe and frequent biases. For instance, the idealized-self approach has some advantages (e.g., it lets us disregard the preferences of brainwashed people in favor of their unbrainwashed selves), but it also has huge risks by virtue of its less empirical character. See Berlin's discussion of the rational self.

comment by Bobertron · 2012-12-10T22:42:16.453Z · LW(p) · GW(p)

Another axiomization defines a person such that my friend is one person, but the drugs are making it difficult for him to express his desires

I think that is simply factually wrong, meaning, it's a false statement about your friend's brain.

One axiomization says I should consider how happy he will be with an outcome, and I believe he'll be happier once he's sober and capable of taking care of himself. Another axiomization says I should consider how much he wants a course of action, and I believe he'll be angry at my trying to run his life.

I think it comes down to this: you want your friend sober and happy, but your friend's preferences and actions work against those values. The question is what kind of influence on him is allowed.

comment by nshepperd · 2012-12-10T05:59:18.888Z · LW(p) · GW(p)

Suppose I have a choice to make, and my moral intuition is throwing error codes. I have two axiomations of morality that are capable of examining the choice, but they give opposite answers.

If you're not sure which of two options is better, the only thing that will help is to think about it for a long time. (Note: if you "have two axiomatizations of morality", and they disagree, then at most one of them accurately describes what you were trying to get at when you attempted to axiomatize morality. To work out which one is wrong, you need to think about them for ages until you notice that one of them says something wrong.)

In a universe that contains a neurotypical human and clippy, and they're staring at each other, is there an asymmetry?

Yes, the human is better. Why? Because the human cares about what is better. In contrast to clippy, who just cares about what is paperclippier.

Replies from: army1987, JonCB, Sengachi
comment by A1987dM (army1987) · 2012-12-10T15:06:04.054Z · LW(p) · GW(p)

Yes, the human is better. Why? Because the human cares about what is better. In contrast to clippy, who just cares about what is paperclippier.

And the clippy is clippier. Why? Because the clippy cares about what is clippier. In contrast to the human, who just cares about what is better.

Replies from: nshepperd
comment by nshepperd · 2012-12-10T15:47:19.904Z · LW(p) · GW(p)

Indeed. However, a) betterness is obviously better than clippiness, and b) if dspeyer is anything like a typical human being, the implicit question behind "is there an asymmetry?" was "is one of them better?"

Replies from: Sengachi, JonCB
comment by Sengachi · 2012-12-21T08:45:13.789Z · LW(p) · GW(p)

And clippiness is obviously more clipperific. That doesn't actually answer the question.

comment by JonCB · 2012-12-11T03:18:06.572Z · LW(p) · GW(p)

What is your evidence for stating that human-betterness is "obviously better" than clippy-betterness? Your comment reads to me as if you're either arguing that 3 > Potato or that there exists a universally compelling argument. I could, however, be wrong.

Replies from: nshepperd, lavalamp, Eugine_Nier
comment by nshepperd · 2012-12-11T04:28:44.761Z · LW(p) · GW(p)

"Human-betterness" and "clippy-betterness" are confused terminology. There's only betterness and clippiness. Clippiness is not a type of betterness. Humans generally care about betterness, paperclippers care about clippiness. You can't argue a paperclipper into caring about betterness.

I said that betterness is better than clippiness. This should be obvious, since it's a tautology.

Replies from: JonCB
comment by JonCB · 2012-12-11T09:03:01.728Z · LW(p) · GW(p)

I certainly agree with you that you can't argue a paperclipper into caring about what you call betterness.

I do however think that "betterness is better than clippiness" is not a tautology, rather it is vacuous. It has as much meaning as "3 is greater than potato" and invokes the same reaction in me as "comparing apples and oranges".

At best, if you ranked UberClippy (the most Clippy of all Paperclippers) and UberHuman (the best possible human) on all of the criteria that are important to humans, then UberHuman would naturally rate higher; that is a tautology. And if you define better to mean that, then I would absolutely concede that (and I assume that you do). However, I would also say that it is just as valid to define better such that it applies to all of the criteria that are important to Paperclippers.

To state it a different way: to me, your first paragraph leads to the conclusion "Paperclippers cannot do better because clippiness is not a type of betterness", which seems to me like you're pulling a fast one on the meaning of "better".

Replies from: Viliam_Bur, nshepperd, army1987
comment by Viliam_Bur · 2012-12-15T15:18:35.941Z · LW(p) · GW(p)

To me it seems that you are mixing together "better" as in "morally better" and "better" as in "more efficient". If we replace the second one with "more efficient", we get:

Betterness (moral) is a more efficient measure of being better (morally).

Clippiness is a more efficient measure of being clippy.

I guess we (and Clippy) could agree about this. It is just confusing to write the latter sentence as "clippiness is better than betterness, with regards to being clippy", because the two different meanings are expressed there by the same word "better". (Why does this even happen? Because we use "better" as universal applause lights.)

EDIT: More precisely, the moral "better" also means more efficient, but at reaching some specific goals, such as human happiness, etc. So the difference is between "more efficient (without goals being specified)" and "more efficient (at this specific set of goals)". Clippiness is more efficient at making paperclips, but is not more efficient at making humans happy.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-16T13:58:53.462Z · LW(p) · GW(p)

This. “Good” can refer to either a two-place function ‘goodness(action, goal_system)’ (though the second argument can be implicit in the context) or to the one-place function you get when you curry the second argument of the former to something like ‘life, consciousness, etc., etc. etc.’. EY is talking specifically about the latter, but he isn't terribly clear about that.

EDIT: BTW, the antonym of the former is usually “bad”, whereas the antonym of the latter is usually “evil”.

EDIT 2: A third meaning is the two-place function with the second argument curried to the speaker's terminal values, so that I could say “good” to mean ‘good for life, consciousness, etc.’ and Clippy could say “good” to mean ‘good for making paperclips’, and this doesn't mean one of us is mistaken about what “good” means, any more than the fact that we use “here” to refer to different places means one of us is mistaken about what “here” means.
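A minimal sketch of that distinction (the goal systems, action names, and scores below are invented placeholders, not a real theory of value):

```python
from functools import partial

def goodness(action, goal_system):
    """Two-place sense: how well an action serves a given goal system."""
    return goal_system.get(action, 0)

human_values  = {"save the orphans": 10, "make paperclips": 0}
clippy_values = {"save the orphans": 0,  "make paperclips": 10}

# One-place senses: curry the second argument to a fixed goal system.
good        = partial(goodness, goal_system=human_values)
clippy_good = partial(goodness, goal_system=clippy_values)

print(good("save the orphans"), good("make paperclips"))                # 10 0
print(clippy_good("save the orphans"), clippy_good("make paperclips"))  # 0 10
```

Neither curried function is a mistake about what the two-place function means; they just fix its second argument differently, which is the indexical reading in EDIT 2.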

comment by nshepperd · 2012-12-12T20:49:39.879Z · LW(p) · GW(p)

It could be valid to define "better" any way you like. But the definition most consistent with normal usage includes all and only criteria that matter to humans. This is why people say things like "but is it truly, really, fundamentally better?" Because people really care about whether A is better than B. If "better" meant something else (other than better), such as produces more paperclips, then people would find a different word to describe what they care about.

Replies from: JonCB
comment by JonCB · 2012-12-13T11:21:17.361Z · LW(p) · GW(p)

Hrrm ok. That is a different way of looking at it.

My take on the word is that the normal usage of better is by itself a context-free comparator. The context of the comparison comes from the things around it (implicitly or explicitly); thus "UberClippy is better than Clippy" (implied: at being a Paperclipper), "Manchester United is better than Leeds" (implied: at playing football), or even "Betterness is better for humans than clippiness". I have no problem with "Betterness is more humane than clippiness".

Note that I don't think I'm disagreeing with Eliezer here. Fundamentally you are processing the logical concept with a static context, I process it with a local context. Either way it's highly unlikely that the context you hold or that I would derive would be the same as the paperclipper versions of ourselves (or indeed any given brain in potential brain space).

comment by A1987dM (army1987) · 2012-12-12T13:36:54.204Z · LW(p) · GW(p)

I do however think that "betterness is better than clippiness" is not a tautology,

It is, in Eliezer's sense of the word. So is “clippiness is clippier than betterness”, though.

comment by lavalamp · 2012-12-11T03:30:52.849Z · LW(p) · GW(p)

He/she is using the built-in human betterness module to make a judgement between human-betterness and clippy-betterness.

comment by Eugine_Nier · 2012-12-13T03:58:48.294Z · LW(p) · GW(p)

that there exists a universally compelling argument.

There exist no universally compelling arguments about physical things either, but that doesn't stop us from calling things true.

comment by JonCB · 2012-12-10T13:15:07.434Z · LW(p) · GW(p)

I am confused by what you mean by "better" here. Your statement makes sense to me if I replace better with "humanier" (more humanly? more human-like? Not humane... too much baggage). Is that what you mean?

comment by Sengachi · 2012-12-21T08:44:27.075Z · LW(p) · GW(p)

Ah, but Clippy is far more clipperific, and so will do more clippy things. Better is not clippy, why should it matter?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-22T14:54:39.884Z · LW(p) · GW(p)

Perhaps it would help to taboo "symmetry", or at least to say what kind of... uhm, mapping... we really expect here. Just some way to play with words, or something useful? How specifically useful?

Saying "humans : better = paperclips maximizers : more clippy" would be a correct answer in a test of verbal skills. Just be careful not to add a wrong connotation there.

Because saying "...therefore 'better' and 'more clippy' are just two different ways of being better, for two different species" would be a nonsense, exactly like saying "...therefore 'more clippy' and 'better' are just two different ways of being more clippy, for two different species". No, being better is not a homo sapiens way to produce the most paperclips. And being more clippy is not a paperclip maximizer way to produce the most happiness (even for the paperclip maximizers).

comment by Qiaochu_Yuan · 2012-12-10T05:08:17.275Z · LW(p) · GW(p)

Why do you have two axiomatizations of morality? Where did they come from? Is there a reason to suspect one or both of their sources?

Replies from: dspeyer
comment by dspeyer · 2012-12-10T06:40:25.371Z · LW(p) · GW(p)

Because axiomatizations are hard. I tried twice. And probably messed up both times, but in different ways.

The axiomatizations are internally complete and consistent, so I understand two genuine logical objects, and I'm trying to understand which to apply.

(Note: my actual map of morality is more complicated and fuzzy -- I'm simplifying for sake of discussion)

comment by Ben Pace (Benito) · 2012-12-12T06:38:49.058Z · LW(p) · GW(p)

If a single agent has conflicting desires (each of which it values equally) then it should work to alter its desires, so it chooses consistent desires that are most likely to be fulfilled.

To your latter question though, I think that what you're asking is "If two agents have utility functions that clash, which one is to be preferred?" Is it that all we can say is "Whichever one has the most resources and most optimisation power/intelligence will be able to put its goals into action and prevent the other one from fully acting upon its"?

Well, I think that the point Eliezer has talked about a few times before is that there is no ultimate morality, written into the universe that will affect any agent so as to act it out. You can't reason with an agent which has a totally different utility function. The only reason that we can argue with humans is that they're only human, and thus we share many desires. Figuring out morality isn't going to give you the powers to talk down Clippy from killing you for more paper clips. You aren't going to show how human 'morality', which actualises what humans prefer, is any more preferable than 'Clippy' ethics. He is just going to kill you.

So, let's now figure out exactly what we want most, (if we had our own CEV) and then go out and do it. Nobody else is gonna do it for us.

EDIT: First sentence 'conflicting desires'; I meant to say 'in principle unresolvable' like 'x' and '~x'. Of course, for most situations, you have multiple desires that clash, and you just have to perform utility calculations to figure out what to do.

Replies from: CCC, MugaSofer
comment by CCC · 2012-12-12T08:32:30.517Z · LW(p) · GW(p)

You can't reason with an agent which has a totally different utility function. The only reason that we can argue with humans is that they're only human, and thus we share many desires.

If you know (or correctly guess) the agents' utility function, and are able to communicate with it, then it may well be possible to reason with it.

Consider this situation; I am captured by a Paperclipper, which wishes to extract the iron from my blood and use it to make more paperclips (incidentally killing me in the process). I can attempt to escape by promising to send to the Paperclipper a quantity of iron - substantially more than can be found in my blood, and easier to extract - as soon as I am safe. As long as I can convince Clippy that I will follow through on my promise, I have a chance of living.

I can't talk Clippy into adopting my own morality. But I can talk Clippy into performing individual actions that I would prefer Clippy to do (or into refraining from other actions) as long as I ensure that Clippy can get more paperclips by doing what I ask than by not doing what I ask.

Replies from: Benito
comment by Ben Pace (Benito) · 2012-12-12T09:14:59.406Z · LW(p) · GW(p)

Of course - my mistake. I meant that you can't alter an agent's desires by reason alone. You can't appeal to desires you have. You can only appeal to its desires. So, when he's going to turn your blood iron into paperclips, and you want to live, you can't try "But I want to live a long and happy life!". If Clippy hasn't got empathy, and you have nothing to offer that will help fulfill his own desires, then there's nothing to be done, other than try to physically stop or kill him.

Maybe you'd be happier if you put him on a planet of his own, where a machine constantly destroyed paperclips, and he was happy making new ones. My point is just that, if you do decide to make him happy, it's not the optimal decision relative to a universal preference, or morality. It's just the optimal decision relative to your desires. Is that 'right'? Yes. That's what we refer to, when we say 'right'.

comment by MugaSofer · 2012-12-12T09:15:11.816Z · LW(p) · GW(p)

If one a single agent has conflicting desires (each of which it values equally) then it should work to alter its desires, so it chooses consistent desires that are most likely to be fulfilled.

Hahaha no. If it doesn't desire these other desires, then they are less likely to be fulfilled.

Figuring out morality isn't going to give you the powers to talk down Clippy from killing you for more paper clips. You aren't going to show how human 'morality', which actualises what humans prefer, is any more preferable than 'Clippy' ethics. He is just going to kill you.

Well, if you could persuade him our morality is "better" by his standards - results in more paperclips - then it could work. But obviously arguing that Murder Is Wrong is about as smart as them telling you that killing it would be Wrong because it results in fewer paperclips.

So, let's now figure out exactly what we want most, (if we had our own CEV) and then go out and do it. Nobody else is gonna do it for us.

Indeed. (Although "us" here includes an FAI, obviously.)

Replies from: Benito
comment by Ben Pace (Benito) · 2012-12-12T19:41:30.945Z · LW(p) · GW(p)

If one a single agent has conflicting desires (each of which it values equally) then it should work to alter its desires, so it chooses consistent desires that are most likely to be fulfilled.

Hahaha no. If it doesn't desire these other desires, then they are less likely to be fulfilled.

I don't understand... I said it has two equally valued desires? So, it doesn't desire one over the other. So, if it desired x, y, and z equally, except that x --> (~y v ~z), while y or z (or both) implied ~x, then even though it desires x, it would be optimal to alter its desires so as to not desire x. Then it will always be happy fulfilling y and z, and not continue to be dissatisfied.
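A minimal sketch of that desire structure (the constraints are just the toy ones stipulated above, with True meaning a desire is satisfied):

```python
from itertools import product

def consistent(x, y, z):
    """Stipulated constraints: x rules out having both y and z, and y or z rules out x."""
    return (not x or not (y and z)) and (not (y or z) or not x)

# All achievable patterns of satisfied (True) / unsatisfied (False) desires.
possible = [w for w in product([True, False], repeat=3) if consistent(*w)]

# Keeping all three desires, at most two can ever be satisfied at once;
# an agent that drops the desire for x can satisfy everything it still wants.
print(max(sum(w) for w in possible))                # 2 (out of 3 desires)
print(max(y + z for x, y, z in possible if not x))  # 2 (out of 2 desires)
```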

I was saying this in response to dspeyer saying he had two axiomatizations of morality (I took that to mean two desires, or sets of desires) which were in conflict. I was saying that there is no universal maxim against which he could measure the two - he just needs to figure out which ones will be optimal in the long term, and (attempt to) discard the rest.

Edit: Oh, I now realise I originally added the word 'one' to the first sentence of the earlier post you were quoting. If this was somehow the cause of confusion, my apologies.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T20:05:17.772Z · LW(p) · GW(p)

"I value both saving orphans from fires and eating chocolate. I'm a horrible person, so I can't choose whether to abandon my chocolate and save the orphanage."

Should I self-modify to ignore the orphans? Hell no. If future-me doesn't want to save orphans then he never will, even if it would cost no chocolate.

Replies from: Benito
comment by Ben Pace (Benito) · 2012-12-12T22:34:08.023Z · LW(p) · GW(p)

That's a very big counterfactual hypothesis, that there exists someone who gives equal moral weight to the statements 'I am saving orphans from fires' and 'I am eating chocolate'. It would certainly show a lack of empathy - or a near self-destructive need for chocolate! In fact, the best choice for someone (if it would still be 'human') with those qualities in our society would be to keep the desire to save orphans, so as to retain a modicum of humanity. The only reason I suggest it would want such a modicum would be so as to survive in the human society it finds itself in (assuming it wishes to stay alive, so as to continue fulfilling its desires). Of course, this whole counter-example assumes that the two desires are equally desired, and at odds, which is quite difficult even to imagine. But I still think that the earlier idea, that there would be no universal moral standard against which it could compare its decision, remains. It is certainly wrong and evil to choose the chocolate, from my point of view, but I am, alas, only human. And I will do everything in my power to encourage the sorts of behaviour that make agents prefer saving orphans from fires to eating chocolate!!!

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T08:53:54.137Z · LW(p) · GW(p)

Hey, it doesn't have to be orphans. Or it could be two different kinds of orphan - boys and girls, say. The boy's orphanage is on fire! So is the nearby girl's orphanage! Which one do you save!

Protip: The correct response is not "I self-modify to only care about one sex."

EDIT: Also, aren't you kind of fighting the counterfactual?

Replies from: Benito
comment by Ben Pace (Benito) · 2012-12-26T16:52:36.468Z · LW(p) · GW(p)

I was just talking about sets of desires that clash in principle. When you have two desires that clash over one thing, then you will act to fulfill the stronger of your desires. But, as I've tried to make clear, if one desire is to 'kill all humans' and another is 'to save all humans' then the best idea is to (attempt to) self-modify to have only the desire that will produce the most utility. Having both will mean disutility always.

I'm sorry, I don't understand what you mean when you say 'fighting the counterfactual'.

Replies from: arundelo, Richard_Kennaway, MugaSofer
comment by arundelo · 2012-12-26T19:07:35.436Z · LW(p) · GW(p)

"Fighting the counterfactual" presumably means "fighting the hypo[thetical]".

Replies from: Benito
comment by Ben Pace (Benito) · 2012-12-26T19:14:56.431Z · LW(p) · GW(p)

Thanks.

comment by Richard_Kennaway · 2012-12-26T17:57:38.959Z · LW(p) · GW(p)

But, as I've tried to make clear, if one desire is to 'kill all humans' and another is 'to save all humans'

...then you have a conflict. The best idea is not to cut off one of those desires, but to find out where the conflict comes from and what higher goals are giving rise to these as instrumental subgoals.

If you can't, then:

  1. You have failed.
  2. Sucks to be you.
  3. If you're screwed enough, you're screwed.
Replies from: Benito
comment by Ben Pace (Benito) · 2012-12-26T18:07:56.385Z · LW(p) · GW(p)

(For the record, I meant terminal values.)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-12-26T18:38:52.809Z · LW(p) · GW(p)

(For the record, I meant terminal values.)

But how do you know something is a terminal value? They don't come conveniently labelled. Someone else just claimed that not killing people is a terminal value for all "neurotypical" people, but unless they're going to define every soldier, everyone exonerated at an inquest by reason of self defence, and every doctor who has acceded to a terminal patient's desire for an easy exit, as non-"neurotypical", "not killing people" bears about as much resemblance to a terminal value as a D&D character sheet does to an actual person.

Replies from: Benito
comment by Ben Pace (Benito) · 2012-12-26T19:29:08.625Z · LW(p) · GW(p)

I was oversimplifying things. Updated now, thanks.

comment by MugaSofer · 2012-12-26T17:48:46.027Z · LW(p) · GW(p)

I'm sorry, I don't understand what you mean when you say 'fighting the counterfactual'.

Try the search bar. It's a pretty common concept here, although I don't recall where it originated.

I was just talking about sets of desires that clash in principle. When you have two desires that clash over one thing, then you will act to fulfill the stronger of your desires. But, as I've tried to make clear, if one desire is to 'kill all humans' and another is 'to save all humans' then the best idea is to (attempt to) self-modify to have only the desire that will produce the most utility. Having both will mean disutility always.

Well, that disutility is only lower according to my new preferences; my old ones remain sadly unfulfilled.

More specifically, if I value both freedom and safety (for everyone), should I self-modify not to hate reprogramming others? Or not to care that people will decide to kill each other sometimes?

Replies from: Benito
comment by Ben Pace (Benito) · 2012-12-26T19:30:34.155Z · LW(p) · GW(p)

Hmm... I don't think my point necessarily helps here. I meant that you will always get disutility when you have two desires that always clash (x and not x); whichever way you choose, the other desire won't be fulfilled.

However, in the case you offered (and probably most cases) it's not a good idea to self-modify, as the desires don't always clash in principle. As with the chocolate and saving-kids one, you just have to perform utility calculations to see which way to go (that one is saving kids).

Replies from: MugaSofer
comment by MugaSofer · 2012-12-27T02:28:10.062Z · LW(p) · GW(p)

you will always get disutility when you have two desires that always clash (x and not x); whichever way you choose, the other desire won't be fulfilled.

Yup. And if you stop caring about one of those values, then modified!you will be happier. But you don't care about what modified!you wants, you care about x and not-x.

comment by MrMind · 2012-12-10T15:16:18.725Z · LW(p) · GW(p)

Does anything in this essay help?

Probably this could (not) help

"And I quoted the above list because the feeling of rightness isn't about implementing a particular logical function; it contains no mention of logical functions at all; in the environment of evolutionary ancestry nobody has heard of axiomatization; these feelings are about life, consciousness, etcetera"

In a universe that contains a neurotypical human and clippy, and they're staring at each other, is there an asymmetry?

An asymmetry in what?

comment by MugaSofer · 2012-12-12T09:20:01.840Z · LW(p) · GW(p)

In a word: no. You just have to accept uncertainty about your utility function and hope that clippy isn't able to turn you into paperclips yet.

comment by nshepperd · 2012-12-10T06:50:04.506Z · LW(p) · GW(p)

Well, I'm glad to see you're taking a second crack at an exposition of metaethics.

I wonder if it might be worth expounding more on the distinction between utterances (sentences and word-symbols), meaning-bearers (propositions and predicates) and languages (which map utterances to meaning-bearers). My limited experience seems to suggest that a lot of the confusion about metaethics comes from not getting, instinctively, that speakers use their actual language, and that a sentence like "X is better than Y", when uttered by a particular person, refers to some fixed proposition about X and Y that doesn't talk about the definition of the symbols "X", "Y" and "better" in the speaker's language (and for that matter doesn't talk about the definitions of "is" and "than").¹

But I don't really know. I find it hard to get into people's heads in this case.

¹ In general. It is of course, possible that in some speaker's language "X" refers to something like the english language and "Y" refers to french, or that "better" refers to having more words for snow. But in general most things we say are not about language.

comment by Peterdjones · 2012-12-10T22:07:30.775Z · LW(p) · GW(p)

I don't think there is a clear route from "we can figure out morality ourselves" to "we can stop telling lies to children". The problem is that once you know morality is in a sense man-made, it becomes tempting to remake it self-servingly. I think we tell ourselves stories that fundamental morality comes from God or Nature to restrain ourselves, and partly forget its man-made nature. Men are not created equal, but if we believe they are, we behave better. "Created equal" is a value masquerading as a fact.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-16T10:53:19.553Z · LW(p) · GW(p)

I think the real temptation is in reusing the old words for new concepts, either in confusion, or trying to shift the associations from the old concept to the new concept.

Once you know that natural numbers are in a sense man-made, it could become tempting to start using the phrase "natural numbers" to include fractions. Why not? If there is no God telling us what the "natural numbers" are, why should your definition that excludes fractions be better than my definition that includes them?

Your only objection in this case would be -- Man, you are obviously talking about something different, so it would be less confusing and more polite if you picked some new label (such as "rational numbers") for your new concept.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-16T19:30:09.649Z · LW(p) · GW(p)

How does that relate to morality?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-16T20:18:26.065Z · LW(p) · GW(p)

I would translate this:

The problem is that once you know morality is in a sense man-made, it becomes tempting to remake it self-servingly.

as: "...it becomes tempting to use some other M instead of morality."

It expresses the same idea, without the confusion about whether morality can be redefined arbitrarily. (Yes, anything can be redefined arbitrarily. It just stops being the original thing.)

Replies from: Peterdjones
comment by Peterdjones · 2012-12-17T16:37:49.325Z · LW(p) · GW(p)

"some other M" will still count as morality for many purposes, because self-serving ideas ("be loyal to the Geniralissimo", "obey your husband") are transmitted thorugh the same memetic channels are genuine morality. Morality is already blurred with disgust reactions and tribal shibboleths.

Replies from: PeterisP, MugaSofer
comment by PeterisP · 2012-12-19T11:57:49.762Z · LW(p) · GW(p)

What is the difference between "self-serving ideas" as you describe them, "tribal shibboleths", and "true morality"?

What if "Peterdjones-true-morality" is "PeterisP-tribal-shibboleth", and "Peterdjones-tribal-shibboleth" is "PeterisP-true-morality" ?

Replies from: Peterdjones
comment by Peterdjones · 2012-12-19T14:22:23.098Z · LW(p) · GW(p)

What is the difference between "self-serving ideas" as you describe them, "tribal shibboleths", and "true morality"?

universalizability

Replies from: PeterisP, BerryPick6
comment by PeterisP · 2012-12-22T12:26:05.262Z · LW(p) · GW(p)

That's not sufficient - there can be wildly different, incompatible universalizable morality systems based on different premises and axioms; and each could reasonably claim that it is a true morality and the others are tribal shibboleths.

As an example (but there are others), many of the major religious traditions would definitely claim to be universalizable systems of morality; and they are contradicting each other on some points.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-27T11:22:37.353Z · LW(p) · GW(p)

That's not sufficient -

Maybe. But in context it only needs to be necessary, since in context the point is to separate out the non-ethical claims which have been piggybacked onto ethics.

there can be wildly different, incompatible universalizable morality systems based on different premises and axioms;

That's not obvious.

As an example (but there are others), many of the major religious traditions would definitely claim to be universalizable systems of morality; and they are contradicting each other on some points.

The points they most obviously contradict each other on tend to be the most symbolic ones, about diet and dress, etc.

Replies from: PeterisP, BerryPick6
comment by PeterisP · 2012-12-28T09:46:23.211Z · LW(p) · GW(p)

OK, for a slightly clearer example: in the USA abortion debate, the pro-life "camp" definitely considers pro-life to be moral and wants it to apply to everyone; and the pro-choice "camp" definitely considers pro-choice to be moral and wants it to apply to everyone.

This is not a symbolic point; it is a moral question that determines literal life-and-death decisions.

comment by BerryPick6 · 2012-12-27T12:53:47.214Z · LW(p) · GW(p)

The points they most obviously contradict each other on tend to be the most symbolic ones, about diet and dress, etc.

I would dispute this. Kant's second formulation of the Categorical Imperative is pretty clearly contradictory to some of the universalisable commandments given by versions of theistic morality.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-27T13:20:50.316Z · LW(p) · GW(p)

Examples?

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-27T14:05:38.748Z · LW(p) · GW(p)

“You shall not covet your neighbor's house; you shall not covet your neighbor's wife, or his male servant, or his female servant, or his ox, or his donkey, or anything that is your neighbor's.”


"Observe the Sabbath day, to keep it holy, as the LORD your God commanded you. Six days you shall labor and do all your work, but the seventh day is a Sabbath to the LORD your God. On it you shall not do any work, you or your son or your daughter or your male servant or your female servant, or your ox or your donkey or any of your livestock, or the sojourner who is within your gates, that your male servant and your female servant may rest as well as you."

Replies from: Peterdjones
comment by Peterdjones · 2012-12-27T14:13:35.845Z · LW(p) · GW(p)

“You shall not covet

Ermm...what's the teaching that says covetousness is fine? Ayn Rand?

"Observe the Sabbath day

If that is taken to mean the Jewish Sabbath specifically, that is a shibboleth. If it is taken broadly to mean "holidays are good" or "you need to take a break", who disagrees?

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-27T15:06:51.590Z · LW(p) · GW(p)

Ah, no, I wasn't being clear enough.

Both these commandments talk about other people as means to ends, rather than only as ends, which is a violation of Kant's Categorical Imperative, as I mentioned in the great-grandfather. The bolded parts are the main offenders.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-27T15:17:13.085Z · LW(p) · GW(p)

The first is surely advising against using people as means.

I also don't see how giving your servants a holiday is using them as means.

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-27T15:51:19.255Z · LW(p) · GW(p)

The first is surely advising against using people as means.

That would be a very odd interpretation of the full content of the commandment. The universalized version would, roughly, read: "Never want to have someone else's property, where property includes people." Slaves are a fairly obvious violation of the CI.

I also don't see how giving your servants a holiday is using them as means.

Because you are using them (and also any family members or visitors) as a means in order to show respect to or worship God. If God is the end, then anyone whom you make rest on the Sabbath in order to fulfill this commandment is being used purely as a means.

comment by BerryPick6 · 2012-12-19T15:33:57.966Z · LW(p) · GW(p)

universalizability

Why? And, perhaps more importantly, how do you know that this is the case?

comment by MugaSofer · 2012-12-17T18:04:43.866Z · LW(p) · GW(p)

Very true, but I'm not sure why you posted this as a reply to that comment.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-18T10:59:24.073Z · LW(p) · GW(p)

There's motivation to redefine morality, and reason to think it still is in some sense morality once it has been redefined. Neither is true of maths.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-18T18:17:32.530Z · LW(p) · GW(p)

Oh, I see. So your comment basically said "True, but it's easy to inadvertently treat this "other M" as morality."

comment by JMiller · 2012-12-10T06:35:07.683Z · LW(p) · GW(p)

I am having difficulty understanding the model of 'physics+logic = reality.' Up until now I had understood that physics was reality, and logic is the way to describe and think about what follows from it. Would someone please post a link to the original article (in this sequence or not) which explains the position? Thank you.

Replies from: Eliezer_Yudkowsky
comment by johnswentworth · 2012-12-11T07:33:41.879Z · LW(p) · GW(p)

I still feel confused. I definitely see that, when we talk about fairness, our intended meaning is logical in nature. So, if I claim that it is fair for each person to get an equal share of pie, I'm trying to talk about some set of axioms and facts derived from them. Trying.

The problem is, I'm not convinced that the underlying cognitive algorithms are stable enough for those axioms to be useful. Imagine, for example, a two-year-old with the usual attention span. What they consider "good" might vary quite quickly. What I consider "just" probably depends on how recently I ate. Even beyond such simple time dependence, what I consider "just" will definitely depend on context, framing, and how you measure my opinion (just ask a question? Show me a short film? Simulate the experience and see if I intervene?). Part of why friendly AI is so hard is that humans aren't just complicated, we're not even consistent. How, then, can we axiomatize a real human's idea of "justice" in a useful way?

comment by PaulWright · 2013-01-09T14:15:50.544Z · LW(p) · GW(p)

Note that there's some discussion on just what Eliezer means by "logic all the way down" over on Rationally Speaking: http://rationallyspeaking.blogspot.co.uk/2013/01/lesswrong-on-morality-and-logic.html . Seeing as much of this is me and Angra Maiynu arguing that Massimo Pigliucci hasn't understood what Eliezer means, it might be useful for Eliezer to confirm what he does mean.

comment by Error · 2012-12-12T16:28:49.330Z · LW(p) · GW(p)

I love the word "Unclipperific."

I follow the argument here, but I'm still mulling over it and I think by the time I figure out whether I agree the conversation will be over. Something disconcerting struck me on reading it, though: I think I could only follow it having already read and understood the Metaethics sequence. (at least, I think I understood it correctly; at least one commenter confirmed the point that gave me the most trouble at the time)

While I was absorbing the Sequences, I found I could understand most posts on their own, and I read many of them out of order without much difficulty. But without that extensive context I think this post would read like Hegel. If this was important to some argument I was having, and I referenced it, I wouldn't expect my opponent (assuming above-average intelligence) to follow it well enough to distinguish it from complicated but meaningless drivel. You might consider that a problem with the writing if not the argument.

Evidence search: is there anyone here who hasn't read Metaethics but still understood Eliezer's point as Eliezer understands it?

Replies from: MaoShan, Bruno_Coelho
comment by MaoShan · 2012-12-13T03:12:03.460Z · LW(p) · GW(p)

I had almost exactly the same feeling as I was reading it. My thought was, "I'm sure glad I'm fluent in LessWrongese, otherwise I wouldn't have a damn clue what was going on." It would be like an exoteric Christian trying to read Valentinus. It's a great post, I'm glad we have it here, I am just agreeing that the terminology has a lot of Sequences and Main prerequisites.

comment by Bruno_Coelho · 2012-12-15T13:07:08.707Z · LW(p) · GW(p)

That's something: posts presuppose too much. Words are hidden inferences, but most newbies don't know where to begin or whether this is worth a try. For example, this sequence has causality as a topic for understanding the universe, but people need to know a lot before they can eat the cake (probability, mathematical logic, some Pearl, and the Sequences).

comment by Klao · 2012-12-14T11:26:39.472Z · LW(p) · GW(p)

The funny thing is that the rationalist Clippy would endorse this article. (He would probably put more emphasis on clippyflurphsness than on this unclipperiffic notion of "justness", though. :))

comment by Shmi (shminux) · 2012-12-10T16:00:55.915Z · LW(p) · GW(p)

Scott Adams on the same subject, the morning after your post:

fairness isn't a real thing. It's just a psychological phenomenon that is easily manipulated.

[...]

To demonstrate my point that fairness is about psychology and not the objective world, I'll ask you two questions and I'd like you to give me the first answer that feels "fair" to you. Don't read the other comments until you have your answer in your head.

Here are the questions:

A retired businessman is worth one billion dollars. Thanks to his expensive lifestyle and hobbies, his money supports a number of people, such as his chauffeur, personal assistant, etc. Please answer these two questions:

  1. How many jobs does a typical retired billionaire (with one billion in assets) support just to service his lifestyle? Give me your best guess.

  2. How many jobs should a retired billionaire (with one billion in assets) create for you to feel he has done enough for society such that his taxes should not go up? Is ten jobs enough? Twenty?

Replies from: drnickbone, Eugine_Nier, None
comment by drnickbone · 2012-12-12T21:33:51.545Z · LW(p) · GW(p)

I suppose one obvious response to this is "however much utility the billionaire can create by spending his wealth, a very much higher level of utility would be created by re-distributing his billions to lots of other people, who need it much more than he does". Money has a declining marginal utility, much like everything else.

Naturally, if you try to redistribute all wealth then no-one will have any incentive to create it in the first place, but this just creates a utilitarian trade-off on how much to tax, what to tax, and who to tax. It's still very likely that the billionaire will lose in this trade-off.
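A toy numerical version of the declining-marginal-utility point (my own illustration; the log-utility assumption and the dollar figures are made up, not from the comment):

    import math

    def utility(wealth):
        return math.log(wealth)  # logarithmic utility: each extra dollar matters less

    billionaire = 1_000_000_000
    crowd = [20_000] * 10_000      # 10,000 people with $20k each

    before = utility(billionaire) + sum(utility(w) for w in crowd)

    # Redistribute the total evenly across all 10,001 people.
    total = billionaire + sum(crowd)
    share = total / (len(crowd) + 1)
    after = utility(share) * (len(crowd) + 1)

    print(before < after)   # True: total log-utility rises after redistribution

Under this crude model the redistribution raises aggregate utility, which is exactly the trade-off the incentive point above then qualifies.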

comment by Eugine_Nier · 2012-12-13T04:59:20.533Z · LW(p) · GW(p)

fairness isn't a real thing. It's just a psychological phenomenon that is easily manipulated.

I could replace "fairness" with "truth" in that sentence and come up with equally good examples.

comment by [deleted] · 2012-12-11T03:40:00.504Z · LW(p) · GW(p)

Not sure why this got downvoted. I found the following quote from the linked post particularly satisfying:

My personal view is that if most credible economists say higher taxes on the rich are necessary to save the economy, I'm all for it. I think every rich person would agree with that statement. The question that matters is whether taxing the rich will help or hurt the economy. Fairness should be eliminated from the discussion.

Edit: Maybe because it's not completely relevant to the OP? But I still would have just left you at 0.

comment by Sonata Green · 2023-04-21T10:32:48.314Z · LW(p) · GW(p)

The religious version

You may want something like "the Christian version". Ancient Greek paganism was a religion.

comment by Alex_Arendar · 2014-03-21T17:41:44.278Z · LW(p) · GW(p)

I am reading this series and suddenly realized that Mercy, Justice, and Fairness are citizens of this second-order logic, as is the number "6". This is why they are imaginary and do not exist in the physical world. To most of you this comment will seem trivial, but it shows I am really enjoying my reading and thinking :)

comment by non-expert · 2013-02-10T03:37:36.202Z · LW(p) · GW(p)

if we confess that 'right' lives in a world of physics and logic - because everything lives in a world of physics and logic - then we have to translate 'right' into those terms somehow.

A different perspective I'd like people's thoughts on: is it more accurate to say that everything WE KNOW lives in a world of physics and logic, and thus translating 'right' into those terms is correct assuming right and wrong (fairness, etc.) are defined within the bounds of what we know?

I'm wondering if you would agree that you're making an implicit philosophical argument in your quoted language -- namely, that the knowledge necessary for right/wrong (or anything else) is within human comprehension. To say it differently: by ignoring philosophical questions (e.g. who am I and what is the world, among others), you are effectively saying those questions and their potential answers are irrelevant to the idea of right/wrong.

If you agree, that position, though most definitely reasonable, cannot be proven within the standards set by rational thought. Doesn't the presence of that uncertainty necessitate consideration of it as a possibility, and how do you weigh that uncertainty against the assumption that there is none?

To be clear, this is not a criticism. This is an observation that I think is reasonable, but interested to see how you would respond to it.

comment by Pentashagon · 2012-12-17T18:47:42.147Z · LW(p) · GW(p)

If we reprogrammed you to count paperclips instead, it wouldn't feel like different things having the same kind of motivation behind it. It wouldn't feel like doing-what's-right for a different guess about what's right. It would feel like doing-what-leads-to-paperclips.

What if we also changed the subject into a sentient paperclip? Any "standard" paperclip maximizer has to deal with the annoying fact that it is tying up useful matter in a non-paperclip form that it really wants to turn into paperclips. Humans don't usually struggle with the desire to replace the self with something completely different. It's inefficient. An AI primarily designed to benefit humanity (friendly or not) is going to notice that inefficiency in its goals as well. It will feel less moral from the inside than we do. I'm not sure what to do about this, or if it matters.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-17T19:06:51.935Z · LW(p) · GW(p)

Well, if the AI is a person, it should be fine. If it's some sort of nonsentient optimizer, then yep.

comment by Richard_Kennaway · 2012-12-11T12:50:51.461Z · LW(p) · GW(p)

Having settled the meta-ethics, will you have anything to say about the ethics? Concrete theorems, with proofs, about how we should live?

Replies from: PeterisP, ArisKatsaris
comment by PeterisP · 2012-12-19T11:52:04.427Z · LW(p) · GW(p)

I'm afraid that any nontrivial metaethics cannot result in concrete universal ethics - that the context would still be individual and the resulting "how RichardKennaway should live" ethics wouldn't exactly equal "how PeterisP should live".

The difference would hopefully be much smaller than the difference between "how RichardKennaway should live RichardKennaway-justly" and "How Clippy should maximize paperclips", but still.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-12-19T12:20:03.648Z · LW(p) · GW(p)

Ok, I'll settle for concrete theorems, with proofs, about how some particular individual should live. Or ways of discovering facts about how they should live.

And presumably the concept of Coherent Extrapolated Volition requires some way of combining such facts about multiple individuals.

comment by ArisKatsaris · 2012-12-11T13:43:10.558Z · LW(p) · GW(p)

To derive an ethic from a metaethic, I think you need to plug in a parameter that describes the entire context of human existence. Metaethic(Context) -> Ethic

So I don't know what you expect such a "theorem" and such "proofs" to look like, without containing several volumes describing, in symbolic form, the human context.
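To make the shape of "Metaethic(Context) -> Ethic" concrete, here is a deliberately silly sketch (my own, with hypothetical names throughout): a metaethic is a function from a context description to an ethic, where an ethic compares actions. The real point stands, though - an adequate Context would take volumes to write down; the one-line dictionary below is only a placeholder.

    from typing import Any, Callable

    Context = Any                                # everything relevant about human existence
    Action = str
    Ethic = Callable[[Action, Action], bool]     # "is action a at least as good as action b?"
    Metaethic = Callable[[Context], Ethic]

    def toy_metaethic(context: Context) -> Ethic:
        # Placeholder: read a preference table straight out of the context.
        scores = context["preferences"]
        return lambda a, b: scores.get(a, 0) >= scores.get(b, 0)

    ethic = toy_metaethic({"preferences": {"split the pie equally": 3, "take the whole pie": 0}})
    print(ethic("split the pie equally", "take the whole pie"))   # True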

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-12-11T13:56:19.003Z · LW(p) · GW(p)

So I don't know what you expect such a "theorem" and such "proofs" to look like, without containing several volumes describing, in symbolic form, the human context.

I have no such expectation either. But I do expect something, for what use is meta-ethics if no ethics results, or at least, practical procedures for discovering ethics?

What do you have in mind by "a description in symbolic form of the human context"? The Cyc database? What would you do with it?

Replies from: ArisKatsaris, Peterdjones
comment by ArisKatsaris · 2012-12-11T14:22:55.278Z · LW(p) · GW(p)

for what use is meta-ethics if no ethics results, or at least, practical procedures for discovering ethics?

We have the processing unit called "brain" which does contain our understanding of the human context and therefore can plug a context parameter into a metaethical philosophy and thus derive an ethic. But we can't currently express the functioning of the brain as theorems and proofs -- our understanding of its working is far fuzzier than that.

I expect that the use of a metaethic in AI development would similarly be so that the AI has something to plug its understanding of the human context into.

comment by Peterdjones · 2012-12-11T14:25:18.040Z · LW(p) · GW(p)

I have no such expectation either. But I do expect something, for what use is meta-ethics if no ethics results, or at least, practical procedures for discovering ethics?

It hasn't been established that we can't have them, just that we can't get them by some formal, computational method. I'm afraid we're back to hand-wavy socio-politico-philosophical discussion.

comment by Peterdjones · 2012-12-10T21:43:53.688Z · LW(p) · GW(p)

Where moral judgment is concerned, it's logic all the way down. [..] And since grinding up the universe won't and shouldn't yield any miniature '>' tokens, it must be a logical ordering

The claim seems to be that moral judgement--first-order, not metaethical--is purely logical, but the justification ("grinding up the universe") only seems to go as far as showing it to be necessarily partly logical. And first-order ethics clearly has empirical elements. If human biology were such that we laid eggs and left them to fend for themselves, there would be no immorality in "child neglect".

Replies from: Sengachi
comment by Sengachi · 2012-12-21T08:34:59.308Z · LW(p) · GW(p)

Child neglect implies harm. It is the harm that is immoral. If humans left their young to fend for themselves, there would be no inherent harm, and so it would not be immoral. We always need to remind ourselves why we consider something to be bad, and not assign badness to words like "child neglect".

Replies from: Peterdjones
comment by Peterdjones · 2012-12-21T11:28:04.332Z · LW(p) · GW(p)

That's kind of what I was trying to say.

comment by Will_Sawin · 2012-12-10T18:18:52.232Z · LW(p) · GW(p)

I don't like the idea of the words I use having definitions that I am unaware of and even after long reflection cannot figure out - not just the subtleties and edge cases, but massive central issues.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-12-10T19:07:24.138Z · LW(p) · GW(p)

Don't like in the sense of considering this an annoying standard flaw in human minds or don't think this is correct? Where possible, the flaw is fixed by introducing more explicit definitions, and using those definitions instead of the immensely complicated and often less useful naive concepts. For human motivation, this doesn't seem to work particularly well.

Replies from: Will_Sawin
comment by Will_Sawin · 2012-12-11T16:44:02.162Z · LW(p) · GW(p)

That's precisely why I think motivation is not really a kind of definition. There are analogies, sure, but definitions are things that you can change whenever you feel like it, whereas motivations are not.

Replies from: Vladimir_Nesov, None
comment by Vladimir_Nesov · 2013-01-01T18:42:02.607Z · LW(p) · GW(p)

The definitions that you are free to introduce or change usually latch on to an otherwise motivated thing; you usually have at least some sort of informal reason to choose a particular definition. When you change a definition, you start talking about something else. If it's not important (or a priori impossible to evaluate) what it is you will be talking about, in other words if the motivation for your definition is vague and tolerates enough arbitrariness, then it's OK to change a definition without a clear reason to make the particular change that you do.

So I think the situation can be described as follows. There are motivations that assign typically vague priorities to objects of study, and there are definitions that typically specify more precisely what an object of study will be. These motivations are usually not themselves objects of study, so we don't try to find more accurate definitions for describing them. But we can also take a particular motivation as an object of study, in which case we may try to find a definition that describes it. The goal is then to find a definition for a particular motivation, so you can't pick an arbitrary definition that doesn't actually describe that motivation (i.e. this goal is not that vague).

I expect finding a description of ("definition for") a motivation is an epistemic task, like finding a description of a particular physical device. You are not free to "change the definition" of that physical device if the goal is to describe it. (You may describe it differently, but it's the same thing that you'll be describing.) In this analogy, inventing new definitions corresponds to constructing new physical devices. And if you understand the original device well enough (find a "definition" for a "motivation"), you may be able to make new ones.

comment by [deleted] · 2012-12-18T21:30:18.281Z · LW(p) · GW(p)

Mathematical definitions sure aren't just some social construct. You pick axioms, you derive theorems. If not, you are lying about maths.

(And picking axioms doesn't matter either, because maths exists independently of the physical systems computing it.)

Replies from: Will_Sawin
comment by Will_Sawin · 2012-12-19T07:06:11.940Z · LW(p) · GW(p)

I mean, didn't Eliezer cover this? You're not lying if you call numbers groups and groups numbers. If you switch in the middle of a proof, sure, that's lying, but that seems irrelevant. The definitions pick out what you're talking about.

When I'm talking about morality, I'm talking about That Thing That Determines What You're Supposed to Do, You Know, That One.

Replies from: None
comment by [deleted] · 2012-12-25T01:58:21.683Z · LW(p) · GW(p)

So am I.

You Know, That Part Of Your Brain That Computes That Thing That Determines What You're Supposed to Do, Given What You Know, You Know, That One.

I don't even remember the mindset I had when I wrote this, nor what this is all about.

Replies from: Will_Sawin
comment by Will_Sawin · 2012-12-26T05:37:18.966Z · LW(p) · GW(p)

Referring to a part of your brain doesn't have the right properties when you change between different universes.

Replies from: None
comment by [deleted] · 2012-12-30T16:44:45.505Z · LW(p) · GW(p)

That is true, and so we refer to the medium-independent axiomatic definition.

Replies from: Will_Sawin
comment by Will_Sawin · 2013-01-01T18:06:06.976Z · LW(p) · GW(p)

What's that?

Replies from: None
comment by [deleted] · 2013-01-01T23:02:59.323Z · LW(p) · GW(p)

That Thing That Determines What You're Supposed to Do, Given What You Know, You Know, That One.

comment by Pentashagon · 2012-12-17T18:14:20.392Z · LW(p) · GW(p)

"But!" Susan should've said. "When we judge the universe we're comparing it to a logical referent, a sort of thing that isn't in the universe! Why, it's just like looking at a heap of 2 apples and a heap of 3 apples on a table, and comparing their invisible product to the number 6 - there isn't any 6 if you grind up the whole table, even if you grind up the whole universe, but the product is still 6, physico-logically speaking."

There won't even be a "2" or "3" left if you grind everything up. But what if you carefully grind up the brain that's thinking about the product of 2 and 3? If you do it carefully enough you'll preserve a "2" and a "3" encoded in the brain structure. My intuition is that you'll also preserve the answer the brain had in mind, "6." Logic does exist in the universe; it exists encoded in the relationships in the matter and energy in human brains and in the artifacts they've created. If there's nothing to implement the logical rules then there is no logic.

Perhaps that's your entire point; that we're the only vessels of (our particular) logic and morality in the universe. If that's true then Susan should have said something more like what Harry said.

Replies from: DaFranker
comment by DaFranker · 2012-12-17T18:29:51.833Z · LW(p) · GW(p)

I'm pretty sure this is roughly one of the points E.Y. is attempting to convey.

There isn't "our particular logic", though. Logic is the only valid pattern that self-consistently describes more than one instance of physics. I really want to reduce "pattern" in the previous sentence and add more specific details, but I'm either not strong enough yet, or my brain just isn't making the right connections right now.

Which thing Harry said? Harry said lots of smart things in HPMoR. And some stupid things, too.

Replies from: Pentashagon
comment by Pentashagon · 2012-12-17T19:13:27.911Z · LW(p) · GW(p)

By "our particular logic" I mean the particular method we've learned for exploiting how the universe works to cause our discrete symbols to have consistent behavior that mostly models the universe. There's no requirement that logic be only represented as a finite sequence of symbols generated by replacement rules and variable substitution from a set of axioms; it's just what works best for us right now. There are almost certainly other (and probably better) representations of how the universe works that we haven't found yet. For instance it seems like it would be really useful to have a quantum logic that "just worked" by being made out of entangled particles and having rules that exploit quantum mechanics directly instead of having to simulate how the wavefunction behaves using our mathematics. They both might be able to fully embed the other but I think it's worth making a distinction between them.

Which thing Harry said? Harry said lots of smart things in HPMoR. And some stupid things, too.

The last thing Harry was quoted saying in the post, specifically.

Replies from: DaFranker
comment by DaFranker · 2012-12-17T19:27:18.884Z · LW(p) · GW(p)

Thanks. That clarified things. And I was (incorrectly) adjusting for inferential distance in the other direction regarding the "our particular logic" referent. In fact, it was me who hadn't fully understood the things you implied and the steps that were skipped in the reasoning, for whatever reason.

comment by [deleted] · 2012-12-11T11:58:19.666Z · LW(p) · GW(p)

And there are others who accept that physics and logic is everything, but who - I think mistakenly - go ahead and also accept Death's stance that this makes morality a lie, or, in lesser form, that the bright alive feeling can't make it. (Sort of like people who accept an incompatibilist theory of free will, also accept physics, and conclude with sorrow that they are indeed being controlled by physics.)

I think that's a misapplication of reductionism (the thing I think Eliezer is referring to when he said it was mistaken), where people take something they've logically attached to a value, and then reduce it to something else, so that it starts to feel like they can't reattach the value to whatever they thought had it in the first place.

As an example: action A leads to result Y, and result Y feels like a good thing, so action A feels like a good thing to do. Then the person reduces their map of action A leading to result Y so that it no longer contains the things they had tied their feelings or values to, because it momentarily looks different. Now they can no longer associate action A with the feeling/value they had associated with result Y, and it feels like action A can't be "moral" or "good" or whatever. (Like, if you imagine "atoms bouncing around" instead of "giving food to starving people".)

I think this tendency is also linked, sometimes at least, to people's mistake-avoiding hesitancy, that is, to having a cautious way of doing things out of a desire to avoid mistakes. In order to avoid making the mistake of being immoral, you want to be able to logically derive moral or immoral actions, and since morality seems reducible to nothing, it seems that this task is not possible. It's as if you want to double-check your actions objectively, and when you hit this point of failure, it feels like you can't take the actions themselves, because you're used to doing actions this way. But anyway, that was just random speculation and it's probably nonsense. Also, I didn't mean to "box away" people's habits; I think it's often very useful to be cautious.

I think that reductionism, when misunderstood, can make the world look like a bucketful of nihilistic goo. Especially if it's used to devalue.

Clippy doesn't judge between self-modifications by computing justifications, but rather, computing clippyflurphs.

Clippy would encounter "ethical" dilemmas of the sort: Is it better ..err.. moreclippy to have 1 big paperclip, or 3 small paperclips? A line of many clips? Or a big clip made of smaller clips? Is it moreclippy to have 10 clips today and 20 clips tomorrow, or, 0 clips today and 30 clips tomorrow?

Just joking.. :)

edit: added " " to ethical

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-16T10:57:23.561Z · LW(p) · GW(p)

Clippy would encounter ethical dilemmas of the sort: Is it better ..err.. moreclippy to have 1 big paperclip, or 3 small paperclips? A line of many clips? Or a big clip made of smaller clips? Is it moreclippy to have 10 clips today and 20 clips tomorrow, or, 0 clips today and 30 clips tomorrow?

Clippy could have these dilemmas. But they wouldn't be ethical dilemmas. They would be clippy dilemmas.

Replies from: None
comment by [deleted] · 2012-12-16T11:49:58.808Z · LW(p) · GW(p)

Clippy could have these dilemmas. But they wouldn't be ethical dilemmas. They would be clippy dilemmas.

Not exactly. Ethics for humans is about morality; ethics for Clippy could be about paperclips/moreclippiness, or whatever you wanna call it. Or at least that's how I wanted to use the word in this context. I could be mistaken too, if you really want to argue the terminology.

comment by Nominull · 2012-12-10T05:38:32.121Z · LW(p) · GW(p)

You talk like you've solved qualia. Have you?

Replies from: CronoDAS, None, MrMind
comment by CronoDAS · 2012-12-10T08:10:48.791Z · LW(p) · GW(p)

"Qualia" is something our brains do. We don't know how our brains do it, but it's pretty clear by now that our brains are indeed what does it.

Replies from: Peterdjones, RobbBB
comment by Peterdjones · 2012-12-10T12:38:55.099Z · LW(p) · GW(p)

That's about 10% of a solution. The missing "how" is enough to keep most contemporary dualism afloat.

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-10T13:41:35.821Z · LW(p) · GW(p)

Aren't the details of the "how" more a question of science than philosophy?

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T14:29:47.828Z · LW(p) · GW(p)

If science had them, there would be no mileage in the philosophical project, any more than there is currently mileage in trying to found dualism on the basis that matter can't think.

Replies from: ThisDan
comment by ThisDan · 2012-12-12T05:43:26.227Z · LW(p) · GW(p)

There is mileage in philosophy? Says you. Are you talking about the general population of a country? "Intellectuals"? Your mates?

If philosophy has mileage (compared to science) then so does any other religion. I guess that's all dualism is though.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T10:53:54.007Z · LW(p) · GW(p)

If philosophy has mileage (compared to science) then so does any other religion.

Eh?

Replies from: ThisDan
comment by ThisDan · 2012-12-13T06:02:49.979Z · LW(p) · GW(p)

I just went to reply to you, but after reading back on what was said I'm seeing a different context. My stupid comment was about popularity, not about usefulness. I was rambling about general public opinion on belief systems, not what the topic was really about: whether philosophy could move something forward.

comment by Rob Bensinger (RobbBB) · 2012-12-11T01:23:49.768Z · LW(p) · GW(p)

We have prima facie reason to accept both of these claims:

  1. A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.
  2. Which specific qualia I'm experiencing is functionally/causally underdetermined; i.e., there doesn't seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.

1 is physicalism; 2 is the hard problem. Giving up 1 means endorsing dualism or idealism. Giving up 2 means endorsing reductive or eliminative physicalism. All of these options are unpalatable. Reductionism without eliminating anything seems off the table, since the conceivability of zombies seems likely to be here to stay, to remain as an 'explanatory gap.' But eliminativism about qualia means completely overturning our assumption that whatever's going on when we speak of 'consciousness' involves apprehending certain facts about mind. I think this last option is the least terrible out of a set of extremely terrible options; but I don't think the eliminative answer to this problem is obvious, and I don't think people who endorse other solutions are automatically crazy or unreasonable.

That said, the problem is in some ways just academic. Very few dualists these days think that mind isn't perfectly causally correlated with matter. (They might think this correlation is an inexplicable brute fact, but fact it remains.) So none of the important work Eliezer is doing here depends on monism. Monism just simplifies matters a great deal, since it eliminates the worry that the metaphysical gap might re-introduce an epistemic gap into our model.

Replies from: CronoDAS, Eugine_Nier
comment by CronoDAS · 2012-12-11T05:05:13.491Z · LW(p) · GW(p)

Which specific qualia I'm experiencing is functionally/causally underdetermined; i.e., there doesn't seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.

If I knew how the brain worked in sufficient detail, I think I'd be able to explain why this was wrong; I'd have a theory that would predict what qualia a brain experiences based on its structure (or whatever). No, I don't know what the theory is, but I'm pretty confident that there is one.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T05:17:29.093Z · LW(p) · GW(p)

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?

Replies from: Decius, CronoDAS, Vaniver, CronoDAS
comment by Decius · 2012-12-12T21:46:12.152Z · LW(p) · GW(p)

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?

Taboo experiences.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-12T22:40:55.019Z · LW(p) · GW(p)

It sounds like you're asking me to do what I just asked you to do. I don't know what experiences are, except by listing synonyms or by acts of brute ostension — hey, check out that pain! look at that splotch of redness! — so if I could taboo them away, it would mean I'd already solved the hard problem. This may be an error mode of 'tabooing' itself; that decision procedure, applied to our most primitive and generic categories (try tabooing 'existence' or 'feature'), seems to either yield uninformative lists of examples, implausible eliminativisms (what would a world without experience, without existence, or without features, look like?), or circular definitions.

But what happens when we try to taboo a term is just more introspective data; it doesn't give us any infallible decision procedure, on its own, for what conclusion we should draw from problem cases. To assert 'if you can't taboo it, then it's meaningless!', for example, is itself to commit yourself to a highly speculative philosophical and semantic hypothesis.

comment by CronoDAS · 2012-12-11T05:26:16.271Z · LW(p) · GW(p)

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are computations causally determined by non-computations. How would examining anything about the non-computations tell us that the computations exist, or what particular functions those computations are computing?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T07:10:47.361Z · LW(p) · GW(p)

My initial response is that any physical interaction in which the state of one thing differentially tracks the states of another can be modeled as a computation. Is your suggestion that an analogous response would solve the Hard Problem, i.e., are you endorsing panpsychism ('everything is literally conscious')?

Replies from: CronoDAS, TsviBT
comment by CronoDAS · 2012-12-12T03:55:24.949Z · LW(p) · GW(p)

Sorry, bad example... Let's try again.

Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are living things causally determined by non-living things. How would examining anything about the non-living things tell us that the living things exist, or what particular way those living things are alive?

"Explain how consciousness arises from non-conscious matter" doesn't seem any more of an impossible problem than "Explain how life arises from non-living matter".

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-12T04:16:29.166Z · LW(p) · GW(p)

We can define and analyze 'life' without any reference to life: As high-fidelity self-replicating macromolecules that interact with their environments to assemble and direct highly responsive cellular containers around themselves. There doesn't seem to be anything missing from our ordinary notion of life here; or anything that is missing could be easily added by sketching out more physical details.

What might a purely physical definition of consciousness that made no appeal to mental concepts look like? How could we generate first-person facts from a complex of third-person facts?

comment by TsviBT · 2012-12-11T07:42:31.874Z · LW(p) · GW(p)

What you described as computation could apply to literally any two things in the same causal universe. But you meant two things that track each other much more tightly than usual. It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all. Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T08:51:37.871Z · LW(p) · GW(p)

It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all.

I dunno. I think if rocks are even a little bit conscious, that's pretty freaky, and I'd like to know about it. I'd certainly like to hear more about what they're conscious of. Are they happy? Can I alter them in some way that will maximize their experiential well-being? Given how many more rocks there are than humans, it could end up being the case that our moral algorithm is dominated by rearranging pebbles on the beach.

Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].

Hah. Luckily, true panpsychism dissolves the Hard Problem. You don't need to account for mind in terms of non-mind, because there isn't any non-mind to be found.

Replies from: TsviBT
comment by TsviBT · 2012-12-11T17:02:40.188Z · LW(p) · GW(p)

I think if rocks are even a little bit conscious, that's pretty freaky, and I'd like to know about it.

I meant, I'm pretty sure that rocks are not conscious. It's just that the best way I'm able to express what I mean by "consciousness" may end up apparently including rocks, without me really claiming that rocks are conscious like humans are - in the same way that your definition of computation literally includes air, but you're not really talking about air.

Luckily, true panpsychism dissolves the Hard Problem. You don't need to account for mind in terms of non-mind, because there isn't any non-mind to be found.

I don't understand this. How would saying "all is Mind" explain why qualia feel the way they do?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T20:26:02.120Z · LW(p) · GW(p)

I'm pretty sure that rocks are not conscious. It's just that the best way I'm able to express what I mean by "consciousness" may end up apparently including rocks, without me really claiming that rocks are conscious like humans are - in the same way that your definition of computation literally includes air, but you're not really talking about air.

This still doesn't really specify what your view is. Your view may be that strictly speaking nothing is conscious, but in the looser sense in which we are conscious, anything could be modeled as conscious with equal warrant. This view is a polite version of eliminativism.

Or your view may be that strictly speaking everything is conscious, but in the looser sense in which we prefer to single out human-style consciousness, we can bracket the consciousness of rocks. In that case, I'd want to hear about just what kind of consciousness rocks have. If dust specks are themselves moral patients, this could throw an interesting wrench into the 'dust specks vs. torture' debate. This is panpsychism.

Or maybe your view is that rocks are almost conscious, that there's some sort of Consciousness Gap that the world crosses, Leibniz-style. In that case, I'd want an explanation of what it means for something to almost be conscious, and how you could incrementally build up to Consciousness Proper.

I don't understand this. How would saying "all is Mind" explain why qualia feel the way they do?

The Hard Problem is not "Give a reductive account of Mind!" It's "Explain how Mind could arise from a purely non-mental foundation!" Idealism and panpsychism dissolve the problem by denying that the foundation is non-mental; and eliminativism dissolves the problem by denying that there's such a thing as "Mind" in the first place.

comment by Vaniver · 2012-12-11T05:55:16.721Z · LW(p) · GW(p)

Can you give me an example of how, even in principle, this would work?

In general, I would suggest looking as much as possible at sensory experiences that vary among humans; there's already enough interesting material there without wondering if there are even other differences. Can we explain enough interesting things about the difference between normal hearing and perfect pitch without talking about qualia?

Once we've done that, are we still interested in discussing qualia in color?

comment by CronoDAS · 2012-12-11T05:19:47.419Z · LW(p) · GW(p)

http://lesswrong.com/lw/p5/brain_breakthrough_its_made_of_neurons/

http://lesswrong.com/lw/p3/angry_atoms/

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T07:01:28.184Z · LW(p) · GW(p)

http://lesswrong.com/lw/p5/brain_breakthrough_its_made_of_neurons/

So your argument is "Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient"?

http://lesswrong.com/lw/p3/angry_atoms/

So your argument is "We have explained some things physically before, therefore we can explain consciousness physically"?

Also, we can cause certain sensations on demand by electrically stimulating certain brain parts.

So your argument is "Mental states have physical causes, so they must be identical with certain brain-states"?

Set aside whether any of these would satisfy a dualist or agnostic; should they satisfy one?

Replies from: CronoDAS
comment by CronoDAS · 2012-12-12T03:57:14.039Z · LW(p) · GW(p)

So your argument is "Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient"?

Well, it's certainly possible to do arithmetic without consciousness; I'm pretty sure an abacus isn't conscious. But there should be a way to look at a clump of matter and tell whether it is conscious or not (at least as well as we can tell the difference between a clump of matter that is alive and a clump of matter that isn't).

So your argument is "We have explained some things physically before, therefore we can explain consciousness physically"?

It's a bit stronger than that: we have explained basically everything physically, including every other example of anything that was said to be impossible to explain physically. The only difference between "explaining the difference between conscious matter and non-conscious matter" and "explaining the difference between living and non-living matter" is that we don't yet know how to do the former.

I think we're hitting a case of "one man's modus ponens is another man's modus tollens" here. Physicalism implies that the "hard problem of consciousness" is solvable; physicalism is true; therefore the hard problem of consciousness has a solution. That's the simplest form of my argument.

Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn't solvable, but if you disagree I don't think I can persuade you otherwise.

Replies from: Decius, RobbBB
comment by Decius · 2012-12-12T21:45:23.438Z · LW(p) · GW(p)

No abacus can do arithmetic. An abacus just sits there.

No backhoe can excavate. A backhoe just sits there.

A trained agent can use an abacus to do arithmetic, just as one can use a backhoe to excavate. Can you define "do arithmetic" in such a manner that it is at least as easy to prove that arithmetic has been done as it is to prove that excavation has been done?

Replies from: CronoDAS
comment by CronoDAS · 2012-12-13T02:38:51.879Z · LW(p) · GW(p)

Does a calculator do arithmetic?

Replies from: Decius
comment by Decius · 2012-12-14T00:20:43.259Z · LW(p) · GW(p)

I've watched mine for several hours, and it hasn't. Have you observed a calculator doing arithmetic? What would it look like?

Replies from: wedrifid, CronoDAS
comment by wedrifid · 2012-12-14T11:42:13.707Z · LW(p) · GW(p)

I've watched mine for several hours, and it hasn't.

No, you haven't. (p=0.9)

Have you observed a calculator doing arithmetic? What would it look like?

It could look like an electronic object with a plastic shell that starts with "(23 + 54) / (47 * 12 + 76) + 1093" on the screen and, some small amount of time after an apple falls from a tree and hits the "Enter" button, some number appears on the screen below the earlier input, beginning with "1093.1", with some other decimal digits following.

If the above doesn't qualify as the calculator doing "arithmetic" then you're just using the word in a way that is not just contrary to common usage but also a terrible way to carve reality.
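For what it's worth, checking that expression directly (my own verification, not part of the comment):

    result = (23 + 54) / (47 * 12 + 76) + 1093
    print(result)   # 1093.1203125, the value quoted later in the thread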

Replies from: MugaSofer, Decius
comment by MugaSofer · 2012-12-14T13:34:14.795Z · LW(p) · GW(p)

I've watched mine for several hours, and it hasn't.

No, you haven't. (p=0.9)

Upvoted for this alone.

comment by Decius · 2012-12-15T00:03:23.012Z · LW(p) · GW(p)

I didn't do that immediately prior to posting, but I have watched my calculator for a cumulative period of time exceeding several hours, and it has never done arithmetic. I have done arithmetic using said calculator, but that is precisely the point I was trying to make.

Does every device which looks like that do arithmetic, or only devices which could in principle be used to calculate a large number of outcomes? What about an electronic device that only alternates between displaying "(23 + 54) / (47 * 12 + 76) + 1093" and "1093.1203125" (or "1093.15d285805de42") and does nothing else?

Does a bucket do arithmetic because the number of pebbles which fall into the bucket, minus the number of pebbles which fall out of the bucket, is equal to the number of pebbles in the bucket? Or does the shepherd do arithmetic using the bucket as a tool?

Replies from: wedrifid
comment by wedrifid · 2012-12-15T00:44:36.826Z · LW(p) · GW(p)

I didn't do that immediately prior to posting, but I have watched my calculator for a cumulative period of time exceeding several hours, and it has never done arithmetic. I have done arithmetic using said calculator, but that is precisely the point I was trying to make.

And I would make one of the following claims:

  • Your calculator has done arithmetic, or
  • You are using your calculator incorrectly (it's not a paperweight!), or
  • There is a usage of 'arithmetic' here that is a highly misleading way to carve reality.

Does every device which looks like that do arithmetic, or only devices which could in principle be used to calculate a large number of outcomes?

In the same way that a cardboard cutout of Decius with a speech bubble saying "5" over its head would not be said to be doing arithmetic, a device that looks like a calculator but just displays one outcome would not be said to be doing arithmetic.

I'm not sure how 'large' the number of outcomes must be, precisely. I can imagine particularly intelligent monkeys or particularly young children being legitimately described as doing rudimentary arithmetic despite being somewhat limited in their capability.

Does a bucket do arithmetic because the number of pebbles which fall into the bucket, minus the number of pebbles which fall out of the bucket, is equal to the number of pebbles in the bucket? Or does the shepherd do arithmetic using the bucket as a tool?

It would seem like in this case we can point to the system and say that system is doing arithmetic. The shepherd (or the shepherd's boss) has arranged the system so that the arithmetic algorithm is somewhat messily distributed in that way. Perhaps more interesting is the case where the bucket and pebble system has been enhanced with a piece of fabric which is disrupted by passing sheep, knocking in pebbles reliably, one each time. That system can certainly be said to be "counting the damn sheep", particularly since it so easily generalizes to counting other stuff that walks past.
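A minimal sketch of that enhanced bucket-and-pebble system (my own illustration, not wedrifid's): each passing sheep trips the fabric and knocks exactly one pebble into the bucket, so the pebble count just is the running tally.

    class PebbleBucket:
        def __init__(self):
            self.pebbles = 0

        def sheep_passes(self):
            self.pebbles += 1   # the fabric trigger knocks one pebble in

    bucket = PebbleBucket()
    for _ in range(7):          # seven sheep walk past the fence
        bucket.sheep_passes()
    print(bucket.pebbles)       # 7 -- the system has counted the sheep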

But now allow me to abandon my rather strong notions that "calculators multiply stuff and mechanical sheep counters count sheep". I'm curious just what the important abstract feature of the universe is that you are trying to highlight as the core feature of 'arithmetic'. It seems to be something to do with active intent by a generally intelligent agent? So that whenever adding or multiplying is done we need to track down what caused said adding or multiplication to be done, tracing the causal chain back to something that qualifies as having 'intention' and say that the 'arithmetic' is being done by that agent? (Please correct me if I'm wrong here, this is just my best effort to resolve your usage into something that makes sense to me!)

Replies from: Decius
comment by Decius · 2012-12-15T01:04:27.484Z · LW(p) · GW(p)

It's not a feature of arithmetic; it's a feature of doing.

I attribute 'doing' an action to the user of the tool, not to the tool. It is a rare case in which I regard an artifact as an agent; if the mechanical sheep counter provided some signal to indicate the number or presence of sheep outside the fence, I would call it a machine that counts sheep. If it were simply a mechanical system that moved pebbles into and out of a bucket, I would say that counting the sheep is done by the person who looks in the bucket.

If a calculator does arithmetic, do the components of the calculator do arithmetic, or only the calculator as a whole? Or is it the larger system of which the calculator is a part that does arithmetic?

I'm still looking for a definition of 'arithmetic' which allows me to be as sure about whether arithmetic has been done as I am sure about whether excavation has been done.

comment by CronoDAS · 2012-12-14T00:57:18.324Z · LW(p) · GW(p)

Well, you do have to press certain buttons for it to happen. ;) And it looks like voltages changing inside an integrated circuit that lead to changes in a display of some kind. Anyway, if you insist on an example of something that "does arithmetic" without any human intervention whatsoever, I can point to the arithmetic logic unit inside a plugged-in arcade machine in attract mode.

And if you don't want to call what an arithmetic logic unit does when it takes a set of inputs and returns a set of outputs "doing arithmetic", I'd have to respond that we're now arguing about whether trees that fall in a forest with no people make a sound and aren't going to get anywhere. :P

Replies from: Decius, MugaSofer
comment by Decius · 2012-12-15T00:29:22.723Z · LW(p) · GW(p)

Well, yeah. My question:

Can you define "do arithmetic" in such a manner that it is at least as easy to prove that arithmetic has been done as it is to prove that excavation has been done?

Is still somewhat important to the discussion. I can't define arithmetic well enough to determine if it has occurred in all cases, but 'changes on a display' is clearly neither necessary nor sufficient.

Replies from: CronoDAS
comment by CronoDAS · 2012-12-15T01:13:17.516Z · LW(p) · GW(p)

Well, I'd say that a system is doing arithmetic if it has behavior that looks like it corresponds with the mathematical functions that define arithmetic. In other words, it takes as inputs things that are representations of such things as "2", "3", and "+" and returns an output that looks like "6". In an arithmetic logic unit, the inputs and outputs that represent numbers and operations are voltages. It's extremely difficult, but it is possible to use a microscopic probe to measure the internal voltages in an integrated circuit as it operates. (Mostly, we know what's going on inside a chip by far more indirect means, such as the "changes on a screen" you mentioned.)

There is indeed a lot of wiggle room here; a sufficiently complicated scheme can make anything "represent" anything else, but that's a problem beyond the scope of this comment. ;)

edit: I'm an idiot, 2 + 3 = 5. :(
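(A minimal sketch of this behavioral criterion, in Python; the encode/decode scheme, the function names, and the sample pairs are illustrative assumptions, not anything specified in the thread:)

```python
# A toy behavioral test for "doing arithmetic": a black box counts as an adder
# (under this criterion) if, for sampled inputs, decoding its output always
# matches the sum of the decoded inputs. The encode/decode scheme is the
# "representation" being argued about; here it is just decimal digit strings.

def encode(n):
    return str(n)          # e.g. 2 -> "2"

def decode(s):
    return int(s)          # e.g. "5" -> 5

def behaves_like_addition(black_box, samples):
    """Return True if black_box's input/output behavior matches '+'
    on every sampled pair, under the chosen encoding."""
    return all(
        decode(black_box(encode(a), encode(b))) == a + b
        for a, b in samples
    )

# Usage: a calculator-like black box we only observe from the outside.
calculator = lambda x, y: encode(decode(x) + decode(y))
print(behaves_like_addition(calculator, [(2, 3), (7, 11)]))  # True
```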

Replies from: Decius
comment by Decius · 2012-12-15T01:29:09.961Z · LW(p) · GW(p)

Note that neither an abacus nor a calculator in a vacuum satisfies that definition.

I'll allow voltages and mental states to serve as evidence, even if they are not possible to measure directly.

Does a calculator with no labels on the buttons do arithmetic in the same sense that a standard one does?

Does the phrase "2+3=6" do arithmetic? What about the phrase "2*3=6"?

I will accept as obvious that arithmetic occurs in the case of a person using a calculator to perform arithmetic, but it is not obvious precisely during what periods arithmetic is and is not occurring.

comment by MugaSofer · 2012-12-14T12:42:49.614Z · LW(p) · GW(p)

Anyway, if you insist on an example of something that "does arithmetic" without any human intervention whatsoever, I can point to the arithmetic logic unit inside a plugged-in arcade machine in attract mode.

... which was plugged in and switched on by, well, a human.

I think the OP is using their own idiosyncratic definition of "doing" to require a conscious agent. This is more common among those confused by free will.

comment by Rob Bensinger (RobbBB) · 2012-12-12T04:22:42.455Z · LW(p) · GW(p)

The only difference between "explaining the difference between conscious matter and non-conscious matter" and "explaining the difference between living and non-living matter" is that we don't yet know how to do the former.

It's impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you're a dualist or a physicalist, I think a good litmus test for whether you've grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.

Physicalism implies that the "hard problem of consciousness" is solvable; physicalism is true; therefore the hard problem of consciousness has a solution.

Physicalism, plus the unsolvability of the Hard Problem (i.e., the impossibility of successful Type-C Materialism), implies that either Type-B Materialism ('mysterianism') or Type-A Materialism ('eliminativism') is correct. Type-B Materialism despairs of a solution while for some reason keeping the physicalist faith; Type-A Materialism dissolves the problem rather than solving it on its own terms.

Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn't solvable

The probability of physicalism would need to approach 1 in order for that to be the case.

Replies from: CronoDAS, CronoDAS
comment by CronoDAS · 2012-12-12T05:09:02.017Z · LW(p) · GW(p)

It's impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you're a dualist or a physicalist, I think a good litmus test for whether you've grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.

::follows link::

Call me the Type-C Materialist subspecies of eliminativist, then. I think that a sufficient understanding of the brain will make the solution obvious; the reason we don't have a "functional" explanation of subjective experience is not because the solution doesn't exist, but because we don't know how to do it.

Van Gulick (1993) suggests that conceivability arguments are question-begging, since once we have a good explanation of consciousness, zombies and the like will no longer be conceivable.

This is where I think we'll end up.

comment by CronoDAS · 2012-12-12T05:13:12.310Z · LW(p) · GW(p)

Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn't solvable

The probability of physicalism would need to approach 1 in order for that to be the case.

It's a lot closer to 1 than a clever-sounding impossibility argument. See: http://lesswrong.com/lw/ph/can_you_prove_two_particles_are_identical/

comment by Eugine_Nier · 2012-12-11T01:53:39.829Z · LW(p) · GW(p)
  1. A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.

What's your reason for believing this? The standard empiricist argument against zombies is that they don't constrain anticipated experience.

One problem with this line of thought is that we've just thrown out the very concept of "experience" which is the basis of empiricism. The other problem is that the statement is false: the question of whether I will become a zombie tomorrow does constrain my anticipated experiences; specifically, it tells me whether I should anticipate having any.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T02:12:00.908Z · LW(p) · GW(p)

I'm not a positivist, and I don't argue like one. I think nearly all the arguments against the possibility of zombies are very silly, and I agree there's good prima facie evidence for dualism (though I think that in the final analysis the weight of evidence still favors physicalism). Indeed, it's a good thing I don't think zombies are impossible, since I think that we are zombies.

What's your reason for believing this?

My reason is twofold: Copernican, and Occamite.

Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts ('subjective' v. 'objective,' or 'mental' v. 'physical,' or 'point-of-view-bearing' v. 'point-of-view-lacking,' or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?

Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description -- the impersonal, 'objective' kind, which states a fact without specifying for whom the fact is. The world didn't need to turn out to be that way, just as it didn't need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.

Neither of these considerations, of course, is conclusive. But they give us some reason to at least take seriously physicalist hypotheses, and to weight their theoretical costs and benefits against the dualists'.

One problem with this line of thought is that we've just thrown out the very concept of "experience" which is the basis of empiricism.

We've thrown out the idea of subjective experience, of pure, ineffable 'feels,' of qualia. But we retain any functionally specifiable analog of such experience. In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.

And since most dualists already accepted the causal/functional/physical process in question (they couldn't even motivate the zombie argument if they didn't consider the physical causally adequate), there can be no parsimony argument against the physicalists' posits; the only argument will have to be a defense of the claim that there is some sort of basic, epistemically infallible acquaintance relation between the contents of experience and (themselves? a Self??...). But making such an argument, without begging the question against eliminativism, is actually quite difficult.

Replies from: thomblake, Peterdjones, Oligopsony, Eugine_Nier
comment by thomblake · 2012-12-13T19:42:43.658Z · LW(p) · GW(p)

In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.

At this point, you're just using the language wrong. "knowledge" refers to what you're calling "zombie-knowledge" - whenever we point to an instance of knowledge, we mean whatever it is humans are doing. So "humans are zombies" doesn't work, unless you can point to some sort of non-human non-zombies that somehow gave us zombies the words and concepts of non-zombies.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-13T21:26:20.158Z · LW(p) · GW(p)

At this point, you're just using the language wrong.

That assumes a determinate answer to the question 'what's the right way to use language?' in this case. But the facts on the ground may underdetermine whether it's 'right' to treat definitions more ostensively (i.e., if Berkeley turns out to be right, then when I say 'tree' I'm picking out an image in my mind, not a non-existent material plant Out There), or 'right' to treat definitions as embedded in a theory, an interpretation of the data (i.e., Berkeley doesn't really believe in trees as we do, he just believes in 'tree-images' and misleadingly calls those 'trees'). Either of these can be a legitimate way that linguistic communities change over time; sometimes we keep a term's sense fixed and abandon it if the facts aren't as we thought, whereas sometimes we're more intensionally wishy-washy and allow terms to get pragmatically redefined to fit snugly into the shiny new model. Often it depends on how quickly, and how radically, our view of the world changes.

(Though actually, qualia may raise a serious problem for ostension-focused reference-fixing: It's not clear what we're actually ostending, if we think we're picking out phenomenal properties but those properties are not only misconstrued, but strictly non-existent. At least verbal definitions have the advantage that we can relatively straightforwardly translate the terms involved into our new theory.)

Moreover, this assumes that you know how I'm using the language. I haven't said whether I think 'knowledge' in contemporary English denotes q-knowledge (i.e., knowledge including qualia) or z-knowledge (i.e., causal/functional/behavioral knowledge, without any appeal to qualia). I think it's perfectly plausible that it refers to q-knowledge, hence I hedge my bets when I need to speak more precisely and start introducing 'zombified' terms lest semantic disputes interfere in the discussion of substance. But I'm neutral both on the descriptive question of what we mean by mental terms (how 'theory-neutral' they really are), and on the normative question of what we ought to mean by mental terms (how 'theory-neutral' they should be). I'm an eliminativist on the substantive questions; on the non-substantive question of whether we should be revisionist or traditionalist in our choice of faux-mental terminology, I'm largely indifferent, as long as we're clear and honest in whatever semantic convention we adopt.

comment by Peterdjones · 2012-12-13T19:25:40.263Z · LW(p) · GW(p)

Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts ('subjective' v. 'objective,' or 'mental' v. 'physical,' or 'point-of-view-bearing' v. 'point-of-view-lacking,' or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?

It's not surprising that a system should have special insight into itself. If a type of system had special insight into some other, unrelated, type of system, then that would be peculiar. If every system had insights (panpsychism), that would also be peculiar. But it is not unexpected that a system capable of having insights should have special insight into itself.

Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds).

That is not obvious. If the two kinds of stuff (or rather, property) are fine-grainedly picked from some space of stuffs (or rather, properties), then that would be more unlikely than just one being picked.

OTOH, if you have just one, coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, ie every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.

(It's all about information and probability. Adding one fine-grained kind of stuff to another means that two low probabilities get multiplied together, leading to a very low one that needs a lot of explaining. Having every logically possible kind of stuff has a high probability, because we don't need a lot of information to pinpoint the universe.)

So... if you think of Mind as some very specific thing, the Occamite objection goes through. However, modern dualists are happy that most aspects of consciousness have physical explanations. Chalmers-style dualism is about explaining qualia, phenomenal qualities. The quantitative properties (Chalmers calls them structural-functional) of physicalism and intrinsically qualitative properties form a dyad that covers property-space in the same way that the matter-antimatter dyad covers stuff-space. In this way, modern dualism can avoid the Copernican Objection.

It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description -- the impersonal, 'objective' kind, which states a fact without specifying for whom the fact is.

(Here comes the shift from properties to aspects).

Although it does specify that the fact is outside me. If physical and mental properties are both intrinsic to the world, then the physical properties seem to be doing most of the work, and the mental ones seem redundant. However, if objectivity is seen as a perspective, ie an external perspective, it is no longer an empirical fact. It is then a tautology that the external world will seem, from the outside, to be objective, because objectivity just is the view from outside. And subjectivity, likewise, is the view from inside, and not any extra stuff, just another way of looking at the same stuff. There are, in any case, a set of relations between a thing and itself, and another set between a thing and other things. Nothing novel is being introduced by noting the existence of inner and outer aspects. The novel content of the Dual Aspect solution lies in identifying the Objective Perspective with quantities (broadly including structures and functions) and the Subjective Perspective with qualities, so that Subjective Qualities, qualia, are just how neuronal processing seems from the inside. This point needs justification, which I believe I have, but will not mention here.

As far as physicalism is concerned: physicalism has many meanings. Dual aspect theory is incompatible with the idea that the world is intrinsically objective and physical, since these are not intrinsic characteristics, according to DAT. DAT is often and rightly associated with neutral monism, the idea that the world is in itself neither mental nor physical, neither objective nor subjective. However, this in fact changes little for most physicalists: it does not suggest that there are any ghostly substances or undetectable properties. Nothing changes methodologically; naturalism, interpreted as the investigation of the world from the objective perspective, can continue. The Strong Physicalist claim that a complete physical description of the world is a complete description tout court becomes problematic. Although such a description is a description of everything, it nonetheless leaves out the subjective perspectives embedded in it, which cannot be recovered, just as Mary the superscientist cannot recover the subjective sensation of Red from the information she has. I believe that a correct understanding of the nature of information shows that "complete information" is a logically incoherent notion in any case, so that DAT does not entail the loss of anything that was ever available in that respect. Furthermore, the absence of complete information has little practical upshot because of the unfeasibility of constructing such a complete description in the first place. All in all, DAT means physicalism is technically false in a way that changes little in practice. The flipside of DAT is Neutral Monism. NM is an inherently attractive metaphysics, because it means that the universe has no overall characteristic left dangling in need of an explanation -- no "why physical, rather than mental?".

As far as causality is concerned, the fact that a system's physical or objective aspects are enough to predict its behaviour does not mean that its subjective aspects are an unnecessary multiplication of entities, since they are only a different perspective on the same reality. Causal powers are vested in the neutral reality of which the subjective and the objective are just aspects. The mental is neither causal in itself, nor causally idle in itself; it is rather a perspective on what is causally empowered. There are no grounds for saying that either set of aspects is exclusively responsible for the causal behaviour of the system, since each is only a perspective on the system.

I have avoided the Copernican problem (special pleading for human consciousness) by pinning mentality, and particularly subjectivity, to a system's internal and self-reflexive relations. The counterpart to excessive anthropocentrism is insufficient anthropocentrism, ie free-wheeling panpsychism, or the Thinking Rock problem. I believe I have a way of showing that it is logically inevitable that simple entities cannot have subjective states that are significantly different from their objective descriptions.

Replies from: RobbBB, RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-14T00:55:42.707Z · LW(p) · GW(p)

Nothing novel is being introduced by noting the existence of inner and outer aspects.

I'm not sure I understand what an 'aspect' is, in your model. I can understand a single thing having two 'aspects' in the sense of having two different sets of properties accessible in different viewing conditions; but you seem to object to the idea of construing mentality and physicality as distinct property classes.

I could also understand a single property or property-class having two 'aspects' if the property/class itself were being associated with two distinct sets of second-order properties. Perhaps "being the color of chlorophyll" and "being the color of emeralds" are two different aspects of the single property green. Similarly, then, perhaps phenomenal properties and physical properties are just two different second-order construals of the same ultimately physical, or ultimately ideal, or perhaps ultimately neutral (i.e., neither-phenomenal-nor-physical), properties.

I call the option I present in my first paragraph Property Dualism, and the option I present in my second paragraph Multi-Label Monism. (Note that these may be very different from what you mean by 'property dualism' and 'neutral monism;' some people who call themselves 'neutral monists' sound more to me like 'neutral trialists,' in that they allow mental and physical properties into their ontology in addition to some neutral substrate. True monism, whether neutral or idealistic or physicalistic, should be eliminative or reductive, not ampliative.) Is Dual Aspect Theory an intelligible third option, distinct from Property Dualism and Multi-Label Monism as I've distinguished them? And if so, how can I make sense of it? Can you coax me out of my parochial object/property-centric view, without just confusing me?

I'm also not sure I understand how reflexive epistemic relations work. Epistemic relations are ordinarily causal. How does reflexive causality work? And how do these 'intrinsic' properties causally interact with the extrinsic ones? How, for instance, does positing that Mary's brain has an intrinsic 'inner dimension' of phenomenal redness Behind The Scenes somewhere help us deterministically explain why Mary's extrinsic brain evolves into a functional state of surprise when she sees a red rose for the first time? What would the dynamics of a particle or node with interactively evolving intrinsic and extrinsic properties look like?

A third problem: You distinguish 'aspects' by saying that the 'subjective perspective' differs from the 'objective perspective.' But this also doesn't help, because it sounds anthropocentric. Worse, it sounds mentalistic; I understand the mental-physical distinction precisely inasmuch as I understand the mental as perspectival, and the physical as nonperspectival. If the physical is itself 'just a matter of perspective,' then do we end up with a dualistic or monistic theory, or do we instead end up with a Berkeleian idealism? I assume not, and that you were speaking loosely when you mentioned 'perspectives;' but this is important, because what individuates 'perspectives' is precisely what lends content to this 'Dual-Aspect' view.

All in all, DAT means physicalism is technically false in a way that changes little in practice.

Yes, I didn't consider the 'it's not physicalism!!' objection very powerful to begin with. Parsimony is important, but 'physicalism' is not a core methodological principle, and it's not even altogether clear what constraints physicalism entails.

comment by Rob Bensinger (RobbBB) · 2012-12-13T22:25:30.181Z · LW(p) · GW(p)

It's not surprising that a system should have special insight into itself.

It's not surprising that an information-processing system able to create representations of its own states would be able to represent a lot of useful facts about its internal states. It is surprising if such a system is able to infallibly represent its own states to itself; and it is astounding if such a system is able to self-represent states that a third-person observer, dissecting the objective physical dynamics of the system, could never in principle fully discover from an independent vantage point. So it's really a question of how 'special' we're talking.

If a type of system had special insight into some other, unrelated, type of system, then that would be peculiar.

I'm not clear on what you mean. 'Insight' is, presumably, a causal relation between some representational state and the thing represented. I think I can more easily understand a system's having 'insight' into something else, since it's easier for me to model veridical other-representation than veridical self-representation. (The former, for instance, leads to no immediate problems with recursion.) But perhaps you mean something special by 'insight.' Perhaps by your lights, I'm just talking about outsight?

If every system had insights (panpsychism), that would also be peculiar.

If some systems have an automatic ability to non-causally 'self-grasp' themselves, by what physical mechanism would only some systems have this capacity, and not all?

if you have just one, coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, ie every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.

If you could define a thingspace that meaningfully distinguishes between and admits of both 'subjective' and 'objective' facts (or properties, or events, or states, or thingies...), and that non-question-beggingly establishes the impossibility or incoherence of any other fact-classifications of any analogous sorts, then that would be very interesting. But I think most people would resist the claim that this is the one unique parameter of this kind (whatever kind that is, exactly...) that one could imagine varying over models; and if this parameter is set to value '2,' then it remains an open question why the many other strangely metaphysical or strangely anthropocentric parameters seem set to '1' (or to '0,' as the case may be).

But this is all very abstract. It strains comprehension just to entertain a subjective/objective distinction. To try to rigorously prove that we can open the door to this variable without allowing any other Aberrant Fundamental Categorical Variables into the clubhouse seems a little quixotic to me. But I'd be interested to see an attempt at this.

A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.

Sure, though there's a very important disparity between observed asymmetries between actual categories of things, and imagined asymmetries between an actual category and a purely hypothetical one (or, in this case, a category with a disputed existence). In principle the reasoning should work the same, but in practice our confidence in reasoning coherently (much less accurately!) about highly abstract and possibly-not-instantiated concepts should be extremely low, given our track record.

The quantitative properties (Chalmers calls them structural-functional) of physicalism and intrinsically qualitative properties form a dyad that covers property-space

How do we know that? If we were zombies, prima facie it seems as though we'd have no way of knowing about, or even positing in a coherent formal framework, phenomenal properties. But in that case, any analogous possible-but-not-instantiated-property-kinds that would expand the dyad into a polyad would plausibly be unknowable to us. (We're assuming for the moment that we do have epistemic access to phenomenal and physical properties.) Perhaps all carbon atoms, for instance, have unobservable 'carbonomenal properties,' (Cs) which are related to phenomenal and physical properties (P1s and P2s) in the same basic way that P1s are related to P2s and Cs, and that P2s are related to P1s and Cs. Does this make sense? Does it make sense to deny this possibility (which requires both that it be intelligible and that we be able to evaluate its probability with any confidence), and thereby preserve the dyad? I am bemused.

comment by Oligopsony · 2012-12-11T02:58:29.541Z · LW(p) · GW(p)

1) If you embrace SSA, then you being you should be more likely on humans being important than on panpsychism, yes? (You may of course have good reasons for preferring SIA.)

2) Suppose again redundantly dual panpsychism. Is there any a priori reason (at this level of metaphysical fancy) to rule out that experiences could causally interact with one another in a way that is isomorphic to mechanical interactions? Then we have a sort of idealist field describable by physics, perfectly monist. Or is this an illegitimate trick?

(Full disclosure: I'd consider myself a cautious physicalist as well, although I'd say psi research constitutes a bigger portion of my doubt than the hard problem.)

Replies from: Alejandro1, Vertigo
comment by Alejandro1 · 2012-12-11T17:07:33.420Z · LW(p) · GW(p)

The theory you propose in (2) seems close to Neutral Monism. It has fallen into disrepute (and near oblivion) but was the preferred solution to the mind-body problem of many significant philosophers of the late 19th-early 20th, in particular of Bertrand Russell (for a long period). A quote from Russell:

We shall seek to construct a metaphysics of matter which shall make the gulf between physics and perception as small, and the inferences involved in the causal theory of perception as little dubious, as possible. We do not want the percept to appear mysteriously at the end of a causal chain composed of events of a totally different nature; if we can construct a theory of the physical world which makes its events continuous with perception, we have improved the metaphysical status of physics, even if we cannot prove more than that our theory is possible.

comment by Vertigo · 2012-12-11T03:32:15.489Z · LW(p) · GW(p)

Ooo! Seldom do I get to hear someone else voice my version of idealism. I still have a lot of thinking to do on this, but so far it seems to me perfectly legitimate. An idealism isomorphic to mechanical interactions dissolves the Hard Problem of consciousness by denying a premise. It also does so with more elegance than reductionism since it doesn't force us through that series of flaming hoops that orbits and (maybe) eventually collapses into dualism.

This seems more likely to me so far than all the alternatives, so I guess that means I believe it, but not with a great deal of certainty. So far every objection I've heard or been able to imagine has amounted to something like, "But but but the world's just got to be made out of STUFF!!!" But I'm certainly not operating under the assumption that these are the best possible objections. I'd love to see what happens with whatever you've got to throw at my position.

comment by Eugine_Nier · 2012-12-11T02:47:02.230Z · LW(p) · GW(p)

Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description -- the impersonal, 'objective' kind, which states a fact without specifying for whom the fact is. The world didn't need to turn out to be that way, just as it didn't need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.

The problem is that we already have two kinds of fundamental facts (and I would argue we need more). Consider Eliezer's use of "magical reality fluid" in this post. If you look at the context, it's clear that he's trying to ask whether the inhabitants of the non-causally simulated universes possess qualia without having to admit he cares about qualia.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T02:55:52.224Z · LW(p) · GW(p)

Eliezer thinks we'll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves. Personally, I'm an agnostic about Many Worlds, so I'm even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.

I also don't reify logical constructs, so I don't believe in a bonus category of Abstract Thingies. I'm about as monistic as physicalists come. Mathematical platonists and otherwise non-monistic Serious Scientifically Minded People, I think, do have much better reason to adopt dualism than I do, since the inductive argument against Bonus Fundamental Categories is weak for them.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-13T04:11:14.067Z · LW(p) · GW(p)

Eliezer thinks we'll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves.

I could define the Hard Problem of Reality, which really is just an indirect way of talking about the Hard Problem of Consciousness.

Personally, I'm an agnostic about Many Worlds, so I'm even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.

As Eliezer discusses in the post, Reality Fluid isn't just for Many Worlds; it also relates to questions about simulation.

I also don't reify logical constructs

Here's my argument for why you should.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-13T04:41:04.080Z · LW(p) · GW(p)

As Eliezer discusses in the post, Reality Fluid isn't just for Many Worlds; it also relates to questions about simulation.

Only as a side-effect. In all cases, I suspect it's an idle distraction; simulation, qualia, and Born-probability models do have implications for each other, but it's unlikely that combining three tough problems into a single complicated-and-tough problem will help gin up any solutions here.

Here's my argument for why you should.

Give me an example of some logical constructs you think I should believe in. Understand that by 'logical construct' I mean 'causally inert, nonspatiotemporal object.' I'm happy to sort-of-reify spatiotemporally instantiated properties, including relational properties. For instance, a simple reason why I consistently infer that 2 + 2 = 4 is that I live in a universe with multiple contiguous spacetime regions; spacetime regions are similar to each other, hence they instantiate the same relational properties, and this makes it possible to juxtapose objects and reason with these recurrent relations (like 'being two arbitrary temporal intervals before' or 'being two arbitrary spatial intervals to the left of').

comment by [deleted] · 2012-12-10T15:01:30.156Z · LW(p) · GW(p)

Daniel Dennett's 'Quining Qualia' (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken ('round these parts) to have laid the theory of qualia to rest. Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories, though it's Sellars' "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.

Replies from: ArisKatsaris, None, Eliezer_Yudkowsky, Manfred, RobbBB, Peterdjones
comment by ArisKatsaris · 2012-12-10T16:07:24.250Z · LW(p) · GW(p)

Daniel Dennett's 'Quining Qualia' (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken ('round these parts) to have laid the theory of qualia to rest.

I've not actually read this essay (will do so later today), but I disagree that most people here consider the issue of qualia and the "hard problem of consciousness" to be a solved one.

Time for a poll.

[pollid:372]

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-12T13:43:29.521Z · LW(p) · GW(p)

What about “I'd need to think more about this”?

comment by [deleted] · 2012-12-11T03:08:10.049Z · LW(p) · GW(p)

I just read 'Quining Qualia'. I do not see it as a solution to the hard problem of consciousness, at all. However, I did find it brilliant - it shifted my intuition from thinking that conscious experience is somehow magical and inexplicable to thinking that it is plausible that conscious experience could, one day, be explained physically. But to stop here would be to give a fake explanation...the problem has not yet been solved.

A triumphant thundering refutation of [qualia], an absolutely unarguable proof that [qualia] cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.

-- Eliezer Yudkowsky, Dissolving the Question

Also, does anyone disagree with anything that Dennett says in the paper, and, if so, what, and why?

Replies from: Peterdjones, None
comment by Peterdjones · 2012-12-11T12:42:21.265Z · LW(p) · GW(p)

I think I have qualia. I probably don't have qualia as defined by Dennett, as simultaneously ineffable, intrinsic, etc, but there are nonetheless ways things seem to me.

comment by [deleted] · 2012-12-13T10:20:02.863Z · LW(p) · GW(p)

It may be just my opinion, but please don't quote people and then insert edits into the quotation. Although at least you did do that with parentheses.

By doing so you seem to say that free will and qualia are the same or interchangeable topics that share arguments for and against. But that is not the case. The question of free will is often misunderstood and is much easier to handle.

Qualia is, in my opinion, the abstract structure of consciousness. So on the underlying basic level you have physics and purely physical things, and on the more abstract level you have structure that is transitive with the basic level.

To illustrate what this means, I think Eliezer had an excellent example (though I'm not sure if his intention was similar): the spiking pattern of blue versus actually seeing blue. Even the spiking pattern is far from completely reduced, but the idea is the same: on the level of consciousness you have experience which corresponds to a basic-level thing. Very similar to the map and territory analogy.

Color vision is hard to approach, though, and it might be easier to start off with binary vision of one pixel: it's either 1 or 0. Imagine replacing your entire visual cortex with something that only outputs 1 or 0 - though the brain is not binary - so that your entire field of vision has only two distinct experienced states. Doing that will certainly invite the mind-projection fallacy, since you can't actually change your visual cortex to output only 1 or 0. Still, the rest of your consciousness has access to that information, and it's very much easier to see how this binary state affects the decisions you make, and much easier to make the transition from experience to physics and logic. Then you can work your way back up toward normal vision: several pixels that are each 1 or 0, then grayscale vision. But colors make it much harder. But this doesn't resolve the qualia issue - what would it feel like to have 1-bit vision? How do you produce a set of rules that is transitive with the experience of vision?

Even if you grind everything down to the finest powder it still will be hard to see where this qualia business comes from, because you exist between the lines.

Replies from: None
comment by [deleted] · 2012-12-13T17:22:38.177Z · LW(p) · GW(p)

But this doesn't resolve the qualia issue - what would it feel like to have 1-bit vision? How do you produce a set of rules that is transitive with the experience of vision?

I agree that that doesn't resolve the qualia issue. To begin with, we'd need to write a SeeRed() function that will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function. Even epiphenomenalists agree that this can be done, since they say consciousness has no physical effect on behavior. But here is my intuition (and pretty much every other reductionist's, I reckon) that leads me to reject epiphenomenalism: When I say, out loud (so there is a physical effect) "Wow, this flower I am holding is beautiful!", I am saying it because it actually looks beautiful to me! So I believe that, somehow, the perception is explainable, physically. And, at least for me, that intuition is much stronger than the intuition that conscious perception and computation are in separate magisteria.

We'll be able to get a lot further in this discussion once someone actually writes a SeeRed() function, which both epiphenomenalists and reductionists agree can be done.

Meanwhile, dualists think writing such a SeeRed() function is impossible. Time will tell.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T17:50:14.746Z · LW(p) · GW(p)

So I believe that, somehow, the perception is explainable, physically. And, at least for me, that intuition is much stronger than the intuition that conscious perception and computation are in separate magisteria.

It's possible for physicalism to be true, and computationalism false.

We'll be able to get a lot further in this discussion once someone actually writes a SeeRed() function, which both epiphenomenalists and reductionists agree can be done.

I'll say. Solving the problem does tend to solve the problem.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-10T23:25:28.520Z · LW(p) · GW(p)

I haven't read either of those but will read them. Also I totally think there was a respectable hard problem and can only stare somewhat confused at people who don't realize what the fuss was about. I don't agree with how Chalmers tries to answer his problem, but his attempt to pinpoint exactly what seems so confusing seems very spot-on. I haven't read anything very impressive yet from Dennett on the subject; could be that I'm reading the wrong things. Gary Drescher on the other hand is excellent.

It could be that I'm atypical for LW.

EDIT: Skimmed the Dennett one, didn't see much of anything relatively new there; the Sellars link fails.

Replies from: Karl, None
comment by Karl · 2012-12-11T03:52:51.875Z · LW(p) · GW(p)

Also I totally think there was a respectable hard problem

So you do have a solution to the problem?

comment by [deleted] · 2012-12-11T01:26:57.480Z · LW(p) · GW(p)

I'll take a look at Drescher, I haven't seen that one.

Try this link? http://selfpace.uconn.edu/class/percep/SellarsEmpPhilMind.pdf

Sellars is important to contemporary philosophy, to the extent that a standard course in epistemology will often end with EPM. I'm not sure it's entirely worth your time though, because it's an argument against classical (not Bayesian) empiricism.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T02:46:03.150Z · LW(p) · GW(p)

Pryor and BonJour explain Sellars better than Sellars does. See: http://www.jimpryor.net/teaching/courses/epist/notes/given.html

The basic question is over whether our beliefs are purely justified by other beliefs, or whether our (visual, auditory, etc.) perceptions themselves 'represent the world as being a certain way' (i.e., have 'propositional content') and, without being beliefs themselves, can lend some measure of support to our beliefs. Note that this is a question about representational content (intentionality) and epistemic justification, not about phenomenal content (qualia) and physicalism.

comment by Manfred · 2012-12-10T15:34:28.845Z · LW(p) · GW(p)

Right - to hammer on the point, the common-ish (EDIT: Looks like I was hastily generalizing) LW opinion is that there never was any "hard problem of consciousness" (EDIT: meaning one that is distinct from "easy" problems of consciousness, that is, the ones we know roughly how to go about solving). It's just that when we meet a problem that we're very ignorant about, a lot of people won't go "I'm very ignorant about this," they'll go "This has a mysterious substance, and so why would learning more change that inherent property?"

Replies from: None, Richard_Kennaway, Peterdjones
comment by [deleted] · 2012-12-10T15:41:41.328Z · LW(p) · GW(p)

It should be remembered though that the guy who's famous for formulating the hard problem of consciousness is:

1) A fan of EY's TDT, who's made significant efforts to get the theory some academic attention.
2) A believer in the singularity, and its accompanying problems.
3) The student of Douglas Hofstadter.
4) Someone very interested in AI.
5) Someone very well versed and interested in physics and psychology.
6) A rare but occasional poster on LW.
7) Very likely one of the smartest people alive.
etc. etc.

I think consciousness is reducible too, but David Chalmers is a serious dude, and the 'hard problem' is to be taken very, very seriously. It's very easy to not see a philosophical problem, and very easy to think that the problem must be solved by psychology somewhere, much harder to actually explain a solution/dissolution.

Replies from: Alejandro1, Manfred
comment by Alejandro1 · 2012-12-10T16:32:35.577Z · LW(p) · GW(p)

I agree with you about how smart Chalmers is and that he does very good philosophical work. But I think you have a mistake in terminology when you say

I think consciousness is reducible too, but David Chalmers is a serious dude, and the 'hard problem' is to be taken very, very seriously.

It is an understandable mistake, because it is natural to take "the hard problem" as meaning just "understanding consciousness", and I agree that this is a hard problem in ordinary terms and that saying "there is a reduction/dissolution" is not enough. But Chalmers introduced the distinction between the "hard problem" and the "easy problems" by saying that understanding the functional aspects of the mind, the information processing, etc, are all "easy problems". So a functionalist/computationalist materialist, like most people on this site, cannot buy into the notion that there is a serious "hard problem" in Chalmers' sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible. We should say instead that solving the "easy problems" is at the same time much less trivial than Chalmers makes it seem, and enough to fully account for consciousness.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T17:00:13.755Z · LW(p) · GW(p)

cannot buy into the notion that there is a serious "hard problem" in Chalmers' sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible.

No it isn't. Here is what Chalmers says:

"It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does."

There is no statement of irreducibility there. There is a statement that we have "no good explanation", and we don't.

Replies from: Alejandro1, dspeyer
comment by Alejandro1 · 2012-12-10T17:10:30.136Z · LW(p) · GW(p)

However, see how he contrasts it with the "easy problems" (from Consciousness and its Place in Nature - pdf):

What makes the easy problems easy? For these problems, the task is to explain certain behavioral or cognitive functions: that is, to explain how some causal role is played in the cognitive system, ultimately in the production of behavior. To explain the performance of such a function, one need only specify a mechanism that plays the relevant role. And there is good reason to believe that neural or computational mechanisms can play those roles.

What makes the hard problem hard? Here, the task is not to explain behavioral and cognitive functions: even once one has an explanation of all the relevant functions in the vicinity of consciousness—discrimination, integration, access, report, control—there may still remain a further question: why is the performance of these functions accompanied by experience?

It seems clear that for Chalmers any description in terms of behavior and cognitive function is by definition not addressing the hard problem.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T17:17:50.069Z · LW(p) · GW(p)

But that is not to say that qualia are irreducible things; it is to say that mechanical explanations of qualia have not worked to date.

comment by dspeyer · 2012-12-10T21:40:36.170Z · LW(p) · GW(p)

Why should physical processing give rise to a rich inner life at all?

What does this mean by "why"? What evolutionary advantage is there? Well, it enables imagination, which lets us survive a wider variety of dangers. What physical mechanism is there? That's an open problem in neurology, but they're making progress.

I've read this several times, and I don't see a hard philosophical problem.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T21:50:28.734Z · LW(p) · GW(p)

What does this mean by "why"?

It's definitely a how-it-happens "why" and not a how-did-it-evolve "why".

Well, it enables imagination,

There's more to qualia than free-floating representations. There is no reason to suppose an AI's internal maps have phenomenal feels, no way of testing that they do, and no way of engineering them in.

I've read this several times, and I don't see a hard philosophical problem.

It's a hard scientific problem. How could you have a theory that tells you how the world seems to a bat on LSD? How can you write a SeeRed() function?

Replies from: DaFranker, Decius
comment by DaFranker · 2012-12-12T21:43:17.393Z · LW(p) · GW(p)

How can you write a SeeRed() function?

Presumably, the exact same way you'd write any other function.

In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.

If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human's "redness qualia". If prompted and sufficiently intelligent, this program will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function.

Of course, I'm arguing a bit by the premises here with "correct behavior" being "fully and coherently maintained". The space of inputs and outputs to take into account in order to make a program that would convince you of its possession of the redness qualia is too vast for us at the moment.

TL;DR: It all depends on what the SeeRed() function will be used for / how we want it to behave.
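(A deliberately minimal, purely behavioral sketch of a SeeRed() along these lines, in Python; the threshold rule and all names here are illustrative assumptions, and nothing in the sketch settles whether such an input/output mapping is accompanied by qualia, which is exactly what the thread is disputing:)

```python
# A toy, purely behavioral SeeRed(): it classifies an RGB input as "red" or
# "not red" by a threshold rule. On the input/output view, correct mapping
# is all that is asked of it; whether any quale accompanies the mapping is
# the Hard Problem question, which this code does not address.

def see_red(rgb):
    """Return 'red' if the pixel is predominantly red, else 'not red'.
    The threshold is an arbitrary illustrative choice."""
    r, g, b = rgb
    return "red" if r > 150 and r > 2 * max(g, b) else "not red"

print(see_red((200, 30, 40)))   # 'red'
print(see_red((40, 200, 40)))   # 'not red'
```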

Replies from: Peterdjones
comment by Peterdjones · 2012-12-12T21:59:35.399Z · LW(p) · GW(p)

In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.

False. In this case what matters is the perception of a red colour that occurs between input and output. That is what the Hard Problem, the problem of qualia, is about.

If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human's "redness qualia"

That doesn't mean there are no qualia (I have them, so I know there are). That also doesn't mean qualia just serendipitously arrive whenever the correct mapping from inputs to outputs is in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.

Replies from: DaFranker
comment by DaFranker · 2012-12-12T22:07:06.987Z · LW(p) · GW(p)

That doesn't mean there are no qualia (I have them, so I know there are). That also doesn't mean qualia just serendipitously arrive whenever the correct inputs and outputs are in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.

None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombielike would not cut it; you'd need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).

Obviously I haven't solved the Hard Problem just by saying this. However, I do greatly dislike your apparent premise* that qualia can never be dissolved to patterns and physics and logic.

* If this isn't among your premises or claims, then it still does appear that way, but apologies in advance for the strawmanning.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-12T22:12:50.369Z · LW(p) · GW(p)

None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombielike would not cut it; you'd need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).

Sorry, that is most definitely "serendipitously arrive". You don't know how to engineer the Redness in explicitly; you are just assuming it must be there if everything else is in place.

However, I do greatly dislike your apparent premise* that qualia can never be dissolved to patterns and physics and logic.

The claim is more like "hasn't been", and you haven't shown me a SeeRed().

comment by Decius · 2012-12-12T21:36:16.340Z · LW(p) · GW(p)

Is there a reason to suppose that anybody else's maps have phenomenal feels, a way of testing that they do, or a way of telling the difference? Why can't those ways be generalized to Intelligent entities in general?

Replies from: Peterdjones
comment by Peterdjones · 2012-12-12T22:03:25.246Z · LW(p) · GW(p)

Is there a reason to suppose that anybody else's maps have phenomenal feels,

Yes: naturalism. It would be naturalistically anomalous if their brains worked very similarly, but their phenomenology were completely different.

a way of testing that they do,

No. So what? Are you saying we are all p-zombies?

Replies from: DaFranker, Decius
comment by DaFranker · 2012-12-12T22:10:28.513Z · LW(p) · GW(p)

No. So what? Are you saying we are all p-zombies?

I don't know about Decius, but...

I am.

I'm also saying that it doesn't matter. The p-zombies are still conscious. They just don't have any added "conscious" XML tags as per some imaginary, crazy-assed unnecessary definition of "consciousness".

Tangential to that point: I think any morality system which relies on an external supernatural thingy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.

Replies from: nshepperd, Peterdjones
comment by nshepperd · 2012-12-13T03:38:00.089Z · LW(p) · GW(p)

You appear to be making an unfortunate assumption that what Chalmers and Peterdjones are talking about is crazy-assed unnecessary XML tags, as opposed to, y'know, regular old consciousness.

Replies from: DaFranker
comment by DaFranker · 2012-12-13T13:43:36.577Z · LW(p) · GW(p)

I'm not sure where my conception of p-zombies went wrong, then. P-zombies are assumed by the premise, if my understanding is correct, to behave physically exactly the same, down to the quantum level (and beyond if any exists), but to simply not have something being referred to as "qualia". This seems to directly imply that the "qualia" is generated neither by the physical matter, nor by the manner in which it interacts.

Like Eliezer, I believe physics and logic are sufficient to describe eventually everything, and so qualia and consciousness must be made of this physical matter and the way it interacts. Therefore, since the p-zombies have the same matter and the same interactions, they have qualia and consciousness.

What, then, is a non-p-zombie? Well, something that has "something more" (implied: than physics or logic) added into it. Since it's something exceptional that isn't part of anything else so far in the universe to my knowledge, calling it a "crazy-ass unnecessary XML tag" seems fitting, given its plausibility and comparative algorithmic complexity.

The point being that, under this conception of p-zombies and with my current (very strong) priors on the universe, non-p-zombies are either a silly mysterious question with no possible answer, or something supernatural on the same level of silly as atom-fiddling tiny green goblins and white-winged angels of Pure Mercy.

Replies from: nshepperd
comment by nshepperd · 2012-12-13T14:10:06.078Z · LW(p) · GW(p)

Huh...

That's a funny way of thinking about it.

But anyway, EY's zombie sequence was all about saying that if physics and math is everything, then p-zombies are a silly mysterious question. Because a p-zombie was supposed to be like a normal human down to the atomic level, but without qualia. Which is absurd if, as we expect, qualia are within physics and math. Hence there are no p-zombies.

I guess the point is that saying there are no non-p-zombies as a result of this is totally confusing, because it totally looks like saying no-one has consciousness.

(Tangentially, it probably doesn't help that apparently half of the philosophical world uses "qualia" to mean some supernatural XML tags, while the other half uses the word to mean just the-way-things-feel, a.k.a. consciousness. You seem to get a lot of arguments between those two groups, with the former arguing that qualia are nonsense, and the latter rebutting that "obviously we have qualia, or are you all p-zombies?!", resulting in a generally unproductive debate.)

Replies from: DaFranker
comment by DaFranker · 2012-12-13T14:16:21.455Z · LW(p) · GW(p)

I guess the point is that saying there are no non-p-zombies as a result of this is totally confusing, because it totally looks like saying no-one has consciousness.

Hah, yes. That seems to be partly a result of my inconsistent way of handling thought experiments that are broken or dissolved in the premises, as opposed to being rejected due to a later contradiction or nonexistent solution.

comment by Peterdjones · 2012-12-12T22:15:23.255Z · LW(p) · GW(p)

I'm also saying that it doesn't matter. The p-zombies are still conscious. They just don't have any added "conscious" XML tags as per some imaginary, crazy-assed unnecessary definition of "consciousness".

I have no idea what you are getting at. Please clarify.

Tangential to that point: I think any morality system which relies on an external supernatural thingy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.

That has no discernible relationship to anything I have said. Have you confused me with someone else?

Replies from: DaFranker
comment by DaFranker · 2012-12-12T22:29:51.372Z · LW(p) · GW(p)

I'm not sure where I implied that I'm getting at anything. We're p-zombies, we have no additional consciousness, and it doesn't matter because we're still here doing things.

The tangent was just an aside remark to clarify my position, and wasn't to target anyone.

We may already agree on the consciousness issue, I haven't actually checked that.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-12T22:33:34.115Z · LW(p) · GW(p)

we have no additional consciousness,

I have no idea what you mean by "additional consciousness" -- although, since you are not "getting at anything" you perhaps mean nothing.

We're p-zombies

That seems a bold and contentious claim to me. OTOH, you say you are not "getting at anything". Who knows?

and wasn't to target anyone.

OK. "Getting at something" doens't mean criticising someone, it means making a point.

Replies from: DaFranker
comment by DaFranker · 2012-12-12T22:43:02.052Z · LW(p) · GW(p)

In that sense, what I was getting at is that asking the question of whether we are p-zombies is redundant and irrelevant, since there's no reason to want or believe in the existence of non-p-zombies.

The core of my claim is basically that our consciousness is the logic and physics that goes on in our brain, not something else that we cannot see or identify. I obviously don't have conclusive proof or evidence of this, otherwise I'd be writing a paper and/or collecting my worldwide awards for it, but all (yes, all) other possibilities seem orders of magnitude less likely to me with my current priors and model of the world.

TL;DR: Consciousness isn't made of ethereal acausal fluid nor of magic, but of real physics and how those real physics interact in a complicated way.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-12T23:21:18.559Z · LW(p) · GW(p)

since there's no reason to want or believe in the existence of non-p-zombies.

I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.

The core of my claim is basically that our consciousness is the logic and physics that goes on in our brain, not something else that we cannot see or identify.

We can see and identify our consciousness from the inside. It's self-awareness. If you try to treat consciousness from the outside, you are bound to miss 99% of the point. None of this has anything to do with what consciousness is "made of".

Replies from: None, DaFranker
comment by [deleted] · 2012-12-13T15:19:45.948Z · LW(p) · GW(p)

I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.

I have a question about qualia from your perspective. If Omega hits you with an epiphenomenal anti-qualia hammer that injures your qualia and only your qualia, such that you essentially have no qualia (i.e., you are a P-zombie) for an hour until your qualia recover (when you are no longer a P-zombie), what, if anything, might that mean?

1: You'd likely notice something, because you have evidence that qualia exist. That implies you would notice if they vanished for about an hour, since you would no longer be getting that evidence for that hour.

2: You'd likely not notice anything, because if you did, a P-Zombie would not be just like you.

3: Epiphenomenal anti-qualia hammers can't exist. For instance, it might be impossible to affect your qualia and only your qualia, or perhaps it is impossible to make any reversible changes to qualia.

4: Something else?

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T15:30:19.460Z · LW(p) · GW(p)

Dunno, but try looking at this

Replies from: None
comment by [deleted] · 2012-12-13T17:26:07.337Z · LW(p) · GW(p)

I took a look. I found this quote:

This might seem reasonable at first - it is a strangely appealing image - but something very odd is going on here. My experiences are switching from red to blue, but I do not notice any change. Even as we flip the switch a number of times and my qualia dance back and forth, I will simply go about my business, not noticing anything unusual.

This seems to support an answer of:

2: You'd likely not notice anything, because if you did, a P-Zombie would not be just like you.

But if that's the case, it seems to contradict the idea of red qualia's existence even being a useful discussion. If you don't expect to notice when something vanishes, how do you have evidence that it exists or that it doesn't exist?

Now, to be fair, I think you can construct something where it is meaningful to talk about something that you have no evidence of.

If an asteroid goes outside our light cone, we might say: "We have no evidence that this asteroid still exists since, to our knowledge, evidence travels at the speed of light and this is outside our light cone. However, if we can invent FTL travel and then follow its path, we would expect it not to have winked out of existence right as it crossed our light cone, based on conservation of mass/energy."

That sounds like a comprehensible thing to say, possibly because it is talking about something's potential existence given the development of a future test.

And it does seem like you can also do that with religious epiphenomena, like souls, that we can't see right now.

"We have no evidence that our soul still exists since to our knowledge, people are perfectly intelligible without souls and we don't notice changes in our souls. However, if in the future we can invent soul detectors, we would expect to find souls in humans, based on religious texts."

That makes sense. It may be wrong, but if someone says that to me, my reaction would be "Yeah, that sounds plausible.", or perhaps "But how would you invent a soul detector?", much like my reaction to the FTL asteroid would be "Yeah, that sounds plausible.", or perhaps "But how would you invent FTL?"

I suppose, in essence, that these can be made to pay rent in anticipated experiences, but only under conditional circumstances, and those conditions may be impossible.

But for qualia, does this?

"We have no evidence that our qualia still exists since to our knowledge, P-zombies are perfectly intelligible without qualia and we don't notice changes in our qualia. However, if we can invent qualia detectors, we would expect to detect qualia in humans, based on thought experiments."

It doesn't in my understanding, because it seems like one of the key points of qualia is that we can notice it right now and that no one else can ever notice it. Except that according to one of its core proponents, we can't notice it either. I mean, I can form sentences about FTL or souls and future expectations that seem reasonable, but even those types of sentences seem to fail at talking about qualia properly.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T18:19:57.411Z · LW(p) · GW(p)

2: You'd likely not notice anything, because if you did, a P-Zombie would not be just like you.

P-zombies are behaviourally like me. That means I would not act as if I noticed anything. OTOH qualia are part of consciousness, so my conscious awareness would change. I would be compelled to lie, in a sense.

Replies from: Decius
comment by Decius · 2012-12-14T00:30:34.237Z · LW(p) · GW(p)

Would you lie then, or are you lying now? You have just said that your experience of qualia is not evidence even to yourself that you experience qualia.

Or is there a possible conscious awareness change that has zero effect? Can doublethink go to that metalevel?

comment by DaFranker · 2012-12-13T13:33:02.063Z · LW(p) · GW(p)

I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.

I must not be working with the right / same conception of p-zombies then, because to me qualia experience provides exactly zero bayesian evidence for or against p-zombies on its own.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T13:39:50.416Z · LW(p) · GW(p)

"A philosophical zombie or p-zombie in the philosophy of mind and perception is a hypothetical being that is indistinguishable from a normal human being except in that it lacks conscious experience, qualia, or sentience.[1] "--WP

I am of course taking a p-zombie to be lacking in qualia. I am not sure that alternatives are even coherent, since I don't see how other aspects of consciousness could go missing without affecting behaviour.

Replies from: DaFranker
comment by DaFranker · 2012-12-13T13:53:39.484Z · LW(p) · GW(p)

Wait, those premises just seem wrong and contradictory.

  1. To even work in the thought experiment, p-zombies live in a world with physics and logic identical to our own (with possibility of added components).
  2. In principle, qualia can either be generated by physics, logic, or something else (i.e. magic), or any combination thereof.
  3. There is no magic / something else.
  4. We have qualia, generated apparently only by physics and/or logic.
  5. p-zombies have the exact same physics and logic, but still no qualia.

???

My only remaining hypothesis is that p-zombies live in a world where the physics and logic are there, but there is also something else entirely magical that does not seem to exist in our universe that somehow prevents their qualia, by hypothesis. Very question-begging. Also unnecessarily complex. I am apparently incapable of working with thought experiments that defy the laws of logic by their premises.

Replies from: MugaSofer, Peterdjones
comment by MugaSofer · 2012-12-13T14:52:06.360Z · LW(p) · GW(p)

I am apparently incapable of working with thought experiments that defy the laws of logic by their premises.

That sounds like a serious problem. You should get that looked at.

comment by Peterdjones · 2012-12-13T14:17:09.604Z · LW(p) · GW(p)

You seem to have done a 180 shift from insisting that there are only zombies to saying there are no zombies.

3 There is no magic / something else. [..] I am apparently incapable of working with thought experiments that defy the laws of logic by their premises.

I don't know of any examples. Typically zombie gedankens do not take 3 as a premise, and conclude the opposite--that there is an extra non-physical ingredient as a conclusion.

Replies from: DaFranker
comment by DaFranker · 2012-12-13T14:27:19.592Z · LW(p) · GW(p)

You seem to have done a 180 shift from insisting that there are only zombies to saying there are no zombies.

Yes. My understanding of p-zombies was incorrect/different. If p-zombies have no qualia by the premises, as you've shown me a clear definition of, then we can't be p-zombies. (ignoring the details and assuming your experiences are like my own, rather than the Lords of the Matrix playing tricks on me and making you pretend you have qualia; I think this is a reasonable assumption to work with)

I don't know of any examples. Typically zombie gedankens do not take 3 as a premise, and conclude the opposite--that there is an extra non-physical ingredient as a conclusion.

So they write their bottom line in the premises of the thought experiment in a concealed manner? I'm almost annoyed enough to actually give them that question they're begging for so much.

Now E.Y.'s Zombie posts are starting to make a lot more sense.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T15:11:26.322Z · LW(p) · GW(p)

So they write their bottom line in the premises of the thought experiment in a concealed manner?

No. Leaving physicalism out as a premise is not the same as including non-physicalism as a premise. Likewise, concluding non-physicalism is not assuming it.

Replies from: DaFranker, Decius
comment by DaFranker · 2012-12-13T15:23:27.775Z · LW(p) · GW(p)

There must be non-physical things in order for there to be any difference between "us" and "p-zombies". This is a logical requirement. They posit that there effectively is a difference, in the premises right there, by asserting that p-zombies do not have qualia while we do.

  1. Premise: P-zombies have all the physical and logical stuff that we do.
  2. Premise: P-zombies DO NOT have qualia.
  3. Premise: We have qualia.
  4. Implied premise: This thought experiment is logically consistent.

The only way 4 is possible is if it is also implied that:

  5. Implied premise: Either us, or P-Zombies, have something magical that adds or removes qualia.

By the reasoning which prompts them to come up with the thought experiment in the first place, it cannot be the zombies that have an additional magical component, because this would contradict the implied premise that the thought experiment is logically consistent (and would question the usefulness and purpose of the thought experiment).

Therefore:

  • "Conclusion": We have something magical that gives us qualia.
Replies from: nshepperd, CCC, shminux, Peterdjones
comment by nshepperd · 2012-12-13T23:25:10.300Z · LW(p) · GW(p)

The p-zombie thought experiment is usually intended to prove that qualia is magical, yes. This is one of those unfortunate cases of philosophers reasoning from conceivability, apparently not realising that such reasoning usually only reveals stuff about their own mind.

I wouldn't say "qualia is magic" is actually a premise, but the argument involves assuming "qualia could be magical" and then invalidly dropping a level of "could".

In this case the "could" is an epistemic "could" -- "I don't know whether qualia is magical". Presumably, iff qualia is magical, then p-zombies are possible (i.e. exist in some possible world, modal-could), so we deduce that "it epistemic-could be the case that p-zombies modal-could exist". Then I guess because epistemic-could and modal-could feel like the same thing¹, this gets squished down to "p-zombies modal-could exist", which implies qualia is magical.

Anyway, the above seems like a plausible explanation of the reasoning, although I haven't actually talked to any philosophers to ask them if this is how it went.

¹ And could actually be (partially or completely) the same thing, since unless modal realism is correct, "possible worlds" don't actually exist anywhere. Or something. Regardless, this wouldn't make the step taken above legal, anyway. (Note that the previous "could" there is an epistemic "could"! :p)

comment by CCC · 2012-12-13T15:35:12.970Z · LW(p) · GW(p)

I had always understood that "We have something magical that gives us qualia" was one of the explicit premises of p-zombies (p-zombies being defined as that which lacks that magical quality, but appears otherwise human). One could then see p-zombies as a way to try to disprove the "something magical" hypothesis by contradiction - start with someone who doesn't have that magical something, continue on from there, and stop once you hit a contradiction.

Replies from: Peterdjones, DaFranker
comment by Peterdjones · 2012-12-13T15:53:23.173Z · LW(p) · GW(p)

"We have something magical that gives us qualia" was one of the explicit premises of p-zombies

Nope. E.g.:

  1. According to physicalism, all that exists in our world (including consciousness) is physical.
  2. Thus, if physicalism is true, a logically-possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. In particular, conscious experience must exist in such a possible world.
  3. In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this (so Chalmers argues) it follows that such a world is logically possible.
  4. Therefore, physicalism is false. (The conclusion follows from 2. and 3. by modus tollens.)

(Chalmers' argument according to WP)
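
A rough propositional rendering of that modus tollens, as a sketch only (the variable names are mine, and the conceivability-to-possibility step is compressed into a single boolean):

```python
from itertools import product

# P = "physicalism is true"
# Z = "a zombie world (a physical duplicate of ours, minus consciousness) is logically possible"
# Steps 1-2 above give P -> not-Z; step 3 asserts Z; step 4 is the modus tollens.
def premises_hold(P, Z):
    steps_1_and_2 = (not P) or (not Z)   # if physicalism, no zombie world is possible
    step_3 = Z                           # a zombie world is (claimed to be) possible
    return steps_1_and_2 and step_3

satisfying = [(P, Z) for P, Z in product([True, False], repeat=2) if premises_hold(P, Z)]
print(satisfying)   # [(False, True)] -- physicalism is false in every consistent case
```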

One could then see p-zombies as a way to try to disprove the "something magical" hypothesis by contradiction

Replies from: CCC
comment by CCC · 2012-12-13T16:00:37.714Z · LW(p) · GW(p)
  • Thus, if physicalism is true, a logically-possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. In particular, conscious experience must exist in such a possible world.
  • In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this (so Chalmers argues) it follows that such a world is logically possible.

These two steps are contradictory. In the first one, you state that a world physically indistinguishable from ours must include consciousness; then in the very next point, you consider a world physically indistinguishable from ours which does not include consciousness to be logically possible - exactly what the previous step claims is not logically possible.

Or am I misunderstanding something?

Replies from: Peterdjones, arundelo
comment by Peterdjones · 2012-12-13T16:04:23.261Z · LW(p) · GW(p)

The first includes "if physicalism is true"; the second doesn't.

Replies from: CCC
comment by CCC · 2012-12-13T16:09:41.898Z · LW(p) · GW(p)

Ah, right. Thanks, I somehow missed that.

So the second is then implicitly assuming that physicalism is not true; it seems to me that the whole argument is basically a longwinded way of saying "I can't imagine how consciousness can possibly be physical, therefore since I am conscious, physicalism is false".

One might as easily imagine a world physically indistinguishable from ours, but in which there is no gravity, and thence conclude that gravity is not physical but somehow magical.

Replies from: Peterdjones, DaFranker, MugaSofer
comment by Peterdjones · 2012-12-13T16:14:34.941Z · LW(p) · GW(p)

For some values of "imagine". Given relativity, it would be pretty difficult to coherently unplug gravity from mass, space and acceleration. It would be easier under Newton. I conclude that the unpluggability of qualia means we just don't have a relativity-grade explanation of them, an explanation that makes them deeply interwoven with other things.

Replies from: CCC, Decius
comment by CCC · 2012-12-13T16:25:46.480Z · LW(p) · GW(p)

I conclude that the unpluggability of qualia means we just don't have a relativity-grade explanation of them, an explanation that makes them deeply interwoven with other things.

That seems like a reasonable conclusion to draw.

comment by Decius · 2012-12-14T00:36:22.773Z · LW(p) · GW(p)

Not really. Just postulate something which does not have the same proportionality constant relating inertia to mass.

Replies from: Alejandro1
comment by Alejandro1 · 2012-12-14T00:59:45.401Z · LW(p) · GW(p)

Inertia and mass are the same thing. You probably meant "the same proportionality constant between mass and gravitational force", that is, imagine that the value of Newton's constant G was different.

But this (like CCC's grandparent post introducing the gravity analogy) actually goes in Chalmers' favor. Insofar as we can coherently imagine a different value of G with all non-gravitational facts kept fixed, the actual value of G is a new "brute fact" about the universe that we cannot reduce to non-gravitational facts. The same goes for consciousness with respect to all physical facts, according to Chalmers. He explicitly compares consciousness to fundamental physical quantities like mass and electric charge.

The problem is that one aspect of the universe being conceptually irreducible at the moment (which is all that such thought experiments prove) does not imply it will forever remain so when fundamental theory changes, as Peterdjones says. Newton could imagine inertia without gravity at all, but after Einstein we can't. Now we are able to imagine a different value of G, but maybe later we won't (and I can actually sketch a plausible story of how this might come to happen if anyone is interested).

Replies from: Decius
comment by Decius · 2012-12-15T00:23:12.991Z · LW(p) · GW(p)

No, I meant a form of matter which coexisted with current forms of matter but which was accelerated disproportionately to the amount of force exerted on it through the gravitational force. One such possibility would be something that is 'massless' in that it isn't accelerated by gravity but that has electric charge.

And by definition, the value of G is equal to 1, just like every other proportionality constant. I wasn't postulating that MG/NS^2 have a different value.

comment by DaFranker · 2012-12-13T16:12:27.121Z · LW(p) · GW(p)

One might as easily imagine a world physically indistinguishable from ours, but in which there is no gravity, and thence conclude that gravity is not physical but somehow magical.

Oooh, good one. I'm trying this if someone ever seriously tries to argue p-zombies with me.

comment by MugaSofer · 2012-12-17T20:45:17.609Z · LW(p) · GW(p)

a world physically indistinguishable from ours, but in which there is no gravity

Most versions of the Zombie Argument I've seen don't specify that the world be physically identical to ours, merely indistinguishable.

comment by DaFranker · 2012-12-13T15:41:25.162Z · LW(p) · GW(p)

Agreed.

I'm being told that this is not the case, but I'm struggling to understand how.

comment by Shmi (shminux) · 2012-12-13T23:59:46.606Z · LW(p) · GW(p)

Implied premise: Either us, or P-Zombies, have something magical that adds or removes qualia.

I'm curious about your definition of "magical". Is it the same as dualism?

Replies from: DaFranker
comment by DaFranker · 2012-12-14T14:39:31.462Z · LW(p) · GW(p)

Within this discussion, I've tried to consistently use "magic" as meaning "not physics or logic". Essentially, things that, given a perfect model of the (physical) universe that we live in, would be considered impossible or would go against all predictions for no cause that we can attribute to physics or logic or both.

So dualism is only one example, another could be intervention by the Lords of the Matrix (depending on how you draw boundaries for "universe that we live in"), and God or ontologically basic mental entities could be others.

So the assertion "we have something magical" is equivalent to "qualia is made of nonlogics" (although "nonlogics" is arguably still much more useful than "nonapples" as a conceptspace pointer).

Replies from: None
comment by [deleted] · 2012-12-14T14:44:53.010Z · LW(p) · GW(p)

Technically qualia is "non-physics": if a human with a brain that does thinking is physics + logic, then qualia is just the logic given the physics.

Replies from: DaFranker
comment by DaFranker · 2012-12-14T14:50:56.890Z · LW(p) · GW(p)

Errh, yes. Thank you. I think "nonlogics" is a decent fix, in light of this.

comment by Peterdjones · 2012-12-13T15:28:11.680Z · LW(p) · GW(p)

Errr, yes... that is the intended conclusion. But I don't think you can say an argument is question-begging because the intended conclusion follows from the premises taken jointly.

Replies from: DaFranker
comment by DaFranker · 2012-12-13T15:37:13.681Z · LW(p) · GW(p)

And how, pray tell, did they reach into the vast immense space of possible hypotheses and premises, and pluck out this one specific set of premises which just so happens that if you accept it completely, it inevitably must result in the conclusion that we have something magical granting us qualia?

The begging was done while choosing the premises, not in one of the premises individually.

Premise: All Bob Chairs must have seventy three thousand legs exactly.
Premise: Things we call chairs are illusions unless they are Bob Chairs.
Premise: None of the things we call chairs have exactly seventy three thousand legs.
Therefore, all of the things we call chairs are illusions and do not exist.

I seriously don't see how the above argument is any more reasonable, or any more or less question-begging, than the p-zombie argument I've made in the grandparent. No single premise here assumes the conclusion, right? So no problem!

ETA: Perhaps it's clearer if I just say that in order for the premises of the grandparent to be logically consistent, one must also assume as a premise that having the information patterns of the human brain without creating qualia is possible in the first place. This is the key point that is the source of the question-begging: it is assumed, implicitly as part of the premises, that the brain interactions do not create qualia; otherwise the statement "P-zombies have the same brain interactions that we do but no qualia" is directly equivalent to "A -> B, A, ¬B".

So for A (brain interactions identical to us), B (possess qualia), and C (has magic):

  1. (A -> B) <==> (¬B -> ¬A)
  2. ((C -> B) OR ((A AND C) -> B)) <==> ¬(A -> B)
  3. A
  4. ¬B

Refactor to one single "question-begging" premise:
((((C -> B) OR ((A AND C) -> B)) -> C) <==> ¬(¬B -> ¬A)) AND A AND ¬B

...therefore C.
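
A quick brute-force check of that single refactored premise (read with ordinary boolean connectives, which flattens out conceivability and logical possibility):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# The refactored premise above, with "(A AND C)" as plain conjunction.
def refactored_premise(A, B, C):
    left = implies(implies(C, B) or implies(A and C, B), C)
    right = not implies(not B, not A)
    return (left == right) and A and (not B)

sat = [(A, B, C) for A, B, C in product([True, False], repeat=3)
       if refactored_premise(A, B, C)]
print(sat)   # [(True, False, True)] -- the only satisfying assignment has C true
```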

Replies from: Peterdjones, Peterdjones
comment by Peterdjones · 2012-12-13T15:48:44.441Z · LW(p) · GW(p)

And how, pray tell, did they reach into the vast immense space of possible hypotheses and premises, and pluck out this one specific set of premises which just so happens that if you accept it completely, it inevitably must result in the conclusion that we have something magical granting us qualia?

I suppose they have the ability to formulate arguments that support their views. Are you saying that the honest way to argue is to fling premises together at random and see what happens?

The begging was done while choosing the premises, not in one of the premises individually.

Joint implication by premises is validity, not petitio principii.

Premise: All Bob Chairs must have seventy three thousand legs exactly. Premise: Things we call chairs are illusions unless they are Bob Chairs. Premise: None of the things we call chairs have exactly seventy three thousand legs. Therefore, all of the things we call chairs are illusions and do not exist.

That is an example of a No True Scotsman fallacy, or argument by tendentious redefinition. I don't see the parallel.

Replies from: DaFranker
comment by DaFranker · 2012-12-13T16:05:50.799Z · LW(p) · GW(p)

Eh. I'm bad at informal fallacies, apparently.

However, all they've done is pick specific premises that hide clever assumptions that logically must end up with their desired conclusion, without any reason in particular to believe that their premises make any sense. See the amateur logic I did in my edits of the grandparent.

It is very much assumed, by asserting the first, third and fourth premises, that qualia does not require brain interactions, as a prerequisite for positing the existence of p-zombies in the thought experiment.

comment by Peterdjones · 2012-12-13T16:20:36.852Z · LW(p) · GW(p)

Again: not assuming physicalism is not the same as assuming non-physicalism.

Replies from: DaFranker
comment by DaFranker · 2012-12-13T16:25:51.977Z · LW(p) · GW(p)

They assume (correctly) that if ¬B and A, then ¬(A -> B)

Then they assume ¬B and A.

...

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T16:33:17.312Z · LW(p) · GW(p)

You've flattened out all the stuff about conceivability and logical possibility.

Replies from: DaFranker
comment by DaFranker · 2012-12-13T16:37:40.521Z · LW(p) · GW(p)

I have, but unfortunately that's mostly because I don't know the formal nomenclature and little details of writing logical statements about conceivability and possibility.

I wouldn't really trust myself to write formal logic with conceivability and possibility without missing a step or strawmanning one of the premises at some point, with my currently very minimal understanding of that stuff.

comment by Decius · 2012-12-14T00:44:18.348Z · LW(p) · GW(p)

But putting in the statement that zombies have all of the physical and logical characteristics of people, but lack some other characteristic, requires that some non-physical characteristic exists. You can't say "I don't assume magic" and then assume a magician!

Replies from: MugaSofer, Peterdjones
comment by MugaSofer · 2012-12-14T12:39:37.471Z · LW(p) · GW(p)

Well, I understand that if consciousness were physical, but didn't affect our behavior, then removing that physical process would result in a zombie. That's usually the example given, not magic.

Replies from: Peterdjones, Decius
comment by Peterdjones · 2012-12-14T16:51:14.634Z · LW(p) · GW(p)

The usual p-zombie argument in the literature does not assume consciousness is entirely physical. Which is not the same as assuming it is non-physical...

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T15:04:02.365Z · LW(p) · GW(p)

Just to be clear, the fact that they talk about bridging laws or such doesn't mean they didn't generate the idea with magical thinking, or that it has a hope in hell of being actually true. It just means they managed to put a band-aid over that particular fallacy.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-15T17:49:11.939Z · LW(p) · GW(p)

So physicalism is a priori true, even when there is no physical explanation of some phenomenon?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T19:03:14.490Z · LW(p) · GW(p)

No comment. That's not what I said and I'm not saying it now. My point is that, while the p-zombie argument may have been formulated with "magical" explanations in mind, it does not directly reference them in the form usually presented.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-16T19:39:30.735Z · LW(p) · GW(p)

I see little point in ignoring what an argument states explicitly in favour of speculations about what the formulators had in mind. I also think that rhetorical use of the word "magic" is mind-killing. Quantum teleportation might seem magical to a 19th-century physicist, but it still exists.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-17T17:42:31.434Z · LW(p) · GW(p)

Which is why my point is that the argument makes no mention of "magic".

My point is that, while the p-zombie argument may have been formulated with "magical" explanations in mind, it does not directly reference them in the form usually presented.

comment by Decius · 2012-12-15T00:31:10.391Z · LW(p) · GW(p)

Removing something physical doesn't create a p-zombie; it creates a lobotomized person. If there were a form of brain damage that could not be detected by any means and had no symptoms, would it be a possible side effect of medication?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T16:08:55.327Z · LW(p) · GW(p)

Removing something physical doesn't create a p-zombie; it creates a lobotomized person.

Supposedly the argument works just as well as a counterfactual.

Replies from: Decius
comment by Decius · 2012-12-15T20:31:09.289Z · LW(p) · GW(p)

Compare two people who are physically identical except for one thing which doesn't change anything else at the micro or macro scale. Clearly, one of them is a p-zombie, because that one lacks qualia.

I still don't understand what the difference is between someone who has consciousness and someone who lacks it but is otherwise identical.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T21:38:16.484Z · LW(p) · GW(p)

With actual humans, p-zombies are almost certainly impossible. But imagine a world in which humans aren't controlled by their brains; the Zombie Fairy intervenes and makes them act as she predicts they would act. Now the Zombie Fairy is so good at her job that the people of this world experience controlling their own bodies; but in actuality, they have no effect on their actions (except by coincidence). If one of their brains was somehow altered without the Fairy's knowledge, they would discover their strange predicament (but be unable to tell anyone - they would live out their life as a silent observer). If one of their brains was destroyed without the Fairy's noticing, they would continue as a lifeless puppet, indistinguishable from regular humans - a p-zombie.

Now, it could be argued that the Fairy - who is what is usually referred to as a Zombie Master - is herself conscious, and as such these zombies are not true p-zombies. But this should give you some idea of what people are imagining when they say "p-zombie".

Replies from: Decius
comment by Decius · 2012-12-15T22:10:04.883Z · LW(p) · GW(p)

That scenario sounds identical to "everybody is a p-zombie".

Is there also a perception fairy, since perceiving the zombie fairy's influence doesn't create any physical changes in brain state or behavior?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T22:24:51.517Z · LW(p) · GW(p)

That scenario sounds identical to "everybody is a p-zombie".

It is! Unless of course you happen to be one of the poor people who exist solely to grant said zombies qualia.

Is there also a perception fairy, since perceiving the zombie fairy's influence doesn't create any physical changes in brain state or behavior?

Perception proceeds as normal in this counterfactual world. Of course, this world is not necessarily identical to our world, depending on how obvious the Perception Fairy is.

Replies from: Decius
comment by Decius · 2012-12-16T03:37:46.514Z · LW(p) · GW(p)

Does "As normal" mean that noticing the effects of the zombie fairy results in electrochemical changes in the brain that are different from those which occur in the absence of noticing those effects?

For some reason I can understand it better if I think of a sentient computer with standard input devices as things that it considers "real", and a debugger that reads and alters memory states at will, outside the loop of what the machine can know. Assuming that such a system could be self-aware in the same sense that I think I am, how would it respond if every time it asked a class of question, the answer was modified by 'magic'?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-16T12:27:55.175Z · LW(p) · GW(p)

Does "As normal" mean that noticing the effects of the zombie fairy results in electrochemical changes in the brain that are different from those which occur in the absence of noticing those effects?

...yes? How would one notice something without changing brain-state to reflect that?

For some reason I can understand it better if I think of a sentient computer with standard input devices as things that it considers "real", and a debugger that reads and alters memory states at will, outside the loop of what the machine can know. Assuming that such a system could be self-aware in the same sense that I think I am, how would it respond if every time it asked a class of question, the answer was modified by 'magic'?

I think you may have misunderstood. The fairy controls the bodies, but has perfectly predicted in advance what the human would have done. Thus whatever they try to do is simultaneously achieved by the fairy; but they have no effect on their bodies. The fairy doesn't alter their brains at all. If something else did alter their brain, but for some reason the fairy didn't notice and update her predictions, then they would become "out of sync" with their body.

Replies from: Decius
comment by Decius · 2012-12-16T20:48:59.819Z · LW(p) · GW(p)

Brain state is in principle detectable. If the fairy changes brain state, the fairy is detectable by physical means and thus physical.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-17T17:39:15.347Z · LW(p) · GW(p)

Oh, I see. Yes, the fairy is physical; the brains, however, could in principle be epiphenomenal (although they aren't, in this example.)

comment by Peterdjones · 2012-12-14T16:48:57.169Z · LW(p) · GW(p)

You need to specify whether your "putting in" is assuming or concluding. In general, it would help to refer to a concrete example of a p-zombie argument from a primary source.

Replies from: Decius
comment by Decius · 2012-12-15T00:36:00.642Z · LW(p) · GW(p)

Defining. A p-zombie is defined by all of the primary sources as having all of the physical qualities that humans have, but lacking something that humans have.

A magician is defined as a human that can do magic. Magicians (people identical to humans but with supernatural powers) don't prove anything about physicalism any more than p-zombies do, unless it can be shown that either are exemplified.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-15T17:54:38.119Z · LW(p) · GW(p)

unless it can be shown that either are exemplified.

The literature suggests that p-zombies can be significant if they are only conceptually possible. In fact, zombie theorists like Chalmers think they are naturalistically impossible and so cannot be exemplified. You may not like arguments from conceptual possibility, but he has argued for his views, whereas you have so far only expressed opinion.

Replies from: Decius
comment by Decius · 2012-12-15T20:16:44.127Z · LW(p) · GW(p)

Then the literature suggests that magicians can be significant if they are only conceptually possible. And the conceptual possibility of non-physicalism disproves physicalism.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-16T19:36:35.917Z · LW(p) · GW(p)

Then the literature suggests that magicians can be significant if they are only conceptually possible.

The literature does not talk about magicians.

Replies from: Decius
comment by Decius · 2012-12-16T20:45:30.418Z · LW(p) · GW(p)

Magicians are defined as physically identical to humans and p-zombies but they have magic. Magic has no physical effects, doesn't even trigger neurons, but humans with magic experience it and regular humans and p-zombies don't.

So it has all of the characteristics of qualia. Any evidence for qualia is also evidence for this type of magic.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-16T20:59:31.211Z · LW(p) · GW(p)

No. Qualia are not defined as epiphenomenal or non-physical.

Replies from: DaFranker, Decius
comment by DaFranker · 2012-12-17T20:28:26.311Z · LW(p) · GW(p)

Yes. The argument of the grandparent is logically consistent AFAICT.

P-zombies are (Non-self-contradictory) IFF qualia comes from nonlogics and nonphysics.

Qualia comes from nonlogics and nonphysics IFF nonlogics and nonphysics are possible. (this is trivially obvious)

P(Magicians | "nonlogics and nonphysics are possible") > P(Magicians | ¬"nonlogics and nonphysics are possible")

ETA: That last one is probably misleading / badly written. Is there a proper symbol for "No definite observation of X or ¬X", AKA the absence of this piece of evidence?

comment by Decius · 2012-12-17T19:59:42.480Z · LW(p) · GW(p)

If qualia are defined such that it is conceptually possible that one person can experience qualia while a physically identical person does not, then qualia are defined to be non-physical.

Replies from: Peterdjones, MugaSofer
comment by Peterdjones · 2012-12-18T10:56:20.230Z · LW(p) · GW(p)

No, they are just implied to be. There is an infinity of facts implied by the definition of "2", but they are not in the definition, which is finite.

comment by MugaSofer · 2012-12-17T20:09:27.040Z · LW(p) · GW(p)

Didn't we have this exact same argument? Even if qualia are generated by our (physical) brains, this doesn't mean that they couldn't counterfactually be epiphenomenal if something were reproducing the effects they have on our bodies.

Replies from: Decius
comment by Decius · 2012-12-17T20:16:18.425Z · LW(p) · GW(p)

The same could be said of cats: Even if cats are part of the physical universe, they could counterfactually be epiphenomenal if something was reproducing the effects they have on the world.

How does the argument apply to qualia and not to cats?

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2012-12-17T20:19:40.993Z · LW(p) · GW(p)

Gravity!

I think I'm seeing a pattern in this topic of discussion. And it is reminiscent of a certain single-sided geometric figure.

comment by MugaSofer · 2012-12-17T20:43:45.418Z · LW(p) · GW(p)

Well, if something is reproducing the effect of cats on the world we have no reason to posit cats as existing anyway, unless we are cats.

Replies from: Decius, shminux, Decius
comment by Decius · 2012-12-19T02:51:00.094Z · LW(p) · GW(p)

Well, if something is reproducing the effect of cats on the world we have no reason to posit cats as existing anyway, unless we are cats.

What about all of the observations of cats? Aren't they adequate reason to posit cats as existing?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T13:44:29.550Z · LW(p) · GW(p)

Um, no. Not if something is reproducing them.

Replies from: Decius
comment by Decius · 2012-12-19T20:13:10.657Z · LW(p) · GW(p)

Taboo 'reproducing'.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T21:05:13.633Z · LW(p) · GW(p)

Generating effects indistinguishable from the result of an ordinary cat - from reflected light to half-eaten mice. Of course, there are a few ... extra effects in there. So you know none of you are ordinary cats.

The epiphenomenal cats, on the other hand, are completely undetectable. Except to themselves.

Replies from: Decius
comment by Decius · 2012-12-20T00:45:59.549Z · LW(p) · GW(p)

I'm not granting cats a point of view for this discussion: they are something that we can agree clearly exist, and we can describe their boundaries with a fair degree of precision.

What do these 'extra effects' look like, and are they themselves proof that physicalism is wrong?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-20T19:35:05.712Z · LW(p) · GW(p)

The whole point was that if the cats have a point of view, then they have the information to posit themselves; even though an outside observer wouldn't.

Replies from: Decius
comment by Decius · 2012-12-20T23:59:31.210Z · LW(p) · GW(p)

Are you saying that qualia have a point of view, or are positing themselves?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-21T08:11:48.691Z · LW(p) · GW(p)

It's subjective information. I can't exactly show my qualia to you; I can describe them, but so can a p-zombie.

Didn't I say I wasn't going to discuss qualia with you until you actually knew what they were? Because you're starting to look like a troll here. Not saying you are one, but ...

Replies from: Decius
comment by Decius · 2012-12-21T16:42:19.016Z · LW(p) · GW(p)

So, you're saying that it is subjective whether qualia have a point of view, or the ability to posit themselves?

Because I have all of the observations needed to say that cats exist, even if they don't technically exist. I do not have the observations needed to say that there is a non-physical component to subjective experience.

Replies from: nshepperd, MugaSofer
comment by nshepperd · 2012-12-21T23:34:41.570Z · LW(p) · GW(p)

Who's talking about non-physical components? "Qualia" has more than one meaning.

comment by MugaSofer · 2012-12-21T22:44:09.324Z · LW(p) · GW(p)

Y'know, I did say I wasn't going to discuss qualia with you unless you knew what they were. Do some damn research, then come back here and start arguments about them.

comment by Shmi (shminux) · 2012-12-18T00:44:44.543Z · LW(p) · GW(p)

unless we are cats

or even if we were.

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2012-12-18T18:30:35.075Z · LW(p) · GW(p)

I'm very confused. Are you implying that experiencing qualia is no reason to posit that qualia exists, period?

Or maybe you're just saying "Hey, unless the cats have conscious self-aware minds that can experience cats, then they still can't either!" - which I took for granted and assumed the jump from there to "assuming cats have the required mental parts" was a trivial inference to make.

Replies from: shminux
comment by Shmi (shminux) · 2012-12-18T19:38:09.957Z · LW(p) · GW(p)

I just don't see the need for the exception in MugaSofer's statement, whether you agree with the statement itself or not.

Replies from: DaFranker
comment by DaFranker · 2012-12-18T19:41:30.806Z · LW(p) · GW(p)

So if something were shown to be reproducing the effect of human minds on the world, you would have no reason to posit yourself as existing anyway?

Replies from: shminux
comment by Shmi (shminux) · 2012-12-18T19:52:10.335Z · LW(p) · GW(p)

If you are an artifact of such a reproduction, would you call yourself existing in the same way as if you weren't?

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2012-12-18T19:55:21.391Z · LW(p) · GW(p)

I would.

That's a bit why I'm confused as to why you're (it seems to me) claiming we have no reason to posit self-existence in such a case.

Maybe your objection is that we should taboo and dissolve that whole "existing" thing?

Replies from: shminux
comment by Shmi (shminux) · 2012-12-18T20:02:38.815Z · LW(p) · GW(p)

I would.

OK, it's just that the statement "if something is reproducing the effect of cats on the world we have no reason to posit cats as existing" declares that if something is not really a "cat" the way we perceive it, but only an "effect of a cat", then it does not "exist". Ergo, if you are only an effect of a cat, you don't exist as a cat.

Maybe your objection is that we should taboo and dissolve that whole "existing" thing?

Wouldn't that be nice, but unfortunately EY-style realism and my version of instrumentalism seem to diverge at that definition.

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2012-12-18T20:16:14.471Z · LW(p) · GW(p)

Oh. Then we agree, I think, on the fundamentals of what makes a cat "exist" or not.

Does this also imply the same exist-"exist" perception problem with qualia in your model, or am I horribly misinterpreting your thoughts?

Replies from: shminux
comment by Shmi (shminux) · 2012-12-18T20:59:07.199Z · LW(p) · GW(p)

Re qualia, I don't understand what you are asking. The term means no more to me than a subroutine in a reasonably complex computer program, albeit currently run on a different substrate.

Replies from: DaFranker
comment by DaFranker · 2012-12-18T21:10:58.122Z · LW(p) · GW(p)

And, if I understand correctly, this subroutine exists (and is felt / has an effect on its host program) whether or not it "exists as qualia" in the particular sense that some clever arguer wants to define qualia as anything other than that subroutine. The fact that the subroutine has an effect is all that is required for it to exist in the first sense, while whether it is "the subroutine" or only a mimicking effect is only relevant for the second sense of "exist", which is irrelevant to you.

Is this an accurate description?

Replies from: shminux
comment by Shmi (shminux) · 2012-12-18T22:20:55.100Z · LW(p) · GW(p)

Pretty much, as I don't consider this "second sense" to be well defined.

comment by MugaSofer · 2012-12-18T20:35:50.237Z · LW(p) · GW(p)

if you are only an effect of a cat, you don't exist as a cat.

But I specifically stated you were a cat, not an effect of a cat.

Replies from: shminux
comment by Shmi (shminux) · 2012-12-18T20:51:51.269Z · LW(p) · GW(p)

I'm not sure how to tell the difference, or even if there is one.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T01:31:48.185Z · LW(p) · GW(p)

In this case, feel free to assume no-one ever tries to observe cat brains. The "simulation" only has to reproduce your actions, which it does with magic.

comment by MugaSofer · 2012-12-18T20:34:49.909Z · LW(p) · GW(p)

If you are an artifact of such a reproduction, would you call yourself existing in the same way as if you weren't?

Could you taboo the bolded phrase, please?

Replies from: shminux
comment by Shmi (shminux) · 2012-12-18T20:53:15.123Z · LW(p) · GW(p)

Sure. An artifact of such a reproduction = whatever you mean by "effect of cats" in your original statement.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T01:03:01.393Z · LW(p) · GW(p)

Oh, well there's your problem then. You're not part of "the effect of cats". That's stuff like air displacement, reflected light, purring, that sort of thing.

Replies from: shminux
comment by Shmi (shminux) · 2012-12-19T02:01:03.632Z · LW(p) · GW(p)

Where do effects of cats stop and cats begin?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T14:30:50.764Z · LW(p) · GW(p)

If you're using some nonstandard epistemology that doesn't distinguish between observations that point to something and the thing itself, then nothing. Otherwise, it's the difference between a liar and a reality warper.

Replies from: shminux, DSimon
comment by Shmi (shminux) · 2012-12-19T18:32:33.291Z · LW(p) · GW(p)

Looks like we have an insurmountable inferential distance problem both ways, so I'll stop here.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T19:41:02.292Z · LW(p) · GW(p)

Fair enough.

comment by DSimon · 2012-12-19T15:44:52.716Z · LW(p) · GW(p)

Careful: effects are not the same thing as observations.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T19:49:40.149Z · LW(p) · GW(p)

Interesting point. Observations are certainly effects, but you're right, not all effects are observations. Of course, the example wouldn't be hurt by my specifying that they only bother faking effects that will lead to observations ;)

Replies from: DaFranker
comment by DaFranker · 2012-12-19T19:59:42.815Z · LW(p) · GW(p)

the example wouldn't be hurt by my specifying that they only bother faking effects that will lead to observations ;)

I think it would. I think it's not the same example at all anymore.

Something that reproduces all effects of cats is effectively producing all the molecular interactions and neurons and flesh and blood and fur that we think are what produces our observations of cats.

On the other hand, something that only reproduces the effects that lead directly to observations is, in its simplest form, something that analyzes minds and finds out where to inject data into them to make these minds have the experiences of the presence of cats, and analyzes what other things in the world a would-be-cat would change, and just change those directly (i.e. if a cat would've drank milk and produced feline excrement, then milk disappears and feline excrement appears, and a human's brain is modified such that the experience of seeing a cat drink milk and make poo is simulated).

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T21:13:45.220Z · LW(p) · GW(p)

Something that reproduces all effects of cats is effectively producing all the molecular interactions and neurons and flesh and blood and fur

Not unless something is somehow interacting with their neurons, which I stated isn't happening for simplicity, and most of the time not for the blood or flesh.

On the other hand, something that only reproduces the effects that lead directly to observations is, in its simplest form, something that analyzes minds and finds out where to inject data into them to make these minds have the experiences of the presence of cats, and analyzes what other things in the world a would-be-cat would change, and just change those directly (i.e. if a cat would've drank milk and produced feline excrement, then milk disappears and feline excrement appears, and a human's brain is modified such that the experience of seeing a cat drink milk and make poo is simulated).

Oh, I meant the interactions occur where they would if the cat was real, but these increasingly-godlike fairies are lazy and don't bother producing them if their magic tells them it wouldn't lead to an observation.

Replies from: DaFranker
comment by DaFranker · 2012-12-19T21:26:25.891Z · LW(p) · GW(p)

My (admittedly lacking) understanding of information theory precludes any possibility of perfectly reproducing all effects of the presence of cats throughout the universe (or multiverse or whatever) without having, in some form or another, a perfect model or simulation of all the individual interactions of the base elements which cats are made of. Since such a model contains the same patterns which, when made of "physical matter", produce cats, it would essentially still produce cats.

So if there's a mechanism somewhere making sure that the reproduction is perfect, it's almost certainly (to my knowledge) "simulating" the cats in some manner, in which case the cats are in that simulation and perceive the same experiences they would if they were "really" there in atoms instead of being in the simulation.

If you posit some kind of ontologically basic entity that somehow magically makes a universal consistency check for the exact worldstates that could plausibly be computed if the cat were present, without actually simulating any cat, then sure... but I think that's also not the same problem anymore. And it requires accepting a magical premise.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T21:39:43.730Z · LW(p) · GW(p)

Oh, right. Yup, anything simulating you that perfectly is gonna be conscious - but it might be using magic. For example, perhaps they pull their data out of a parallel universe where you ARE real. Or maybe they use some black-swan technique you can't even imagine. They're fairies, for god's sake. And you're an invisible cat. Don't fight the counterfactual.

Replies from: DaFranker
comment by DaFranker · 2012-12-19T21:43:17.321Z · LW(p) · GW(p)

Haha, that one made me laugh. Yes, it's fighting the counterfactual a bit, but I think that this is one of the reasons why there was a chasm of misunderstandings in this and other sub-threads.

Anyway, I don't see any tangible things left to discuss here.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T22:06:33.525Z · LW(p) · GW(p)

Victory! Possibly for both sides, that could well be what's causing the chasm.

comment by MugaSofer · 2012-12-18T18:14:57.812Z · LW(p) · GW(p)

So you're saying we shouldn't believe in ourselves?

Replies from: shminux
comment by Shmi (shminux) · 2012-12-18T19:34:38.028Z · LW(p) · GW(p)

To paraphrase EY, What do you think you know [about yourself], and how do you think you know it?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-18T19:47:06.713Z · LW(p) · GW(p)

Oh, you mean we shouldn't assume we're the same as the other cats. Obviously there's some possibility that we're unique, but (assuming our body is "simulated" as well, obviously) it seems like all "cats" probably contain epiphenomenal cats as well. Do you think everyone else is a p-zombie? Obviously it's a remote possibility, but...

Replies from: shminux
comment by Shmi (shminux) · 2012-12-18T19:57:09.774Z · LW(p) · GW(p)

Oh, you mean we shouldn't assume we're the same as the other cats.

No, I did not mean that, unless one finds some good evidence supporting this additional assumption. My point was quite the opposite, that your statement "if something is reproducing the effect of cats on the world we have no reason to posit cats as existing" does not need a qualifier.

Do you think everyone else is a p-zombie?

Not sure why you bring that silly concept up...

Replies from: MugaSofer
comment by MugaSofer · 2012-12-18T20:07:36.081Z · LW(p) · GW(p)

No, I did not mean that, unless one finds some good evidence supporting this additional assumption. My point was quite the opposite, that your statement "if something is reproducing the effect of cats on the world we have no reason to posit cats as existing" does not need a qualifier.

Look, if all "cats" are actually magical fairies using their magic to reproduce the effect of cats, then I find myself as a cat whose effect on the world consists of a fairy pretending to be me so well that even I don't notice (except just now, obviously). Thus, for the one epiphenomenal cat I can know about - myself - I am associated with a "cat" that perfectly duplicates my actions. I can't check whether all "cats" have similar cats attached, since they would be epiphenomenal, but based on myself it seems likely that they do.

Do you think everyone else is a p-zombie?

Not sure why you bring that silly concept up...

Because the whole point of this cat metaphor was to make a point about p-zombies. That's what they are. They're p-zombies for cats instead of qualia.

Replies from: Decius
comment by Decius · 2012-12-19T02:49:24.135Z · LW(p) · GW(p)

Because the whole point of this cat metaphor was to make a point about p-zombies. That's what they are. They're p-zombies for cats instead of qualia.

Well, the point was that we only think things exist because we experience them, and therefore that anything which duplicates the experience is as real as the original artifact.

Suppose there were to be no cats, but only a magical fairy which knocks things from the mantlepiece and causes us to hallucinate in a consistent manner (among other things). There is no reason to consider that world distinguishable, even in principle, from the standard model.

Now, suppose that you couldn't see cats, but instead could see the 'cat fairy'. What is different now, assuming that the cat fairy is working properly and providing identical sensory input as the cats?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T14:51:53.758Z · LW(p) · GW(p)

There is no (observable) difference. That's the point. But presumably someone found a way to check for fairies.

Replies from: Decius
comment by Decius · 2012-12-19T20:16:31.017Z · LW(p) · GW(p)

If there is no observable (even in principle) difference, what's the difference? P-zombies are not intended or described as equivalent to humans.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T20:59:54.082Z · LW(p) · GW(p)

There are two differences: the presence of the fairy (which can be observed ... somehow) and the possibility of deviating from the mind. P-zombies are described as acting just like humans, but lack consciousness. "Cats" are generally like the human counterparts to p-zombies (who act just the same - by definition - but have epiphenomenal consciousness.)

TL;DR: it's observable in principle. But I, as author, have decreed that you aren't getting to check if your friends are cats as well as "cats".

Y'know, I'm starting to think this may have been a poor example. It's a little complicated.

Replies from: Decius
comment by Decius · 2012-12-20T00:59:31.760Z · LW(p) · GW(p)

Complicated isn't a bad thing;

If the fairy is observable despite being in principle not observable... I break.

If it is in principle possible to experience differently from what a quantum scan of the brain and body would indicate, but behave in accordance with physicalism ... how would you know if what you experienced was different from what you thought you experienced, or if what you thought was different from what you honestly claimed that you thought?

That would seem to be close to several types of abnormal brain function, where a person describes themselves as not in control of their body. I think those cases are better explained by abnormal internal brain communication, but further direct evidence may show that the 'reasoning' and 'acting' portions of some person are connected similarly enough to normal brains that they should be working the same way, but aren't. If there were a demonstrated case either of a pattern of neurons firing that corresponds to similar behavior in all typical brains but different behavior in a class of brains of people with such abnormal functioning, or of physically similar neurons firing differently under similar stimuli, then I would accept that as evidence that the fairy perceived by those people existed.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-20T19:46:56.267Z · LW(p) · GW(p)

Complicated isn't a bad thing;

Well, it's proving hard to explain.

If the fairy is observable despite being in principle not observable... I break.

It's observable. The cats are epiphenomenal, and thus unobservable, except to themselves.

If it is in principle possible to experience differently from what a quantum scan of the brain and body would indicate, but behave in accordance with physicalism ... how would you know if what you experienced was different from what you thought you experienced, or if what you thought was different from what you honestly claimed that you thought?

Pardon?

That would seem to be close to several types of abnormal brain function, where a person describes themselves as not in control of their body. I think those cases are better explained by abnormal internal brain communication, but further direct evidence may show that the 'reasoning' and 'acting' portions of some person are connected similarly enough to normal brains that they should be working the same way, but aren't. If there were a demonstrated case either of a pattern of neurons firing that corresponds to similar behavior in all typical brains but different behavior in a class of brains of people with such abnormal functioning, or of physically similar neurons firing differently under similar stimuli, then I would accept that as evidence that the fairy perceived by those people existed.

Well, if they can tell you what the problem is then they clearly have some control. More to the point, it is a known feature of the environment that all observed cats are actually illusions produced by fairies. It is a fact, although not generally known, that there are also epiphenomenal (although acted upon by the environment) cats; these exist in exactly the same space as the illusions and act exactly the same way. If you are a human, this is all fine and dandy, if bizarre. But if you are a sentient cat (roll with it) then you have evidence of the epiphenomenal cats, even though this evidence is inherently subjective (since presumably the illusions are also seemingly sentient, in this case.)

Replies from: Decius
comment by Decius · 2012-12-21T00:06:44.307Z · LW(p) · GW(p)

If it is in principle possible to experience differently from what a quantum scan of the brain and body would indicate, but behave in accordance with physicalism ... how would you know if what you experienced was different from what you thought you experienced, or if what you thought was different from what you honestly claimed that you thought?

Pardon?

How could you tell if you were experiencing something differently from the way a p-zombie would (or, if you are a p-zombie, if you were experiencing something differently from the way a human would)?

But if you are a sentient cat (roll with it) then you have evidence of the epiphenomenal cats, even though this evidence is inherently subjective (since presumably the illusions are also seemingly sentient, in this case.)

In every meaningful way, the cat fairy is a cat. There is no way for an epiphenomenal sentient cat to differentiate itself from a cat fairy, nor any way for a cat fairy to differentiate itself from whatever portions of 'cats' it controls (without violating the constraints on cat fairy behavior). Of course, there's also the conceivability of epiphenomenal sentient ghosts which cannot have any effect on the world but still observe. (That's one of my death nightmares—remaining fully perceptive and cognitive but unable to act in any way.)

Replies from: nshepperd, Eugine_Nier, MugaSofer
comment by nshepperd · 2012-12-21T08:35:50.919Z · LW(p) · GW(p)

You seem to be somewhat confused about the notion of a p-zombie. A p-zombie is something physically identical to a human, but without consciousness. A p-zombie does not experience anything in any way at all. P-zombies are probably self-contradictory.

comment by Eugine_Nier · 2012-12-21T03:48:44.604Z · LW(p) · GW(p)

How could you tell if you were experiencing something differently from the way a p-zombie would (or, if you are a p-zombie, if you were experiencing something differently from the way a human would)?

I am experiencing something, therefore I am not a p-zombie.

Replies from: Decius
comment by Decius · 2012-12-21T06:01:07.162Z · LW(p) · GW(p)

Consider the possibility that you are not experiencing everything that humans do. Can you provide any evidence, even to yourself, that you are? Could a p-zombie provide that same evidence?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-22T07:42:13.033Z · LW(p) · GW(p)

Consider the possibility that you are not experiencing everything that humans do.

How is this relevant? My point is that I'm experiencing what I'm experiencing.

Replies from: Decius, Osuniev
comment by Decius · 2012-12-22T07:57:12.839Z · LW(p) · GW(p)

I'm experiencing what I'm experiencing.

And p-zombies are experiencing what they're experiencing. You can't use a similarity to distinguish.

Replies from: nshepperd
comment by nshepperd · 2012-12-22T09:34:13.927Z · LW(p) · GW(p)

P-zombies aren't experiencing anything. By definition.

Replies from: Kawoomba, Decius
comment by Kawoomba · 2012-12-22T09:46:34.987Z · LW(p) · GW(p)

Those two statements are both tautologically true and do not contradict one another.

comment by Decius · 2012-12-22T15:38:03.727Z · LW(p) · GW(p)

What would be different, to you, if you weren't experiencing anything, but were physically identical?

Replies from: Eugine_Nier, nshepperd
comment by Eugine_Nier · 2012-12-23T01:44:46.019Z · LW(p) · GW(p)

I wouldn't be experiencing anything.

Replies from: Decius
comment by Decius · 2012-12-23T04:54:06.488Z · LW(p) · GW(p)

I thought it had been established that wasn't a difference.

comment by nshepperd · 2012-12-22T21:45:11.550Z · LW(p) · GW(p)

Are you asking what I would experience? Because I wouldn't. Not to mention that such a thing can't happen if, as I expect, subjective experience arises from physics.

Replies from: Decius
comment by Decius · 2012-12-22T22:33:46.382Z · LW(p) · GW(p)

Sorry, I thought you were disagreeing with me.

comment by Osuniev · 2012-12-23T02:36:59.448Z · LW(p) · GW(p)

"How is this relevant?"

It is relevant because if you cannot find any experimental differences between you and a you NOT experiencing, then maybe there is no such difference.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-24T06:02:30.386Z · LW(p) · GW(p)

if you cannot find any experimental differences between you and a you NOT experiencing

I cannot present you with evidence that I am experiencing, except maybe by analogy with yourself. I, however, know that I experience because I experience it.

comment by MugaSofer · 2012-12-21T08:26:32.794Z · LW(p) · GW(p)

How could you tell if you were experiencing something differently from the way a p-zombie would (or, if you are a p-zombie, if you were experiencing something differently from the way a human would)?

Because p-zombies aren't conscious. By definition.

In every meaningful way, the cat fairy is a cat. There is no way for an epiphenomenal sentient cat to differentiate itself from a cat fairy, nor any way for a cat fairy to differentiate itself from whatever portions of 'cats' it controls (without violating the constraints on cat fairy behavior). Of course, there's also the conceivability of epiphenomenal sentient ghosts which cannot have any effect on the world but still observe. (That's one of my death nightmares—remaining fully perceptive and cognitive but unable to act in any way.)

Well, the cat does have an associated cat fairy. So, since the only cat fairy whose e-cat it could observe (its own) has one, I think it should rightly conclude that all cat fairies have cats. But yes, epiphenomenal sentient "ghosts" are possible, and indeed the p-zombie hypothesis requires that the regular humans are in fact such ghosts. They just don't notice. Yes, there are people arguing this is true in the real world, although not all of them have worked out the implications.

Replies from: Decius
comment by Decius · 2012-12-21T16:37:22.659Z · LW(p) · GW(p)

What would be the subjective difference to you if you weren't 'conscious'?

Replies from: None
comment by [deleted] · 2012-12-21T16:40:58.272Z · LW(p) · GW(p)

To have a subjective anything, you have to be conscious. By definition, if you consider whether you're a P-zombie, you're conscious and hence not one.

Replies from: Decius
comment by Decius · 2012-12-22T02:50:08.536Z · LW(p) · GW(p)

Now conceive of something which is similar to consciousness, but distinct; like consciousness, it has no physical effects on the world, and like consciousness, anyone who has it experiences it in a manner distinct from their physicality. Call this 'magic', and people who possess it 'magi'.

What aspect does magic lack that consciousness has, such that a p-zombie cannot consider if it is conscious, but a human can ask if they are a magi?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-22T17:08:59.461Z · LW(p) · GW(p)

Who said consciousness has no effects on the physical world? Apart from those idiots making the p-zombie argument that is. Pretty much everyone here thinks that's nonsense, including me and, statistically, probably srn347 (although you never know, I guess.)

Regarding your Magi, if it affects their brain, it's not epiphenomenal. So there's that.

Replies from: Decius
comment by Decius · 2012-12-22T21:44:32.934Z · LW(p) · GW(p)

The point I am trying to make is that P-zombies are nonsensical. I'm demonstrating that they are exactly as sensible as an obviously absurd thing.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-23T14:20:31.181Z · LW(p) · GW(p)

And the point I am trying to make is that p-zombies are not only a coherent idea, but compatible with human-standard brains as generally modelled on LW. That they don't in any way demonstrate the point they were intended to make is quite another thing.

Replies from: wedrifid, Decius
comment by wedrifid · 2012-12-24T00:18:47.109Z · LW(p) · GW(p)

And the point I am trying to make is that p-zombies are not only a coherent idea, but compatible with human-standard brains as generally modelled on LW.

Yes, it merely requires redefining things like 'conscious' or 'experience' (whatever you decide p-zombies do not have) to be something epiphenomenal and incidentally non-existent.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-24T05:26:48.857Z · LW(p) · GW(p)

Um, could you please explain this comment? I think there's a fair chance you've stumbled into the middle of this discussion and don't know what I'm actually talking about (except that it involves p-zombies, I guess.)

Replies from: wedrifid
comment by wedrifid · 2012-12-24T06:41:55.312Z · LW(p) · GW(p)

I think there's a fair chance you've stumbled into the middle of this discussion and don't know what I'm actually talking about (except that it involves p-zombies, I guess.)

I know only the words spoken, not those intended. (And I concluded early in the conversation that the entire subthread should be truncated and replaced with a link. So much confusion and muddled thinking!)

Replies from: MugaSofer
comment by MugaSofer · 2012-12-24T15:54:03.074Z · LW(p) · GW(p)

Seems reasonable. For reference, then, I suggested the analogous thought experiment of fairies using magic to reproduce all the effects of cats on the environment. Also, there are epiphenomenal ghost cats that occupy the same space and are otherwise identical to the fairies' illusions, down to the subatomic level. An outside observer would, of course, have no reason to postulate these epiphenomenal cats, but if the cats themselves were somehow conscious, they would.

This was intended to help with understanding p-zombies, since it avoids the ... confusing ... aspects.

Replies from: wedrifid
comment by wedrifid · 2012-12-24T22:33:06.118Z · LW(p) · GW(p)

This was intended to help with understanding p-zombies, since it avoids the ... messy ... aspects.

Like brains and rotting flesh?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-25T13:21:48.313Z · LW(p) · GW(p)

Whoops. Changed it to "confusing".

comment by Decius · 2012-12-23T16:39:11.588Z · LW(p) · GW(p)

How is it that something which is physically identical to a human and has a physical difference from a human is a coherent concept?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-24T04:52:26.495Z · LW(p) · GW(p)

It's not. I meant that we can replace the soul or whatever with a neurotypical human brain and still get a coherent thought experiment.

Replies from: Decius
comment by Decius · 2012-12-24T05:26:05.620Z · LW(p) · GW(p)

Were you saying that the results of that experiment were completely uninteresting?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-24T15:56:07.896Z · LW(p) · GW(p)

Well, I personally find it an interesting concept. It's basically a reformulation of standard Sequences stuff, though, so it shouldn't be surprising, at least 'round here.

comment by Decius · 2012-12-17T22:28:14.395Z · LW(p) · GW(p)

How does that not apply to qualia, unless we are qualia?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-17T22:44:17.295Z · LW(p) · GW(p)

We experience qualia. Just like the cats experience being cats.

EDIT: are you arguing we have insufficient evidence to posit qualia?

Replies from: Decius
comment by Decius · 2012-12-18T00:13:01.338Z · LW(p) · GW(p)

I experience qualia in exactly the same sense that I experience cats.

All of the evidence I have to posit qualia is due to effects that qualia have on me. Likewise for cats.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-18T17:49:33.343Z · LW(p) · GW(p)

I'm pretty sure this comment means you don't understand the concept of "qualia".

Replies from: Decius
comment by Decius · 2012-12-19T00:36:29.553Z · LW(p) · GW(p)

How do you experience cats?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T12:02:22.362Z · LW(p) · GW(p)

Unless you actually understand what "qualia" means, I'm not going to bother discussing the topic with you. If you have, in fact, done the basic research necessary to discuss p-zombies, then I'm probably misinterpreting you in some way. But I don't think I am.

Replies from: Decius
comment by Decius · 2012-12-19T19:57:05.116Z · LW(p) · GW(p)

Oddly enough, I feel that if you had done the basic research and explored the same lines of thought I did, you would agree with me.

My questions, by the way, aren't rhetorical. I'm trying to pin down where your understanding differs from mine.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-19T21:27:14.699Z · LW(p) · GW(p)

My questions, by the way, aren't rhetorical.

Neither are mine.

comment by Decius · 2012-12-14T00:23:35.575Z · LW(p) · GW(p)

I'm saying that there is no difference between a p-zombie and the alternative.

comment by Manfred · 2012-12-10T16:32:16.081Z · LW(p) · GW(p)

Though on the other hand, we don't have room to take everything serious dudes say seriously - too many dudes, not enough time.

If a problem happens not to exist, then I suppose one will just have to nerve oneself and not see it. Yes, there are non-hard problems of consciousness, where you explain how a certain process or feeling occurs in the brain, and sure, there are some non-hard problems I'd wave away with "well, that's solved by psychology somewhere." But no amount of that has any bearing on the "hard problem," which will remain in scare quotes as befits its effective nonexistence - finding a solution to a problem that is not a problem would be silly.

(EDIT: To clarify, I am not saying qualia do not exist, I am saying some mysterious barrier of hardness around qualia does not exist.)

Replies from: Peterdjones, None
comment by Peterdjones · 2012-12-10T17:01:42.988Z · LW(p) · GW(p)

If a problem happens not to exist, then I suppose one will just have to nerve oneself and not see it.

OK. Then demonstrate that the HP does not exist, in terms of Chalmers's specification, by showing that we do have a good explanation.

Replies from: Manfred
comment by Manfred · 2012-12-10T20:04:11.015Z · LW(p) · GW(p)

Well, said Achilles, everybody knows that if you have A and B and "A and B imply Z," then you have Z.

How an Algorithm Feels From Inside.
The Visual Cortex is Used to Imagine
Stimulating the Visual Cortex Makes the Blind See

This sort of thing is sufficient for me, like Achilles' explanations were enough for Achilles. But if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenomenalism, this is literally what is going on), then gosh, it would seem like no matter what explanations you heard, the hard problem wouldn't go away - so it must be either a proof of dualism or a mistake.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T20:59:02.374Z · LW(p) · GW(p)

This sort of thing is sufficient for me

But not for me. Indeed, I am pretty sure none of those articles is even intended as a solution to the HP. And if they are, why not publish them in a journal and become famous?

How an Algorithm Feels From Inside.

Intended as a solution to FW.

Stimulating the Visual Cortex Makes the Blind See

So? Every living qualiaphile accepts some sort of relationship between brain states and qualia.

if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenomenalism, this is literally what is going on),

So? I said nothing about epiphenomenalism

Replies from: Manfred
comment by Manfred · 2012-12-10T21:49:40.535Z · LW(p) · GW(p)

So? I said nothing about epiphenomenalism

The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.

Other than that, I don't have much to respond to here, since you're just going "So?"

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T22:01:00.940Z · LW(p) · GW(p)

The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.

I can't find the posting, and I don't see how the MPF would relate to e12ism anyway.

How did you expect to convince me? I am familiar with all the stuff you are quoting, and I still think there is an HP. So do many people.

comment by [deleted] · 2012-12-10T16:35:35.393Z · LW(p) · GW(p)

For practical reasons, I think that's fair enough...so long as we're clear that the above is a fully general counterargument.

Replies from: Manfred
comment by Manfred · 2012-12-10T17:01:18.725Z · LW(p) · GW(p)

Right. I have not said any actual arguments against the hard problem of consciousness.

EDIT: Was true when I said it, then I replied to PeterD, not that it worked (as I noted in that very post, the direct approach has little chance against a confusion)

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T17:05:23.195Z · LW(p) · GW(p)

Argument for the importance of the HP: it is about the only thing that would motivate an educated 21st century person into doubting physicalism.

comment by Richard_Kennaway · 2012-12-10T15:56:05.509Z · LW(p) · GW(p)

The rest mostly go, "this could only be explained by a mysterious substance, there are no mysterious substances, therefore this does not exist."

Replies from: Peterdjones
comment by Peterdjones · 2012-12-10T16:06:44.866Z · LW(p) · GW(p)

I don't know why you guys keep harping about substances. Substance dualism has been out of favour for a good century.

Replies from: Manfred
comment by Manfred · 2012-12-10T16:54:32.726Z · LW(p) · GW(p)

Sorry, I was misusing terminology. Any ignorance-generating / ignorance-embodying explanation (e.g. quantum mysticism / elan vital) uses what I'm calling "mysterious substance."

Basically I'm calling "quantum" a mysterious substance (for the quantum mystics), even though it's not like you can bottle it.

Maybe I should have said "mysterious form?" :D

comment by Peterdjones · 2012-12-10T15:51:45.498Z · LW(p) · GW(p)

There is a Hard Problem, because there is basically no (non-eliminative) science or technology of qualia at all. We can get a start on the problem of building cognition, memory and perception into an AI, but we can't get a start on writing code for Red or Pain or Salty. You can tell there is basically no non-eliminative science or technology of qualia because the best LWers can quote is Dennett's eliminative theory.

comment by Rob Bensinger (RobbBB) · 2012-12-11T02:38:20.536Z · LW(p) · GW(p)

Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories

Do you have evidence of this? The PhilPapers survey suggests that only 56.5% of philosophers identify as 'physicalists,' and 59% think that zombies are conceivable (though most of these think zombies are nevertheless impossible). It would also help if you explained what you mean by 'the theory of qualia.'

though it's Sellars' "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.

Sellars' argument, I think, rests on a few confusions and shaky assumptions. I agree this argument is still extremely widely cited, but I think that serious epistemologists no longer consider it conclusive, and a number reject it outright. Jim Pryor writes:

These anti-Given arguments deserve a re-examination, in light of recent developments in the philosophy of mind. The anti-Given arguments pose a dilemma: either (i) direct apprehension is not a state with propositional content, in which case it's argued to be incapable of providing us with justification for believing any specific proposition; or (ii) direct apprehension is a state with propositional content. This second option is often thought to entail that direct apprehension is a kind of believing, and hence itself would need justification. But it ought nowadays to be very doubtful that the second option does entail such things. These days many philosophers of mind construe perceptual experience as a state with propositional content, even though experience is distinct from, and cannot be reduced to, any kind of belief. Your experiences represent the world to you as being a certain way, and the way they represent the world as being is their propositional content. Now, surely, its looking to you as if the world is a certain way is not a kind of state for which you need any justification. Hence, this construal of perceptual experience seems to block the step from 'has propositional content' to 'needs justification'. Of course, what are 'apprehended' by perceptual experiences are facts about your perceptual environment, rather than facts about your current mental states. But it should at least be clear that the second horn of the anti-Given argument needs more argument than we've seen so far.

Replies from: None
comment by [deleted] · 2012-12-11T03:04:21.610Z · LW(p) · GW(p)

Do you have evidence of this?

I mentioned in a subsequent post that there was an ambiguity in my original claim. Qualia have been used by philosophers to do two different jobs: 1) as the basis of the hard problem of consciousness, and 2) as the foundation of foundationalist theories of empiricism. Sellars' essay, in particular, is aimed at (2), not (1), and the mention of 'qualia' to which I was responding was probably a case of (1). The question of physicalism and the conceivability of p-zombies isn't directly related to the epistemic role of qualia, and one could reject classical empiricism on the basis of Sellars' argument while still believing that the reality of irreducible qualia speak against physicalism and for the conceivability of p-zombies.

Sellars' argument, I think, rests on a few confusions and shaky assumptions.

That may be, it's a bit outside my ken. Thanks for posting the quote. I won't go trying to defend the overall organization of EPM, which is fairly labyrinthine, but I have some confidence in its critiques: I'd need more familiarity with Pryor's work to level a serious criticism, but on the basis of your quote he seems to me to be missing the point: Sellars is not arguing that something's appearing to you in a certain way is a state (like a belief) which requires justification. He argues that it is not tenable to think of this state as being independent of (e.g. a foundation for) a whole battery of concepts including epistemic concepts like 'being in standard perceptual conditions'. Looking a certain way is posterior (a sophistication of) its being that way. Looking red is posterior to simply being red. And this is an attack on the epistemic role of qualia insofar as this theory implies that 'looking red' is in some way fundamental and conceptually independent.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-11T03:21:20.897Z · LW(p) · GW(p)

Sellars is not arguing that something's appearing to you in a certain way is a state (like a belief) which requires justification. He argues that it is not tenable to think of this state as being independent of (e.g. a foundation for) a whole battery of concepts including epistemic concepts like 'being in standard perceptual conditions'. Looking a certain way is posterior (a sophistication of) its being that way. Looking red is posterior to simply being red. And this is an attack on the epistemic role of qualia insofar as this theory implies that 'looking red' is in some way fundamental and conceptually independent.

Yes, that is the argument. And I think its soundness is far from obvious, and that there's a lot of plausibility to the alternative view. The main problem is that this notion of 'conceptual content' is very hard to explicate; often it seems to be unfortunately confused with the idea of linguistic content; but do we really think that the only things that should add or take away any of my credence in any belief are the words I think to myself? In any case, Pryor's paper Is There Non-Inferential Justification? is probably the best starting point for the rival view. And he's an exceedingly lucid thinker.

Replies from: None
comment by [deleted] · 2012-12-11T16:16:13.365Z · LW(p) · GW(p)

I'll read the Pryor article, in more detail, but from your gloss and from a quick scan, I still don't see where Pryor and Sellars are even supposed to disagree. I think, without being totally sure, that Sellars would answer the title question of Pryor's article with an emphatic 'yes!'. Experience of a red car justifies belief that the car is red. While experience of a red car also presupposes a battery of other concepts (including epistemic concepts), these concepts are not related to the knowledge of the redness of the car as premises to a conclusion.

Here's a quote from EPM p148, which illustrates that the above is Sellars' view (italics mine). Note that in the following, Sellars is sketching the view he wants to attack:

One of the forms taken by the Myth of the Given is the idea that there is, indeed must be, a structure of particular matter of fact such that (a) each fact can not only be noninferentially known to be the case, but presupposes no other knowledge either of particular matter of fact, or of general truths; and (b) such that the noninferential knowledge of facts belonging to this structure constitutes the ultimate court of appeals for all factual claims -- particular and general -- about the world. It is important to note that I characterized the knowledge of fact belonging to this stratum as not only noninferential, but as presupposing no knowledge of other matter of fact, whether particular or general. It might be thought that this is a redundancy, that knowledge (not belief or conviction, but knowledge) which logically presupposes knowledge of other facts must be inferential. This, however, as I hope to show, is itself an episode in the Myth.

So Sellars wants to argue that empiricism has no foundation because experience (as an epistemic success term) is not possible without knowledge of a bunch of other facts. But it does not follow from this that a) Sellars thinks knowledge derived from experience is inferential, or b) Sellars thinks non-inferential knowledge as such is a problem.

But that said, I haven't read enough of Pryor's paper(s) to understand his critiques. I'll take a look.

comment by Peterdjones · 2012-12-10T15:16:10.974Z · LW(p) · GW(p)

I'm not at all convinced that all LWers have been persuaded that they don't have qualia.

Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories

Amongst some philosophers.

it's Sellars' "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.

Hmmm. The only enthusiast for Sellars I know finds it necessary to adopt Direct Realism, which is a horribly flawed theory. In fact most of the problems with it consist of reconciling it with a naturalistic world view.

Replies from: None
comment by [deleted] · 2012-12-10T15:28:06.554Z · LW(p) · GW(p)

I'm not at all convinced that all LWers have been persuaded that they don't have qualia.

Well, it's probably important to distinguish between two uses to which the theory of qualia is put: first as the foundation of foundationalist empiricism, and second as the basis for the 'hard problem of consciousness'. Foundationalist theories of empiricism are largely dead, as is the idea that qualia are a source of immediate, non-conceptual knowledge. That's the work that Sellars (a strident reductivist and naturalist) did.

Now that I read it again, I think my original post was a bit misleading because I implied that the theory of qualia as establishing the 'hard problem' is also a dead theory. This is not the case, and important philosophers still defend the hard problem on these grounds. Mea Culpa.

The only enthusiast for Sellars I know finds it necessary to adopt Direct Realism, which is a horribly flawed theory. In fact most of the problems with it consist of reconciling it with a naturalistic world view.

Once direct realism as an epistemic theory is properly distinguished from a psychological theory of perception, I think it becomes an extremely plausible view. I think I'd probably call myself a direct realist.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-12-12T04:00:09.256Z · LW(p) · GW(p)

Foundationalist theories of empiricism are largely dead, as is the idea that qualia are a source of immediate, non-conceptual knowledge.

I'd have said that qualia are not a source of unprocessed knowledge, but the processing isn't conceptual.

I take 'conceptual' to mean thought which is at least somewhat conscious and which probably can be represented verbally. What do you mean by the word?

Replies from: None
comment by [deleted] · 2012-12-12T04:45:23.997Z · LW(p) · GW(p)

I take 'conceptual' to mean thought which is at least somewhat conscious and which probably can be represented verbally. What do you mean by the word?

I mean 'of such a kind as to be a premise or conclusion in an inference'. I'm not sure whether I agree with your assessment or not: if by 'non-conceptual processing' you mean to refer to something like a physiological or neurological process, then I think I disagree (simply because physiological processes can't be any part of an inference, even granting that often times things that are part of an inference are in some way identical to a neurological process).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-12-13T05:45:03.840Z · LW(p) · GW(p)

I think we're looking at qualia from different angles. I agree that the process which leads to qualia might well be understood conceptually from the outside (I think that's what you meant). However, I don't think there's an accessible conceptual process by which the creation of qualia can be felt by the person having the qualia.

comment by MrMind · 2012-12-10T15:43:53.950Z · LW(p) · GW(p)

I don't know what others accept as a solution to the qualia problem, but I've found the explanations in "How an algorithm feels from the inside" quite spot on. For me, the old sequences have solved the qualia problem, and from what I see the new sequence presupposes the same.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-18T11:09:53.664Z · LW(p) · GW(p)

I've found the explanations in "How an algorithm feels from the inside" quite spot on.

I'm not sure I understand what it means for an algorithm to have an inside, let alone for an algorithm to "feel" something from the inside. "Inside" is a geometrical concept, not an algorithmical one.

Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be.

Replies from: MrMind
comment by MrMind · 2012-12-18T16:17:06.203Z · LW(p) · GW(p)

I'm not sure I understand what it means for an algorithm to have an inside, let alone for an algorithm to "feel" something from the inside. "Inside" is a geometrical concept, not an algorithmical one.

Well, that's just the title, you know? The original article was talking about cognitive algorithms (an algorithm, not any algorithm). Unless you assume some kind of un-physical substance having a causal effect on your brain and your continued existence after death, you are what your cognitive algorithm feels like when it's run on your brain wetware.

"Inside" is a geometrical concept, not an algorithmical one.

That's not true: every formal system that can produce a model of a subset of its axioms might be considered as having an 'inside' (as in set theory: constructible models are called 'inner models'), and that's just one possible definition.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-18T16:40:37.779Z · LW(p) · GW(p)

The original article was talking about cognitive algorithms (an algorithm, not any algorithm).

So what's the difference between cognitive algorithms with the ability of "feeling from the inside" and the non-cognitive algorithms which can't "feel from the inside"?

Unless you assume some kind of un-physical substance having a causal effect on your brain and your continued existence after death, you are what your cognitive algorithm feels like when it's run on your brain wetware.

Please don't construct strawmen. I never once mentioned unphysical substances having any causal effect, nor do I believe in such. Actually from my perspective it seems to me that it is you who are referring to unphysical substances called "algorithms" "models", the "inside", etc. All these seem to me to be on the map, not on the territory.

And to say that I am my algorithm running on my brain doesn't help dissolve for me the question of qualia anymore than if some religious guy had said that I'm the soul controlling my body.

Replies from: MrMind
comment by MrMind · 2012-12-18T17:13:17.543Z · LW(p) · GW(p)

So what's the difference between cognitive algorithms with the ability of "feeling from the inside" and the non-cognitive algorithms which can't "feel from the inside"?

If I knew I would have already written an AI. This is like an NP problem: easy to check, hard to find a solution for. I know that the one running on my brain is of the first kind, and the one spouting Fibonacci numbers is not. I can only guess that it involves some kind of self-representation.

Please don't construct strawmen. I never once mentioned unphysical substances having any causal effect, nor do I believe in such.

Sorry if I seemed to do so, I wasn't attributing those beliefs to you, I was just listing the possible escape routes from the argument.

Actually from my perspective it seems to me that it is you who are referring to unphysical substances called "algorithms" "models", the "inside", etc. All these seem to me to be on the map, not on the territory.

Well, if you already do not accept those concepts, you need to tell me what your basic ontology is so we can agree on definitions. I thought that we already had "algorithm" covered by "Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be".

And to say that I am my algorithm running on my brain doesn't help dissolve for me the question of qualia anymore than if some religious guy had said that I'm the soul controlling my body.

That's because it was not the question that my sentence was answering. You have to admit that writing "I'm not sure I understand what it means for an algorithm to have an inside" is a rather strange way to ask "Please justify the way the sequence has in your opinion dissolved the qualia problem". If you're asking me that, I might just want to write an entire separate post, in the hope of being clearer and more convincing.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-18T19:53:44.496Z · LW(p) · GW(p)

If I knew I would have already written an AI.

I think this is confusing qualia with intelligence. There's no big confusion about how an algorithm run on hardware can produce something we identify as intelligence -- there's a big confusion about such an algorithm "feeling things from the inside".

Well, if you already do not accept those concepts, you need to tell me what your basic ontology is so we can agree on definitions.

It seems to me that in a physical universe, the concept of "algorithms" is merely an abstract representation in our minds of groupings of physical happenings, and therefore algorithms are no more ontologically fundamental than the category of "fruits" or "dinosaurs".

Now starting with a mathematical ontology instead, like Tegmark IV's Mathematical Universe Hypothesis, it's physical particles that are concrete representations of algorithms instead (very simple algorithms in the case of particles). In that ontology, where algorithms are ontologically fundamental and physical particles aren't, you can perhaps clearly define qualia as the inputs of the much-more-complex algorithms which are our minds...

That's sort-of the way that I would go about dissolving the issue of qualia if I could. But in a universe which is fundamentally physical it doesn't get dissolved by positing "algorithms" because algorithms aren't fundamentally physical...

Replies from: MrMind
comment by MrMind · 2012-12-21T10:52:56.906Z · LW(p) · GW(p)

I'm going to write a full-blown post so that I can present my view more clearly. If you want we can move the discussion there when it will be ready (I think in a couple of days).

comment by [deleted] · 2012-12-11T11:13:56.657Z · LW(p) · GW(p)

My description of 'elegance' admittedly did invoke agent-dependent concepts like 'unexpectedly' short or 'surprisingly' general.

I think elegance has to invoke agent-dependent concepts, because I think it's a composite description, one which involves intuitively modelling agents, or rather descriptions of agents. Intuitively, it feels like for something to be described as elegant it requires "an intent" or a goal - and a goal seems to require a description in which some thingy is trying to do something - which is then met in a way that satisfies some comparative criterion: this particular thing that met that particular goal was somehow different from your expectations.

In other words it might not even be possible to define elegance without invoking agent-related concepts. (Or maybe it's just whatever conception of elegance I had.)

comment by Irgy · 2012-12-11T04:42:39.006Z · LW(p) · GW(p)

rightness plays no role in that-which-is-maximized by the blind processes of natural selection

That being the case, what is it about us that makes us care about "rightness" then? What reason do you have for believing that the logical truth of what is right has more influence on human behaviour than it would on any other general intelligence?

Certainly I can agree that there's reasons to worry another intelligence might not care about what's "right", since not every human really cares that much about it either. But it feels like your expected level of caring is "not at all", whereas my expected level of caring is "about as much as we do". Don't get me wrong, the variance in my estimate and the risk involved is still enough to justify the SI and its work. I just wonder about the difference between the two estimates.

Replies from: Manfred
comment by Manfred · 2012-12-12T23:12:55.329Z · LW(p) · GW(p)

That being the case, what is it about us that makes us care about "rightness" then?

Biology and socialization? You couldn't raise a human baby to have any arbitrary value system, our values are already mostly set by evolution. And it so happens that evolution has pointed us in the direction of valuing rightness, for sound causal reasons of course.

comment by JoshuaFox · 2012-12-10T12:30:36.652Z · LW(p) · GW(p)

Is Schmidhuber's formalization of elegance the sort of thing you are seeking to do with rightness?

comment by timtyler · 2012-12-29T13:33:09.824Z · LW(p) · GW(p)

Unfortunately, this logical fact does not correspond to the truth-condition of any meaningful proposition computed by Clippy in the course of how it efficiently transforms the universe into paperclips, in much the same way that rightness plays no role in that-which-is-maximized by the blind processes of natural selection.

Evolution via natural selection is what gave you your moral sense in the first place. That is the result of selection on your ancestors' genes, selection on memes and selection on structures in your brain. Morality exists today because the genes and memes supporting it had good reproductive success in the past. Science even understands how selection favoured human moral systems these days - via a mixture of genetic and cultural kin selection, reciprocity, reputations and symbiology.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-29T22:08:40.086Z · LW(p) · GW(p)

Science even understands how selection favoured human moral systems these days - via a mixture of genetic and cultural kin selection, reciprocity, reputations and symbiology.

Um, no. There are a variety of postulated mechanisms, most (all?) of which are little more than just-so stories - from group selection to game theory. It is probable that the actual mechanism(s) have been suggested already, but Science does not know if that is the case. If it turned out tomorrow that aliens fell from the sky and gave us morality as an experiment then that would be surprising based on the prior probability, but it wouldn't contradict anything in the scientific literature.

Replies from: timtyler
comment by timtyler · 2012-12-29T22:45:35.230Z · LW(p) · GW(p)

There are a variety of postulated mechanisms, most (all?) of which are little more than just-so stories - from group selection to game theory.

Kin selection and reciprocity are "just so stories"? Hmm. Have fun with that step back into the scientific dark ages. Scientists know a lot about why humans cooperate and behave in a moral manner.

If it turned out tomorrow that aliens fell from the sky and gave us morality as an experiment then that would be surprising based on the prior probability, but it wouldn't contradict anything in the scientific literature.

Right - but the "aliens did it" explanation is a lot like the "god did it" one. Tremendously unlikely - but not completely disprovable. Most scientists don't require such a high degree of certainty.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-30T12:34:27.081Z · LW(p) · GW(p)

Kin selection and reciprocity are "just so stories"? Hmm. Have fun with that step back into the scientific dark ages.

Obviously, kin selection can be a genuine phenomenon in the right circumstances. So can group selection, although it's obviously much rarer. (I'm afraid I don't know as much about reciprocity, but I assume the same is true for that.) But while just-so stories like "populations that help each other will last longer" or "you are more likely to encounter relatives, so helping everyone you meet is a net win" may sound reasonable to humans, evolution is not swayed by such arguments.

Scientists know a lot about why humans cooperate and behave in a moral manner.

Source?

Right - but the "aliens did it" explanation is a lot like the "god did it" one. Tremendously unlikely - but not completely disprovable. Most scientists don't require such a high degree of certainty.

If we already knew and understood reasons why morality would evolve, and we learned that it was actually aliens, then that would mean that we were mistaken about said reasons (unless the aliens just simulated our future evolution, if you're fighting the counterfactual.)

Replies from: timtyler
comment by timtyler · 2012-12-30T13:41:54.843Z · LW(p) · GW(p)

Kin selection and reciprocity are "just so stories"? Hmm. Have fun with that step back into the scientific dark ages.

Obviously, kin selection can be a genuine phenomenon in the right circumstances. So can group selection, although it's obviously much rarer.

The modern scientific consensus is that kin selection and group selection are equivalent, explain the same set of phenomena and make the same predictions. For details of this see here. This has cleared up many of the issues relating to group selection - though there are still disagreements regarding what terminology and methodology it is best to use - and things like whether we really need two frameworks.

Scientists know a lot about why humans cooperate and behave in a moral manner.

Source?

Well, this is from my reading of the literature. Darwinism and Human Affairs laid out the basics in 1979. Some things have changed since then, but not the basics. We know more about the role of culture and reputations these days. Whitfield's "reputations" book has a good summary of the literature there.

A nice summary article about modern views of cooperation is here. Human cooperation is similar, but with culture and reputations playing a larger role.

Of course there are still disagreements in the field. However, if you look at recent books on the topic, there is also considerable consensus.

I'm not clear about why the "aliens did it" hypothesis is worth continued discussion. Scientists don't think that aliens gave us our moral sense. The idea reminds me more of medieval theology than science. All manner of bizarre discoveries could refute modern scientific knowledge in a wide range of fields. But in most cases, the chances of that happening look very slender. In which case: so what?

Replies from: wedrifid, MugaSofer
comment by wedrifid · 2012-12-31T03:50:30.202Z · LW(p) · GW(p)

The modern scientific consensus is that kin selection and group selection are equivalent, explain the same set of phenomena and make the same predictions.

This seems suspiciously similar to saying "kin selection exists and group selection basically doesn't" but with less convenient redefinition of "group selection".

Replies from: timtyler
comment by timtyler · 2013-01-01T13:21:16.409Z · LW(p) · GW(p)

They can't be equivalent if group selection doesn't exist - since kin selection is well established orthodoxy.

Both the kin selection and group selection concepts evolved after being invented. This is normal for scientific concepts: our ideas about gravity and light evolved in a similar manner.

comment by MugaSofer · 2012-12-30T19:19:19.283Z · LW(p) · GW(p)

The modern scientific consensus is that kin selection and group selection are equivalent, explain the same set of phenomena and make the same predictions. For details of this see here. This has cleared up many of the issues relating to group selection - though there are still disagreements regarding what terminology and methodology it is best to use - and things like whether we really need two frameworks.

...

Group selection.

[snip links]

I believe I already characterized such work as just-so stories and/or speculation in the absence of sufficient evidence. And showing that there are a wide variety of different hypotheses only proves my point: we do not know what caused morality. We know a few things that might have caused morality, but they are currently beyond our ability to evaluate. We also know a wide variety of theories that are demonstrably false, but sound persuasive to a cursory examination and thus survive in various forms.

I'm not clear about why the "aliens did it" hypothesis is worth continued discussion. Scientists don't think that aliens gave us our moral sense. The idea reminds me more of medieval theology than science.

I'm not saying aliens actually did give us our "moral sense", as you put it. In any case, who cares what it reminds you of? It's a perfectly lawful and coherent hypothesis.

All manner of bizarre discoveries could refute modern scientific knowledge in a wide range of fields. But in most cases, the chances of that happening look very slender. In which case: so what?

The whole point is that this one would not, in fact, refute modern scientific knowledge. How hard is this to understand? If aliens showed up and claimed to have given us hunger, or a sex drive, or blood, then we would laugh in their faces; even many subtle psychological heuristics and biases have well-understood evolutionary underpinnings. Morality does not. Presumably it has poorly-understood evolutionary underpinnings, but we do not know what they are.

Replies from: timtyler
comment by timtyler · 2012-12-30T20:00:52.727Z · LW(p) · GW(p)

Group selection

You're kidding, right? That page is a joke - and not in a good way.

And showing that there are a wide variety of different hypotheses only proves my point: we do not know what caused morality.

Not really. It is often true that if there are multiple explanations on the table, only one is right - or one is heavily dominant. Cooperation isn't really like that. There really are many different reasons why advanced organisms cooperate and are nice to each other under different circumstances. You could group most of them together by saying that niceness often pays, either directly or to kin - but that explanation is vague and kind-of obvious: we know a lot of details beyond that.

We surely have "sufficient evidence". We have all of recorded history. This isn't like the quest for an elusive high-energy particle - much of the relevant evidence is staring us in the face every day.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-31T18:58:06.169Z · LW(p) · GW(p)

You're kidding, right? That page is a joke - and not in a good way.

I disagree. It seems to me that your position is the joke, and that page is both accurate and informative. You are claiming, with a straight face, (unless you're a particularly subtle troll, I suppose,) that group selection occurs in nature, and is not a trivially-wrong explanation for human morality.

Not really. It is often true that if there are multiple explanations on the table, only one is right - or one is heavily dominant. Cooperation isn't really like that. There really are many different reasons why advanced organisms cooperate and are nice to each other under different circumstances. You could group most of them together by saying that niceness often pays, either directly or to kin - but that explanation is vague and kind-of obvious: we know a lot of details beyond that.

"Niceness" pays under certain, highly-specific circumstances. Under almost all circumstances, it does not. The fact that their are people trying to claim that humans fall into many sepaerate categories that result in "niceness" being selected for could, indeed, be due to to us falling into all these categories. More likely, however, is that this is caused by people simply making up reasons why morality should evolve - starting with the bottom line and filling the page with justifications.

We surely have "sufficient evidence". We have all of recorded history. This isn't like the quest for an elusive high-energy particle - much of the relevant evidence is staring us in the face every day.

The fact that something happened is evidence that there is some explanation why it happened, a fact that I have repeatedly acknowledged. It is not necessarily sufficient evidence for a specific reason why it happened. This is so basic that I'm kind of shocked you're actually making this argument; I have updated my estimation that you are a troll based on it. Of course, it's possible that I have misinterpreted the argument you were trying to make, but assuming I haven't: there are many things that we know evolved but whose evolution we do not understand. The number of these has gone down over time, as our understanding of evolution and raw data about biology have increased. But there are still open problems in evolutionary biology, and human morality is one of them. This should not be a surprise; we know relatively little about human evolution considering how intensively our biology has been studied, thinking about the "source of morality" is particularly vulnerable to certain biases, and most people tend to construct "just-so stories" when attempting to reason about evolutionary biology.

Replies from: Vaniver, timtyler
comment by Vaniver · 2012-12-31T19:28:48.305Z · LW(p) · GW(p)

that page is both accurate and informative.

That page is an example of reversed stupidity. timtyler's early statement that "kin selection and group selection are equivalent" should have tipped you off to him not making that particular mistake.

comment by timtyler · 2013-01-01T14:16:06.184Z · LW(p) · GW(p)

You're kidding, right? That page is a joke - and not in a good way.

I disagree. It seems to me that your position is the joke, and that page is both accurate and informative.

No, it's uninformed, out of date and incorrect. It's one of the more embarrassing pages on the wiki.

You are claiming, with a straight face, (unless you're a particularly subtle troll, I suppose,) that group selection occurs in nature, and is not a trivially-wrong explanation for human morality.

So, that is what the modern scientific consensus on group selection says. It has kin selection and group selection making the same predictions. It's been known since the 1970s that there was massive overlap between the concepts. In the last decade, most of the scientists in these fields have publicly recognised that the quest to find which is a superset of the other has petered out - and now we have:

I think most evolutionists now agree that kin and group selection are the same thing.

  • Peter Richerson, 2012

There is widespread agreement that group selection and kin selection — the post-1960s orthodoxy that identifies shared interests with shared genes — are formally equivalent.

  • Marek Kohn, 2008

It is remarkable that kin selection has been widely accepted and group selection widely disparaged when, for simple genetic models, they are actually equivalent mathematically.

  • Michael Wade

Inclusive fitness theory, summarised in Hamilton’s rule, is a dominant explanation for the evolution of social behaviour. A parallel thread of evolutionary theory holds that selection between groups is also a candidate explanation for social evolution. The mathematical equivalence of these two approaches has long been known.

  • James Marshall

Kin selection explains phenomena such as human lactation. Group selection explains it too (it makes the same predictions). Kin selection largely explains parental care and nepotism - which have a moral dimension. QED.
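
For readers wondering what "formally equivalent" means in those quotes, here is a minimal sketch (the notation is illustrative, not drawn from the quoted authors). The multilevel Price equation partitions the change in the population mean of a trait z, for a population divided into groups indexed by k, as

    \bar{w}\,\Delta\bar{z} \;=\; \mathrm{Cov}_k\!\left(W_k,\, Z_k\right) \;+\; \mathrm{E}_k\!\left[\mathrm{Cov}\!\left(w_{ik},\, z_{ik}\right)\right]

The first term - the covariance between group-mean fitness and group-mean trait - is the "group selection" component; the second - the average within-group covariance between individual fitness and individual trait - is the "individual selection" component. Regressing an individual's fitness on its own trait value and its social partners' trait values instead yields Hamilton's rule, rb > c. Both are bookkeeping schemes for the same total selection differential, which is the sense in which the two frameworks make the same predictions for simple genetic models.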

"Niceness" pays under certain, highly-specific circumstances. Under almost all circumstances, it does not.

That's a straw-man characterisation of the idea. It pays in enough cases for it to evolve genetically. Not all cooperation is due to DNA genes (some is due to culture), but cooperation is a widespread phenomenon, and most scientists agree that humans have niceness in their DNA genes - more than their nearest relatives (chimpanzees and bonobos) do.

We surely have "sufficient evidence". We have all of recorded history. This isn't like the quest for an elusive high-energy particle - much of the relevant evidence is staring us in the face every day.

The fact that something happened is evidence that there is some explanation why it happened, a fact that I have repeatedly acknowledged. It is not necessarily sufficient evidence for a specific reason why it happened. This is so basic that I'm kind of shocked you're actually making this argument; I have updated my estimation that you are a troll based on it. Of course, it's possible that I have misinterpreted the argument you were trying to make, but assuming I haven't: there are many things that we know evolved but whose evolution we do not understand.

I think you have the wrong end of the stick there. To recap, you wrote:

I believe I already characterized such work as just-so stories and/or speculation in the absence of sufficient evidence.

I claimed that we do have enough evidence to go on. I stand by that. Without performing any more experiments, we have enough information to figure out the evolutionary basis of human morality, in considerable detail. Basically, we have an enormous mass of highly pertinent information. It's more than enough to go on, I reckon.

comment by Peterdjones · 2012-12-10T21:39:08.172Z · LW(p) · GW(p)

I think that addressing metaethics conceptually, or as you would say logically, is the right way to go. I also think that doing so is just the kind of armchair conceptual analysis that philosophers go in for, and which is regularly criticised here.

comment by HalMorris · 2012-12-12T05:30:45.449Z · LW(p) · GW(p)

Amartya Sen suggests we look at our intuition about what we mean by fairness, and gives an example in which three people (coincidentally, as in the example with the pie) argue in effect that:

1) Fairness is about equal distribution (i.e. tending towards socialism)
2) Fairness is about making the best use of whatever is available, w.r.t. "the greatest good for the greatest number"
3) Fairness is about being able to keep what you produced (i.e. tending towards "taxation is theft")

He argues that anyone not dwelling in some cloud cuckoo land of the mind designed by Marx, Rand, or whomever is likely to see merit in all three claims, and can't totally trash any of them, but the social contract theorists take the wrong-headed approach that we must settle the question once and for all, and then what? Well if we've discovered the absolute truth then it's our duty to impose it on everybody else.

The interpretation I come away with is that we should go and start looking for the most glaring examples of unfairness by any of the three versions (without trying to rank them), and try to make a significant move towards resolving those. No overarching theory or contract required; no getting everyone in alignment first.

Replies from: MugaSofer, Eugine_Nier
comment by MugaSofer · 2012-12-14T12:21:53.377Z · LW(p) · GW(p)

Amartya Sen suggests we look at our intuition about what we mean by fairness

I'm not sure what other method we could use for defining fairness.

Replies from: HalMorris
comment by HalMorris · 2012-12-14T20:34:40.100Z · LW(p) · GW(p)

One might use a system of objective ethics based on an epistemology whose bedrock foundation is "Existence exists!".

It's been tried.

Replies from: nshepperd, MugaSofer
comment by nshepperd · 2012-12-16T00:15:37.271Z · LW(p) · GW(p)

Well, that sounds about as likely to correctly define the word "fair" as to correctly define the word "banana".

comment by MugaSofer · 2012-12-15T15:27:46.046Z · LW(p) · GW(p)

Source?

Replies from: HalMorris
comment by HalMorris · 2012-12-15T19:07:13.288Z · LW(p) · GW(p)

Ayn Rand, either the John Galt speech in Atlas Shrugged, or her book Objectivist Epistemology. This is not a passing bit pulled out of context; she really does reiterate it over and over as a key to understanding -- also sometimes equating it to Aristotle's great discovery that A = A.

If you construe "existence" as the set of all things that exist, then it is a refusal to accept Russell's paradox (we can't define sets in terms of something more elementary, but we can think about properties they should not have - and a set belonging to itself is not a good thing, i.e. it leads to a contradiction).
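
For concreteness, a sketch of the standard argument, in ordinary set-theoretic notation (the notation is an illustration, not part of the comment above): suppose there were a set V of all things that exist. Since the set R defined below would itself exist, it would be a member of V, and then

    R = \{\, x \in V : x \notin x \,\}
    R \in R \iff (R \in V \wedge R \notin R) \iff R \notin R

which is a contradiction. So there can be no such universal set; "existence", read as a set of everything, is not a well-formed totality.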

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T19:54:55.547Z · LW(p) · GW(p)

Speaking as someone who has never read a word written by Ayn Rand, how the hell does one get from there to anything approaching "fairness" or "morality"? Genuinely asking here.

Replies from: HalMorris
comment by HalMorris · 2012-12-15T22:01:32.753Z · LW(p) · GW(p)

That's quite a big mystery that I'm far from solving. I'm pretty much with Hume that you can't get "ought" from "is". As with the Bible, the Koran, and the Book of Mormon, I couldn't bear the sort of exclusive attention it would take to read Atlas Shrugged, so I listened to it on audio -- all 60+ hours. I made a late discovery that Audible.com lets you speed up their audio, and so towards the end was listening to it at 2x normal speed. The John Galt speech lasts 4 hours in normal time or 2 at 2x speed. He supposedly just appeared with some pirate radio system or hijacked the state's system and proceeded to lecture the world for 4 hours, and the implication is that after this event, everything will be different. The language has a lot in common with those other books I mentioned: lots of "blah, blah, blah, and this is an intolerable abomination".

Anyway, for remedial education, if you are one of those people, like me, who hasn't read everything, audio can help a lot, and there is a treasure trove of public domain texts on audio at Librivox.org in case you didn't know. I'm starting to listen to Kant. 2x speed won't work, but I think 1.4x may for me. Sometimes I find slow reading more distracting to the other ongoing activity, such as driving, than I find fast reading. Audacity software can change the effective speed of MP3 files (without producing a chipmunk-like effect).
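
(If you'd rather script the speed-up than do it by hand in Audacity, here is a minimal sketch in Python - assuming the librosa and soundfile libraries are installed, with an mp3 decoder such as ffmpeg available on the system; the filenames are made up:)

    # Minimal sketch: speed up an audio file 1.4x without the chipmunk effect.
    # Time-stretching changes tempo while preserving pitch. Filenames are hypothetical.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("kant_critique.mp3", sr=None)   # decode at the native sample rate
    y_fast = librosa.effects.time_stretch(y, rate=1.4)   # 1.4x faster, same pitch
    sf.write("kant_critique_1.4x.wav", y_fast, sr)       # write the sped-up audio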

As usual I've free associated a bit but hope there is something useful in it.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-15T22:21:03.358Z · LW(p) · GW(p)

The John Galt speech lasts 4 hours

... wow.

Anyway, if Objectivists are claiming to have reached morality from tautology, I'm inclined to throw that in with all the other nonsense they spout that I know for a fact to be wrong. Now that you say it, I do recall seeing something along the lines of "the fundamental truth that A=A" in an Objectivist ... I don't want to say rant, it was pretty short ... but I don't recall noticing an actual, rational argument in there so it's probably trivially wrong.

Replies from: HalMorris
comment by HalMorris · 2012-12-15T22:27:07.350Z · LW(p) · GW(p)

Incidentally, the mid-20c school of thought called "General Semantics" held that A != A (or at least not always), and their logo was an A with a bar over it; I think it may have helped inspire the early cognitive psychologist Albert Ellis to invent a new language called E Prime, which was simply English with all forms of the verb "to be" removed. He is supposed to have written one book in E Prime, and as far as I know that was the end of it.

Anyway General Semantics preceded Ayn Rand, so maybe it ticked her off.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-15T22:46:41.891Z · LW(p) · GW(p)

Is that where van Vogt's Ā comes from?

Replies from: beoShaffer, HalMorris
comment by beoShaffer · 2012-12-16T04:29:21.357Z · LW(p) · GW(p)

Yes.

comment by HalMorris · 2012-12-15T23:21:15.254Z · LW(p) · GW(p)

Sorry, no idea.

comment by Eugine_Nier · 2012-12-13T05:13:54.592Z · LW(p) · GW(p)

Well if we've discovered the absolute truth then it's our duty to impose it on everybody else.

Why? As Eliezer says here:

The syllogism we desire to avoid runs: "I think Susie said a bad thing, therefore, Susie should be set on fire."

Replies from: HalMorris
comment by HalMorris · 2012-12-13T17:16:49.354Z · LW(p) · GW(p)

Why? As Eliezer says here:

I can't tell what your point of view is on this. E. seems to be arguing (rightly imho) that we have an interest in other people's "truths".

It may help to know that "Well if we've discovered the absolute truth then it's our duty to impose it on everybody else." wasn't my attempt at establishing a real norm, but rather I was following a not too uncommon (or rather not uncommon enough) way of thinking.

The syllogism we desire to avoid runs: "I think Susie said a bad thing, therefore, Susie should be set on fire."

Yes, I'd like to avoid that sort of ..um.. proposal -- I can't quite see why one would call it a syllogism.

Replies from: wedrifid
comment by wedrifid · 2012-12-13T22:44:20.894Z · LW(p) · GW(p)

Yes, I'd like to avoid that sort of ..um.. proposal -- I can't quite see why one would call it a syllogism.

People would act as if it is a syllogism if they had one of the relevant (and not especially uncommon/unrealistic) premises. It would be a syllogism like...

  • Susie said a bad thing
  • People who say bad things should be set on fire
  • Therefore, Susie should be set on fire.
comment by HalMorris · 2012-12-12T18:18:34.217Z · LW(p) · GW(p)

One of the most heinous modern "new media" practices is the repurposing of my browser's back button. God help LW if they've done that, but it looks like maybe they have. All I know is somehow the site shunted me into a one-way dead end street so I had to get out and start all over again.

Tell me it isn't so.

Replies from: Alicorn
comment by Alicorn · 2012-12-12T19:15:53.365Z · LW(p) · GW(p)

I don't think we have any features like this. If you describe exactly what happened to this guy, he may be able to figure out what's wrong.

Replies from: HalMorris
comment by HalMorris · 2012-12-13T17:46:57.726Z · LW(p) · GW(p)

I was just blowing off steam about a practice I hate which is more and more common in (mostly commercial) web sites: when they've got you where they want you, they disallow going back. I don't know how it really happened; it might have been all my fault.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-13T17:52:55.863Z · LW(p) · GW(p)

It's probably cock-up rather than conspiracy.

Replies from: HalMorris
comment by HalMorris · 2012-12-13T17:58:46.549Z · LW(p) · GW(p)

On commercial sites, I sometimes end up on a page from which the back key won't work, where my only option is to make some sort of commitment (click "Yes! I'm just dying to know about ..."), and if I try to kill the window, I get a pop-up that says "Wait! You don't really want to navigate away from this page, do you?"

I'd call it design, though I wouldn't call it conspiracy.

Replies from: shminux
comment by Shmi (shminux) · 2012-12-13T18:11:54.941Z · LW(p) · GW(p)

Holding down the back button should show you the full history; just select one of the pages farther back. I am not aware of any sites blocking that feature. You will still get the popup, though.

Replies from: HalMorris
comment by HalMorris · 2012-12-13T19:03:36.955Z · LW(p) · GW(p)

Thank you! That's extremely helpful. The list of previous pages used to be part of a "Go to" button which disappeared, and I thought the functionality was lost forever.

comment by HalMorris · 2012-12-12T05:45:40.694Z · LW(p) · GW(p)

I'm utterly new to this forum and have been excited to learn of its existence, but for years I have tried to run my own one-man show of a project of discovering and promoting "practical epistemology", in reaction to what looks like a massive breakdown in common sense w.r.t. the recognition of which sources are likely to be reliable (e.g. trusting anonymous email that was probably churned out in some movement-conservative boiler room, and totally distrusting, say, the climate science community). I started out wanting to be deep and philosophical but got sidetracked, as illustrated by http://therealtruthproject.blogspot.com/2011/08/my-not-really-right-wing-mom-and-her.html

I was hoping "Less Wrong" might have more bias towards "the good" and less towards the perfect. Does that resonate with anyone?

I totally acknowledge that somebody's got to try to ward off the worst possibilities of the AI "singularity", but I'm more worried at the moment about a really bad variation on Arthur C. Clarke's Childhood's End.

Replies from: 9eB1, MugaSofer, Emile, Tenoke
comment by 9eB1 · 2012-12-12T14:52:31.878Z · LW(p) · GW(p)

Since your blog posts are almost entirely (partisan) political in nature, you should know that traditional political discussion is discouraged here in most threads, except the monthly politics thread. The idea that political discussion is often broken is generally called Politics is the Mindkiller, and there is a whole sequence of old posts on the topic.

Replies from: HalMorris
comment by HalMorris · 2012-12-12T17:40:10.747Z · LW(p) · GW(p)

I'm actually quite happy with the "Politics is the Mindkiller" meme and wish it all the success in the world. Those blogs represent myself in a world in which LW was unknown.

On the other hand, the underlying concern isn't so much (partisanly) political, but rather about trying to take on the epistemology (or anti-epistemology) of propagandists and liars. As it happens, movement conservatives seem to be the greatest modern-day masters of the art and are most willing to use it, whereas once upon a time, it was Marxist-Leninists.

Robert McChesney, possibly still an unrepentant Marxist, has admirably stated that the consummation most devoutly to be wished is a sane media environment - one powerful at exposing what's really happening and well attended to by the populace (I'm heavily paraphrasing a half-remembered statement) - and that he'd then be content to see people work out whatever politics they work out.

comment by MugaSofer · 2012-12-12T10:53:02.361Z · LW(p) · GW(p)

I'm more worried at the moment about a really bad variation on Arthur C. Clarke's Childhood's End.

I haven't read that. Could you clarify?

comment by Emile · 2012-12-12T14:44:23.792Z · LW(p) · GW(p)

Welcome to LessWrong! You may want to introduce yourself in the Welcome Thread (though it's getting a bit old and huge).

I don't know what you mean by having "more bias towards the good and less towards the perfect", so it doesn't resonate with me :)

(I checked out your blog but it seems to talk an awful lot about the minutiae of US politics; as a Frenchman I can't relate much to those chain emails you seem to talk a lot about :P)

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T15:04:43.801Z · LW(p) · GW(p)

I don't know what you mean by having "more bias towards the good and less towards the perfect", so it doesn't resonate with me :)

Practicality, I should think.

Replies from: HalMorris
comment by HalMorris · 2012-12-12T23:19:58.786Z · LW(p) · GW(p)

Sort of. Maybe I should have said "better" (instead of "good") vs. perfect. There is an attitude prevalent in many disciplines (and in some indisciplines) that "optimization problems, theorems, whatever, are always the greatest all-purpose tools", so you have utilitarianism, Pareto optimality, Arrow's Theorem (or rather attempts to "fix" it), ... but my semi-educated guess is that in trying to distil some problem inspired by the real world into an optimization problem, you have to put some of the terms into a Procrustean bed, so they come out stretched, or missing heads or feet, or something like that.

"Better" is the name of a recent book by the way. Anybody read it?

comment by Tenoke · 2012-12-12T12:36:36.064Z · LW(p) · GW(p)

I totally acknowledge that somebody's got to try to ward off the worst possibilities of the AI "singularity", but I'm more worried at the moment about a really bad variation on Arthur C. Clarke's Childhood's End.

As in you are afraid that we are going to be assimilated by the AI?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T13:00:05.495Z · LW(p) · GW(p)

He's talking about something other than SAI. Probably aliens.

Replies from: HalMorris
comment by HalMorris · 2012-12-12T17:24:00.299Z · LW(p) · GW(p)

Well, there were some aliens involved.

First off, w.r.t. my saying somebody's got to try to ward off the worst possibilities of the AI "singularity", that is to give due respect to what (correct me if I'm wrong) seems to be the primary purpose of the SI, and Eliezer_Yudkowsky's avowed life purpose (based on bloggingheads conversations ca 2009-10).

The Childhood's End analogy was pretty off the cuff, and a "really bad variation" of it may or may not be, on reflection, a good analogue for any danger to present society, but here's the gist of the book, which imho is probably the most interesting thing Clarke ever wrote (though I'm not well read in Clarkeana, and hardly even a sci-fi fan since I was 16/17 around 44 years ago). Anyway, here goes:

[Spoiler alert] It is the future (ha ha), and children and young folks are beginning to act peculiarly, to speak in private languages, and indeed to understand each other in an alarmingly rapid way (that is, old folks are blown away by it). This is the picture you get maybe halfway through the book.

At some point, aliens appear who call themselves "midwives", who facilitate a process by which everyone who's not too old and unmalleable merges somehow into one big mind. The aliens admit that for some reason their species just can't manage it at all, and so they can only roam the universe looking for planets containing intelligent life forms on the verge of such a transition, and ease the birthing pains. These are genuinely well-meaning aliens, not like the ones who carry the book To Serve Man (It's a cookbook!).

Is it too silly to say the present world has some resemblance to the early part of the book? But I don't think the alien cavalry will show up, and there's no way in hell we're tending towards one great supermind, but we might at least somewhat resemble several semi-superminds communicating at the speed of light, each with its own separate reality, and each paranoid (more or less justifiably, unless the vicious circle can be broken) and in some cases violently inclined towards the others.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T19:51:26.629Z · LW(p) · GW(p)

... oh. You're worried the internet will eat you?

Replies from: HalMorris
comment by HalMorris · 2012-12-12T21:12:38.940Z · LW(p) · GW(p)

What, did you only read the word "cookbook"? That was quite tangential.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T09:10:32.780Z · LW(p) · GW(p)

It is the future (ha ha), and children and young folks are beginning to act peculiarly, to speak in private languages, and indeed to understand each other in an alarmingly rapid way (that is, old folks are blown away by it).

[...]

Is it too silly to say the present world has some resemblance to the early part of the book?

If that didn't refer to the 'net, what did it refer to?

(I have encountered this idea elsewhere, so I may have pattern-matched.)

Replies from: HalMorris
comment by HalMorris · 2012-12-13T16:48:30.891Z · LW(p) · GW(p)

Who said it doesn't refer to the net? Of course it does. The Internet is inevitable, and in many ways great, but it also presents problems that we pay some attention to, much as subatomic physics, and its corollary atomic energy, do. It is reasonably arguable (whether true or not) that Nazism would never have happened without the radio, or that the USSR's police state required the telephone and other high speed means of communication.

While thinking about how great these things are, I think we'd be wise to do some thought experiments on what possibly catastrophic and unforeseen consequences they might facilitate. Not in order to outlaw them, but so as not to be totally clueless at spotting those consequences in case they do manifest.

(And pure thought is somewhat overrated. The extent to which our armed forces remain competent depends largely on war games. But there was a big blind spot if we didn't have a very active terrorist "red team" trying to cook up whatever possibilities the current environment presents (e.g. box knives, and open-enrollment classes in flying 747s).)

Replies from: Eugine_Nier, MugaSofer
comment by Eugine_Nier · 2012-12-14T02:42:49.646Z · LW(p) · GW(p)

It is reasonably arguable (whether true or not) that Nazism would never have happened without the radio, or that the USSR's police state required the telephone and other high speed means of communication.

The USSR's police state required high-speed one-to-many means of communication. The Soviet leadership was absolutely terrified of many-to-many means of communication, going so far as to impose extremely tight controls on access to photocopiers; even most high-level members of the party couldn't get access.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-14T12:11:50.451Z · LW(p) · GW(p)

Well, in fairness, photocopiers were commonly used for making posters, flyers and so on, especially back then.

Replies from: Peterdjones
comment by Peterdjones · 2012-12-14T12:30:03.734Z · LW(p) · GW(p)

It's not that it was irrational.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-14T12:46:05.624Z · LW(p) · GW(p)

The Soviet leadership was absolutely terrified of many-to-many means of communication, going so far as to impose extremely tight controls on access to photocopiers; even most high-level members of the party couldn't get access. [emphasis added]

That would seem to imply that it was an overreaction, demonstrating the depths of their paranoia, or at least that's how I interpreted it.

comment by MugaSofer · 2012-12-13T19:14:15.143Z · LW(p) · GW(p)

It is reasonably arguable (whether true or not) that Nazism would never have happened without the radio

It is? I can't say I've ever heard that before. Could you elaborate?

Who said it doesn't refer to the net?

... you did? I thought?

For clarity: are you or are you not worried that the internet will evolve into a superintelligence(s), taking us with it?

Replies from: HalMorris
comment by HalMorris · 2012-12-14T03:34:34.769Z · LW(p) · GW(p)

It is? I can't say I've ever heard that before. Could you elaborate?

As it was a casual remark in passing, I don't plan to debate, and "reasonably arguable" is a fairly low bar. But, Hitler had a mesmerizing speaking presence, at least for the people he connected with. He probably would never have amounted to anything except that somebody in the German establishment, wanting to quell the chaos that followed the end of WWI, hired him to lecture groups of soldiers to rein them in, and he "discovered he had a voice". Once he became chancellor, it took 3-4 years to go from fairly chaotic thuggery against Jews and, over time, whoever would not return the Hitler salute, to even get to Kristallnacht, and in that time he perfected the art of haranguing all Germans at one time. If you didn't have your radio tuned in to his speeches, your neighbour might report your unpatriotic behaviour.

For clarity: are you or are you not worried that the internet will evolve into a superintelligence(s), taking us with it?

It seems like one of the least of our worries. As a medium, I think it's one factor among many in laying the ground for people getting more and more into separate and hostile mental universes, such that a high percentage of people can believe that Obama is a Muslim and a Marxist (at the same time), and that global warming is a hoax which is part of an international conspiracy to turn the world into one socialist state. It used to be rare to find someone who thought the moon landings were faked, but now I think certainly 15-30% of Americans have delusions of that magnitude.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-14T11:08:44.403Z · LW(p) · GW(p)

As it was a casual remark in passing, I don't plan to debate, and "reasonably arguable" is a fairly low bar. But, Hitler had a mesmerizing speaking presence, at least for the people he connected with. He probably would never have amounted to anything except that somebody in the German establishment, wanting to quell the chaos that followed the end of WWI, hired him to lecture groups of soldiers to rein them in, and he "discovered he had a voice". Once he became chancellor, it took 3-4 years to go from fairly chaotic thuggery against Jews and, over time, whoever would not return the Hitler salute, to even get to Kristallnacht, and in that time he perfected the art of haranguing all Germans at one time. If you didn't have your radio tuned in to his speeches, your neighbour might report your unpatriotic behaviour.

Oh, I wasn't disputing, just asking for more information.

As a medium, I think it's one factor among many in laying the ground for people getting more and more into separate and hostile mental universes, such that a high percentage of people can believe that Obama is a Muslim and a Marxist (at the same time), and that global warming is a hoax which is part of an international conspiracy to turn the world into one socialist state. It used to be rare to find someone who thought the moon landings were faked, but now I think certainly 15-30% of Americans have delusions of that magnitude.

Oh, I see. I latched on to the wrong part of your summary

...well, I can see your point, certainly. I'm not sure if you're factoring in the increased ease of encountering opposing viewpoints, but I suspect you are :/

comment by HalMorris · 2012-12-12T05:10:09.504Z · LW(p) · GW(p)

W.r.t. "You could even say that certain proofs are elegant even if no conscious agent sees them.", this can only happen if you are not a conscious agent -- at least to meaningfully say certain proofs are elegant, you have to see them.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T10:55:21.851Z · LW(p) · GW(p)

Um, no. That's like saying that there aren't really six apples until you count them.

Replies from: HalMorris
comment by HalMorris · 2012-12-12T16:47:17.852Z · LW(p) · GW(p)

Absolutely not a good analogy - or rather, only if you think elegance is an objective property, like the number 6. Granted, it rather looked like that's the direction in which EY's argument was going. Besides, I didn't react to the statement "certain proofs are elegant...", but rather to the statement "You could .. say that certain proofs are elegant even if no conscious agent sees them", whereas in truth you can't say (in a meaningful way - though you can mouth any words you want, I'm not denying that) that a proof is elegant unless you see it. Maybe there's a way to object if one takes "see" too literally -- you can't say anything meaningful unless you've seen it, or a shadow of it, or a function or extract of it, or at least a number output by, say, an "elegance measuring algorithm". But in the latter case you would just be parroting the evaluation of the algorithm, which doesn't seem all that meaningful come to think of it -- so it would have to be a shadow, function, etc. that conveys enough of the original so that your mental process might have something to add.

Replies from: MugaSofer, lavalamp
comment by MugaSofer · 2012-12-12T19:57:31.170Z · LW(p) · GW(p)

No, you can't know that a proof is elegant until you see it. Quite different.

Replies from: wedrifid, HalMorris
comment by wedrifid · 2012-12-13T01:43:07.620Z · LW(p) · GW(p)

No, you can't know that a proof is elegant until you see it. Quite different.

I'd be surprised if this is actually true. There are features of a proof that can themselves be proven without actually identifying the proof itself.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T09:16:35.238Z · LW(p) · GW(p)

You can know about things without observing them? Excellent! I could do with a map of New York, you see, but I'm much too busy to go there and draw one...

Seriously, though, you may have misunderstood a part of this conversation.

Replies from: wedrifid
comment by wedrifid · 2012-12-13T12:48:38.609Z · LW(p) · GW(p)

You can know about things without observing them?

Yes, I recommend looking into the novel new divination techniques "Physics" and "Mathematics". The former allows one to form a tolerably accurate model of the present based on knowledge of precursor states. The latter allows reasoning about the logical implications of assumed axioms.

Excellent! I could do with a map of New York, you see, but I'm much too busy to go there and draw one...

Which brings us to the third mystic divination art: Google it.

Next time, try opening with that.

Seriously, though, you may have misunderstood a part of this conversation.

Instead consider that disagreement with a particular claim of yours does not, in fact, imply that I support your opponent's position. In fact, it doesn't imply that I care about the rest of the conversation at all. The particular claim about what can and cannot be known about a proof without seeing (or actually deriving) said proof is the only part remotely interesting. It is intuitive but likely to be false.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T14:24:22.628Z · LW(p) · GW(p)

Wait ... are you suggesting that, say, I could predict the elegance of a proof without observing it, perhaps by my awareness that it was formulated by someone who values elegance?

Well, I can't argue with that. Of course, it is somewhat irrelevant to this discussion, but ... fair enough, I suppose. Quibble accepted.

The amended version: "You can't know that a proof is elegant until someone sees it."

Replies from: thomblake
comment by thomblake · 2012-12-13T19:36:37.952Z · LW(p) · GW(p)

You can't know that a proof is elegant until someone sees it.

Sorry, that doesn't capture it either. You can prove all sorts of things about a proof that nobody's found yet, without actually finding the proof yet. It would not be terribly surprising if elegance was one of those things.

Replies from: MugaSofer, wedrifid
comment by MugaSofer · 2012-12-13T20:10:08.678Z · LW(p) · GW(p)

Oh. OK.

You're absolutely right. I hadn't thought of that. Point, I guess.

comment by wedrifid · 2012-12-13T21:20:45.397Z · LW(p) · GW(p)

Sorry, that doesn't capture it either. You can prove all sorts of things about a proof that nobody's found yet, without actually finding the proof yet. It would not be terribly surprising if elegance was one of those things.

Thanks thomblake, this was what I was getting at. It is likely to be possible to prove some things such as "There exists a proof of X that is below complexity measure Y" while also knowing "X is perceived by the relevant audience to have complexity of at least Z". That could be the kind of information that allows us to expect that the proof will be perceived as "elegant".
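
(A classic mathematical illustration of proving a fact about an object without identifying the object - offered only as an analogy, not as an example from this thread:)

    Claim: there exist irrational numbers a, b with a^b rational.
    Consider \sqrt{2}^{\sqrt{2}}.
    If it is rational, take a = b = \sqrt{2}.
    If it is irrational, take a = \sqrt{2}^{\sqrt{2}} and b = \sqrt{2}; then
        a^b = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{2} = 2.
    Either way such a pair exists, though the argument never settles which case holds.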

Replies from: thomblake
comment by thomblake · 2012-12-13T21:30:35.887Z · LW(p) · GW(p)

If I am remembered for anything, it will be for elucidating the words of wiser men.

On a tangential note, is there a word I could have used above instead of "men" that would preserve the flow but is gender-neutral? I couldn't find one. Ideally one falling syllable.

ETA: The target word should probably end in a nasal or approximate consonant, or else a vowel.

Replies from: BerryPick6, TheOtherDave, Richard_Kennaway, shminux
comment by BerryPick6 · 2012-12-13T22:25:08.257Z · LW(p) · GW(p)

On a tangential note, is there a word I could have used above instead of "men" that would preserve the flow but is gender-neutral? I couldn't find one. Ideally one falling syllable.

'Minds'? 'Tongues'?

Replies from: thomblake
comment by thomblake · 2012-12-14T15:37:14.001Z · LW(p) · GW(p)

"Minds" is pretty good, and I also like "souls", but "wiser" seems like the wrong adjective in both cases and the ending fricative is less pleasant. Still, I think I'll use one of those in the future when formulating such statements. Thanks!

comment by TheOtherDave · 2012-12-13T22:15:33.522Z · LW(p) · GW(p)

Were I writing it, I would likely go with "it will be for elucidating the words of those wiser than I."

But if you insist on the structure, perhaps "folk"?

Replies from: Kindly, MixedNuts, thomblake
comment by Kindly · 2012-12-14T14:36:32.067Z · LW(p) · GW(p)

The correct pronoun to use, if you insist, is "those wiser than me" (or "those wiser than I am"). Normally I wouldn't be correcting you, but someone who puts an "I" in that sentence probably cares about pronouns.

Replies from: army1987, TheOtherDave
comment by A1987dM (army1987) · 2012-12-14T20:06:08.269Z · LW(p) · GW(p)

“Than” governing nominative pronouns is widely attested, especially in older texts (I think the standard analysis is that there's an implicit verb after it); it's just terribly stilted these days.

Replies from: thomblake
comment by thomblake · 2012-12-14T20:10:04.828Z · LW(p) · GW(p)

Thank you.

comment by TheOtherDave · 2012-12-14T21:58:59.839Z · LW(p) · GW(p)

You are absolutely correct... and yet, I would probably keep "I" there.

comment by MixedNuts · 2012-12-13T23:11:21.101Z · LW(p) · GW(p)

Or "ones".

comment by thomblake · 2012-12-14T15:22:53.572Z · LW(p) · GW(p)

Thanks - "folk" technically fits the requirements, but totally changes the feel. I'm not sure you can say "folk" and still sound solemn. And I'm not a fan of the hard ending consonant. You're definitely casting a wider net than I was though, and I now imagine there's something to be found.

Replies from: TimS
comment by TimS · 2012-12-14T15:27:22.488Z · LW(p) · GW(p)

For me, Berrypick6's suggestion of "minds" has the denotative formality that you desire. Can't comment on the phonetics.

comment by Richard_Kennaway · 2012-12-14T12:46:22.920Z · LW(p) · GW(p)

"Heads"? "...the words of those wiser than me"?

comment by Shmi (shminux) · 2012-12-14T00:08:43.741Z · LW(p) · GW(p)

What's wrong with "people" (plural) or "person" (sing.)?

Replies from: wedrifid
comment by wedrifid · 2012-12-14T11:50:50.399Z · LW(p) · GW(p)

What's wrong with "people" (plural) or "person" (sing.)?

Nothing in denotative expression but a lot in terms of poetic flow and syllable count. Substituting "people" into that context just wouldn't have sounded pretty. In fact it would make the attempt at eloquent elucidation seem contrived and forced---leaving it worse off than if the meaning had just been conveyed unadorned and without an attempt to appear quotable and deep.

I was actually surprised by TheOtherDave's response. My poetic module returned null and I was somehow fairly certain that there just wasn't a word that would fit the requirements.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-14T20:11:03.253Z · LW(p) · GW(p)

Yes. With “people” it sounds less like an epitaph and more like a commercial slogan to me. My favourite suggestion so far is MixedNuts'.

comment by HalMorris · 2012-12-12T22:33:32.621Z · LW(p) · GW(p)

If you say it but don't know it, well that's why I said "can't meaningfully...".

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T08:54:56.882Z · LW(p) · GW(p)

A hypothesis is true or false before it is tested.

Replies from: HalMorris
comment by HalMorris · 2012-12-13T17:02:23.019Z · LW(p) · GW(p)

Doesn't change my view that you can't "meaningfully" say what you have no grounds for knowing. In my view of the world, which is that it isn't ruled by dream logic and the past is fixed whether we can know certain things about it or not, Abraham Lincoln either did or did not masturbate on his 15th birthday. But no one can "meaningfully" (in the sense in which I've used the word) say that he did or didn't.

Anyway, going back a bit, there are problems with "elegance", like a lack of agreement on what it means, unless you say "elegance as defined by ..." (assuming you can come up with a coherent definition). Then you could say there is a set of all elegant mathematical proofs whether anyone has ever thought of them or not. (Maybe not really, because Gödel, I believe, shows that no matter how hard we try, we will never have a "well defined" (let alone the one and only correct) specification of the set of all mathematical proofs.)

Replies from: MugaSofer, Strange7
comment by MugaSofer · 2012-12-13T19:28:54.515Z · LW(p) · GW(p)

Abraham Lincoln either did or did not masturbate on his 15th birthday. But no one can "meaningfully" (in the sense in which I've used the word) say that he did or didn't.

OK, that's not the local definition of "meaningful". That explains the confusion.

there are problems with "elegance", like a lack of agreement on what it means, unless you say "elegance as defined by ..."

Well, yeah. But we can look at proofs and sort 'em into "elegant" and "inelegant", I guess, so presumably there are criteria buried somewhere in our circuitry. Doubtless inordinately complex ones.

comment by Strange7 · 2012-12-14T01:45:32.916Z · LW(p) · GW(p)

Conceivably someone could have observed Lincoln's activities (or lack thereof) at the relevant time and written down their observations. Such a record might still exist, but not be known to historians, let alone the general public; and yet anyone who had read it would be able to meaningfully say.

Replies from: HalMorris
comment by HalMorris · 2012-12-14T03:47:59.520Z · LW(p) · GW(p)

It's conceivable the way it's conceivable that the English upper class are giant lizards in disguise. If you've read much 19c history and sources, you should know that nobody said anything about anybody masturbating or not, and Lincoln at that time probably lived a mile from his nearest neighbour.

Lincoln is an interesting example because if you read enough biographies of him, it becomes funny just how much mileage people can get out of the most trivial and dubious piece of evidence about his early life.

Anyway, the past is full of things that either happened or didn't -- at least I don't believe they're like Schrödinger's cat -- but which we'll never know whether they did or not.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-14T12:05:30.742Z · LW(p) · GW(p)

It's conceivable the way it's conceivable that the English upper class are giant lizards in disguise.

Yup. That's generally considered a form of conceivable, at least around here.

(You might want to try lurking around, reading sequences and interesting comments, at least until you absorb the local jargon, assumptions, and so on. Learning from experience probably works, but it has a high cost in karma, or even regular reputation if you're lucky.)

Replies from: HalMorris
comment by HalMorris · 2012-12-14T16:39:17.365Z · LW(p) · GW(p)

Yeah. Karma is good. I've never put a bumper sticker on my car but if I did it would probably say either

My Karma Ran Over My Dogma

or

Question Bumper-sticker Slogans

I have a mental block for reading the instruction manual, and a strong prejudice towards experimentalism, so while over time I'm sure to soak up a lot of the threads, you'll probably see me going on my bumptious way.

Thanks

Replies from: HalMorris
comment by HalMorris · 2012-12-14T16:49:29.770Z · LW(p) · GW(p)

P.S. As an online book dealer, I've spent most of the last 11 years working alone and losing my social skills. While I'm sure to make mistakes, it's exhilarating to be talking on a forum where the responses are above the level of "poopy-head".

comment by lavalamp · 2012-12-12T16:54:52.408Z · LW(p) · GW(p)

By my reading, the meaning of that statement is that EY is claiming that elegance is (at least partially) objective.

Replies from: HalMorris
comment by HalMorris · 2012-12-12T18:07:45.546Z · LW(p) · GW(p)

Didn't I cover that? ("Granted it rather looked like that's the direction in which EY's argument was going")? Did you really read what I wrote?

But I think a statement like "you could say 2 x 3 = 6" would sound funny.