Comments

Comment by Frank_Hirsch on [deleted post] 2008-06-29T16:13:00.000Z
[Cyan wrote:] In reply to Q1, I'd want to introduce new terminology like "implicit understanding" and "explicit understanding" (paralleling the use of that terminology in reference to memory).

You mean like the distinction between competence and performance?

Comment by Frank_Hirsch on [deleted post] 2008-06-29T14:55:00.000Z

Laura: In a comment marked as general I do not expect to find a sharply asymmetric statement about a barely (if at all) asymmetric issue.

Comment by Frank_Hirsch on [deleted post] 2008-06-29T04:22:00.000Z
[Laura ABJ:] While I think I have insight into why a lot of men might FAIL with women, that doesn't mean I get THEM...

You are using highly loaded and sexist language. Why is it only the men who fail with the women? Canst thou not share in the failure, because thou art so obviously superior?

Comment by Frank_Hirsch on [deleted post] 2008-06-29T03:13:00.000Z

Q:
Did Sarah understand Mike? She could articulate important differences, but seemed unable to act accordingly, to accept his actions, to communicate her needs to him, or even to understand why V-Day went sour.
A:
Sarah and Mike seem to be in exactly the same position. Either they learn it or they learn to live with it. Or not.

Q:
Question #2: How far does understanding need to go? Some understanding of differences is helpful, but only when it's followed by acceptance of the differences. That's an attitude rather than an exercise in logic.
A:
This is even stranger than #1. Sorry, does not compute.


Comment by Frank_Hirsch on [deleted post] 2008-06-29T02:35:00.000Z

Well, I wonder how gender is actually defined, if six have been claimed.
Can you give a line on the model which is used?
My very rough first model allows for (2+n)·(2+m)·2·2·2 combinations. That's at least 32 for the corner cases alone. I say: if it's worth doing, then it's worth doing right.

Comment by Frank_Hirsch on Timeless Control · 2008-06-09T09:19:48.000Z · LW · GW
[HA wrote:] Frank, Demonstrated instances of illusory free-will don't seem to me to be harder or easier to get rid of than the many other demonstrated illusory cognitive experiences. So I don't see anything exceptional about them in that regard.

HA, I do. It is a concept I suspect we are genetically biased to hold, an outgrowth of the distinction between subject (has a will) and object (has none). Why are we biased to do so? Because, largely, it works very well as a pattern for explanations about the world. We are built to explain the world using stories, and these stories need actors. Even when you are convinced that choice does not exist, you'll still be bound to make use of that concept, if only for practical reasons. The best you can do is try to separate the "free" from the "choice" in an attempt to avoid the flawed connotation. But we have trouble conceptualising choice if it's not free; because then, how could it be a choice? All that said, I seem to remember someone saying something like: "Having established that there is no such thing as a free will, the practical thing to do is to go on and pretend there was."

Comment by Frank_Hirsch on Timeless Control · 2008-06-08T21:34:24.000Z · LW · GW

HA: How come you think I defend any "non-illusory human capacity to make choices"? I am just wondering why the illusion seems so hard to get rid of. Did I fail so miserably at making my point clear?

Comment by Frank_Hirsch on Timeless Control · 2008-06-07T14:45:14.000Z · LW · GW
[Eliezer wrote:] If your mind contains the causal model that has "Determinism" as the cause of both the "Past" and the "Future", then you will start saying things like, "But it was determined before the dawn of time that the water would spill - so not dropping the glass would have made no difference".

Nobody could be that screwed up! Not dropping the glass would not have been an option. =)

About all that free-will stuff: The whole "free will" hypothesis may be so deeply rooted in our heads because the explanatory framework of identifying agents with beliefs about the world, objectives, and the "will" to change the world according to these beliefs and objectives just works so remarkably well. Much like Newton's theory of gravity: in terms of the ratio of predictive_accuracy_in_standard_situations to operational_complexity, Newton's gravity kicks donkey. So does the Free Will (TM). But that don't mean it's true.

Comment by Frank_Hirsch on Living in Many Worlds · 2008-06-05T11:39:51.000Z · LW · GW

steven: Too much D&D? I prefer chaotic neutral... Hail Eris! All hail Discordia! =)

Comment by Frank_Hirsch on Timeless Identity · 2008-06-03T09:32:58.000Z · LW · GW
[Eliezer says:] And if you're planning to play the lottery, don't think you might win this time. A vanishingly small fraction of you wins, every time.

I think this is, strictly speaking, not true. A more extreme example: While I was recently talking with a friend, he asserted that "In one of the future worlds, I might jump up in a minute and run out onto the street, screaming loudly!". I said: "Yes, maybe, but only if you are already strongly predisposed to do so. MWI means that every possible future exists, not every arbitrary imaginable future." Although your assertion in the case of the lottery is much weaker, I don't believe it's strictly true.

Comment by Frank_Hirsch on Principles of Disagreement · 2008-06-02T22:13:26.000Z · LW · GW

The Taxi anecdote is ultra-geeky - I like that! ;-)

Also, once again I accidentally commented on Eliezer's last entry, silly me!

Comment by Frank_Hirsch on The Rhythm of Disagreement · 2008-06-02T21:35:53.000Z · LW · GW
[Unknown wrote:] [...] you should update your opinion [to] a greater probability [...] that the person holds an unreasonable opinion in the matter. But [also to] a greater probability [...] that you are wrong.

In principle, yes. But I see exceptions.

[Unknown wrote:] For example, since Eliezer was surprised to hear of Dennett's opinion, he should assign a greater probability than before to the possibility that human level AI will not be developed with the foreseeable future. Likewise, to take the more extreme case, assuming that he was surprised at Aumann's religion, he should assign a greater probability to the Jewish religion, even if only to a slight degree.

Well, admittedly, the Dennett quote depresses me a bit. If I were in Eliezer's shoes, I'd probably also choose to defend my stance - you can't dedicate your life to something with just half a heart!

About Aumann's religion: That's one of the cases where I refuse to adjust my assigned probability one iota. His belief about religion is the result of his prior alone. So is mine, but it is my considered opinion that my prior is better! =)

Also, if I may digress a bit, I am sceptical about Robin's hypothesis that humans in general update too little from other people's beliefs. My first intuition about this was that the opposite was the case (because of premature convergence and resistance to paradigm shifts). After having second thoughts, I believe the amount is probably just about right. Why? 1) Taking other people's beliefs as evidence is an evolved trait, and so, probably, is the approximate amount. 2) Evolution is smarter than I (and Robin, I presume).

Comment by Frank_Hirsch on The Rhythm of Disagreement · 2008-06-02T06:03:17.000Z · LW · GW

Unknown: Well, maybe yeah, but so what? It's just practically impossible to completely re-evaluate every belief you hold whenever someone asserts that belief to be wrong. That's nothing at all to do with "overconfidence", but everything to do with sanity. The time to re-evaluate your beliefs is when someone gives a possibly plausible argument about the belief itself, not just an assertion that it is wrong. E.g. whenever someone argues anything, and the argument is based on the assumption of a personal god, I dismiss it out of hand without thinking twice - sometimes I do not even take the time to hear them out! Why should I, when I know it's gonna be a waste of time? Overconfidence? No, sanity!

Comment by Frank_Hirsch on A Premature Word on AI · 2008-06-01T22:38:00.000Z · LW · GW

Nick:
I thought the assumption was that SI is too S to get any ideas about world domination?

Comment by Frank_Hirsch on A Premature Word on AI · 2008-06-01T21:48:00.000Z · LW · GW

Makes me think:
Wouldn't it be rather advisable if, instead of heading straight for a (risky) AGI, we worked on (safe) SIs and then had them solve the problem of Friendly AGI?

Comment by Frank_Hirsch on Configurations and Amplitude · 2008-04-10T12:11:00.000Z · LW · GW

botogol:

Eliezer (and Robin) this series is very interesting and all, but.... aren't you writing this on the wrong blog?

I have the impression Eliezer writes blog entries in much the same way I read Wikipedia: Slowly working from A to B in a grandiose excess of detours... =)

Comment by Frank_Hirsch on Quantum Explanations · 2008-04-09T23:11:43.000Z · LW · GW

Wow, good teaser for sure! /me is quivering with anticipation ^_^

Comment by Frank_Hirsch on The Generalized Anti-Zombie Principle · 2008-04-06T17:56:52.000Z · LW · GW

Caledonian:

One of the very many problems with today's world is that, instead of confronting the root issues that underlie disagreement, people simply split into groups and sustain themselves on intragroup consensus. [...] That is an extraordinarily bad way to overcome bias.

I disagree. What do we have to gain from bringing each and everyone in line with our own beliefs? While it is arguably a good thing to exchange our points of view, and how we are rationalising them, there will always be issues where the agreed evidence is just not strong enough to refute all but one way to look at things. I believe that sometimes you really do have to agree to disagree (unless all participants espouse Bayesianism, that is), and move on to more fertile pastures. And even if all participants in a discussion claim to be rationalists, sometimes you'll either have to agree that someone is wrong (without agreeing on who it is, naturally) or waste time you could have spent on more promising endeavours.

Comment by Frank_Hirsch on The Generalized Anti-Zombie Principle · 2008-04-06T14:24:01.000Z · LW · GW

Will Pearson [about tiny robots replacing neurons]: "I find this physically implausible."

Um, well, I can see it would be quite hard. But that doesn't really matter for a thought experiment. To ask "What would it be like to ride on a light beam?" is about as physically implausible as it gets, but it seems to have produced a few rather interesting insights.

Comment by Frank_Hirsch on The Generalized Anti-Zombie Principle · 2008-04-06T01:42:22.000Z · LW · GW

[Warning: Here be sarcasm] No! Please let's spend more time discussing dubious non-disprovable hypotheses! There's only a gazillion more to go, then we'll have convinced everyone!

Comment by Frank_Hirsch on Zombie Responses · 2008-04-05T14:41:11.000Z · LW · GW

Apart from Occam's Razor (multiplying entities beyond necessity) and Bayesianism (arguably low prior and no observation possible), how about the identity of indiscernibles: Anything inconsequential is indiscernible from anything that does not exist at all, therefore inconsequential equals nonexistent.

Admittedly, zombiism is not really irresistibly falsifiable... but that's only yet another reason to be sceptical about it! There are gazillions of that kind of theory floating around in the observational vacuum. You can pick any one of those, if you want to indulge your need to believe that kind of stuff, and watch those silly rationalists try to disprove you. A great pastime for boring parties!

Also, the concept of identity is twisted beyond recognition by zombiism: The physical me causes the existence of something outside of the physical me, which I define to be the single most important part of me. Huh?

Also, anyone to answer my earlier question? I asked: Can epiphenomenal things cause nothing at all, or can they (as physical things can) cause other epiphenomenal things? Maybe Richard, as our expert zombiist, might want to relieve me of my ignorance?

[Sorry for double posting in "Zombies! Zombies?" and here, but I didn't realise discussion had already moved on.]

Comment by Frank_Hirsch on Zombies! Zombies? · 2008-04-05T13:40:00.000Z · LW · GW

Apart from Occam's Razor (multiplying entities beyond necessity) and Bayesianism (arguably low prior and no observation possible), how about the identity of indiscernibles:
Anything inconsequential is indiscernible from anything that does not exist at all, therefore inconsequential equals nonexistent.

Admittedly, zombiism is not really irresistibly falsifiable... but that's only yet another reason to be sceptical about it! There are gazillions of that kind of theory floating around in the observational vacuum. You can pick any one of those, if you want to indulge your need to believe that kind of stuff, and watch those silly rationalists try to disprove you. A great pastime for boring parties!

Also, the concept of identity is twisted beyond recognition by zombiism:
The physical me causes the existence of something outside of the physical me, which I define to be the single most important part of me. Huh?

Btw, anyone to answer my question further above?
I asked: Can epiphenomenal things cause nothing at all, or can they (as physical things can) cause other epiphenomenal things?
Maybe Richard, as our expert zombiist, might want to relieve me of my ignorance?

Comment by Frank_Hirsch on Zombies! Zombies? · 2008-04-05T01:10:45.000Z · LW · GW

I must say I found this rather convincing (but I might just be confirmation biased). Also, I have a question on the topic: The zombiists assume that the universe U of existing things is split into two exclusive parts, physical things P and epiphenomenal things E. The physical things P probably develop something like P(t+1)=f(P(t),noise), as we have defined that E does not influence P. But what does E develop like? Is it E(t+1)=f(P(t)[,noise]), or is it E(t+1)=f(P(t),E(t)[,noise])? I have somehow always assumed the first, but I do not remember having read it spelt out so unmistakably.
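
To make the two candidate laws concrete, here is a minimal Python sketch. The state types and update rules are made-up placeholders; only the dependence structure matters, and by assumption P never depends on E:

```python
import random

def step_P(P):
    return P + random.choice([-1, 1])     # P(t+1) = f(P(t), noise)

def E_memoryless(P):
    return 2 * P                          # E(t+1) = f(P(t)): E just shadows P

def E_with_history(P, E):
    return 2 * P + E                      # E(t+1) = f(P(t), E(t)): E has its own inertia

P, E1, E2 = 0, 0, 0
for t in range(5):
    print(t, P, E1, E2)
    E1, E2 = E_memoryless(P), E_with_history(P, E2)
    P = step_P(P)
```

Under the first law, E is a pure readout of P; under the second, E accumulates its own (still causally impotent) history.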

Comment by Frank_Hirsch on Hand vs. Fingers · 2008-03-30T18:15:32.000Z · LW · GW

Richard: Yes, there is a reality beyond reality! Sure, it's not real in the sense that it is measurable or measurably interacts with our drab scientific reductionist reality, but it's... real! Really! I can feel it! So speak the Searle-addled...

Comment by Frank_Hirsch on Hand vs. Fingers · 2008-03-30T17:30:44.000Z · LW · GW

Caledonian: "Since we can't extrapolate our physics that far, we don't know whether they're truly compatible with our understanding of physics or not." For the sake for argument, I'll let that stand (as a conflict of minor importance). Still, why should we go and assume a non-reductionist model? That's multiplying entities beyond necessity.

Comment by Frank_Hirsch on Hand vs. Fingers · 2008-03-30T16:49:18.000Z · LW · GW

Caledonian: Sure you do. That's why we have biology and chemistry and neuroscience instead of having only one field: physics.

That's just a matter of efficiency (as I have tried to illuminate). There is nothing about those high level descriptions that is not compatible with physics. They are often more convenient and practical, but they do not add one iota of explanatory power.

Comment by Frank_Hirsch on Hand vs. Fingers · 2008-03-30T16:17:15.000Z · LW · GW

PK: I don't see the ++ in your nice example, it's perfectly valid C... =)

Caledonian, Ian C.: I know of no models of reality that have greater explanatory power than the standard reductionist one-level-to-bind-them-all position (apologies for the pun). So why add more? In a certain way "our maps [are] part of reality too", but not in any fundamental sense. To simulate a microchip doing an FFT, it's quite sufficient to simulate the physical processes in its logic gates. You need not even know what the chip is actually supposed to do. You just need a very precise description of the chip. If you do know what it's doing, it's of course much more efficient to directly use the same algorithm it is also using. That will also dramatically cut down on the length of its description. But that does not make the FFT algorithm fundamental in any way. It is just a way to look at what is happening. I mean, really, this shouldn't be so hard to grasp...
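
If it helps, here is a small sketch of that point (numpy assumed; not gate-level simulation, but the same moral): a naive transcription of the DFT definition and a library FFT are two very different descriptions, at two levels of convenience, of exactly the same function.

```python
import numpy as np

def naive_dft(x):
    """Direct O(n^2) transcription of the DFT definition --
    the inefficient 'low level' description."""
    n = len(x)
    k = np.arange(n)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * m / n))
                     for m in range(n)])

x = np.random.rand(64)
# The efficient 'high level' description agrees to numerical precision,
# adding convenience but no explanatory power:
print(np.allclose(naive_dft(x), np.fft.fft(x)))   # True
```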

Comment by Frank_Hirsch on Explaining vs. Explaining Away · 2008-03-20T00:35:33.000Z · LW · GW

• Sarah is hypnotized and told to take off her shoes when a book drops on the floor. Fifteen minutes later a book drops, and Sarah quietly slips out of her loafers. “Sarah,” asks the hypnotist, “why did you take off your shoes?” “Well . . . my feet are hot and tired,” Sarah replies. “It has been a long day.”
• George has electrodes temporarily implanted in the brain region that controls his head movements. When neurosurgeon José Delgado (1973) stimulates the electrode by remote control, George always turns his head. Unaware of the remote stimulation, he offers a reasonable explanation for it: “I’m looking for my slipper.” “I heard a noise.” “I’m restless.” “I was looking under the bed.”

The point is: That's how the brain works, always. It is only in special circumstances, like the ones described, that the fallaciousness of these "explanations from hindsight" becomes obvious.

Comment by Frank_Hirsch on Explaining vs. Explaining Away · 2008-03-19T16:22:08.000Z · LW · GW

Frank Hirsch: How do you propose to lend credibility to your central tenet "If you seem to have free will, then you have free will"?

Ian C.: I'm not deducing (potentially wrongly) from some internal observation that I have free will. The knowledge that I chose is not a conclusion, it is a memory. If you introspect on yourself making a decision, the process is not (as you would expect): consideration (of pros and cons) -> decision -> option selected. It is in fact: consideration -> 'will' yourself to decide -> knowledge of option chosen + memory of having chosen it. The knowledge that you chose is not worked out, it is just given to you directly. So there is no scope for you to err.

No scope to err? Surely you know that human memory is just about the least reliable source of information you can appeal to? Much of what you seem to remember about your decision process is constructed in hindsight to explain your choice to yourself. There is a nice anecdote about what happens if you take that hindsight away:

In an experiment, psychologist Michael Gazzaniga flashed pictures to the left half of the field of vision of split-brain patients. Being shown the picture of a nude woman, one patient smiles sheepishly. Asked why, she invents — and apparently believes — a plausible explanation: “Oh — that funny machine”. Another split-brain patient has the word “smile” flashed to his nonverbal right hemisphere. He obliges and forces a smile. Asked why, he explains, “This experiment is very funny”.

So much for evidence from introspective memory...

Comment by Frank_Hirsch on Explaining vs. Explaining Away · 2008-03-18T13:17:00.000Z · LW · GW

Frank Hirsch: "I don't think you can name any observations that strongly indicate (much less prove, which is essentially impossible anyway) that people have any kind of "free will" that contradicts causality-plus-randomness at the physical level."

Ian C.: More abstract ideas are proven by reference to more fundamental ones, which in turn are proven by direct observation. Seeing ourselves choose is a direct observation (albeit an introspective one). If an abstract theory (such as the whole universe being governed by billiard ball causation) contradicts a direct observation, you don't say the observation is wrong, you say the theory is.

Yikes! You are saying that because it seems to you inside your mind that you had freedom of choice, it must automagically be so? Your "observation" is that there seems to be free will. Granted! I make the same observation. But this does not in any way bear on the facts. How do you propose to lend credibility to your central tenet "If you seem to have free will, then you have free will"? To this guy it seemed he was emperor of the USA, but that didn't make it true. Also, how will you go and physically explain this free will thing? All things we know are either deterministic or random. If you plan to point at randomness and cry "Look! Free will!", we had better stop here. Or were you thinking about the pineal gland?

Comment by Frank_Hirsch on Explaining vs. Explaining Away · 2008-03-17T10:10:28.000Z · LW · GW

Nominull: I believe Eliezer would rather be called Eliezer...

Ian C.: We observe a lack of predictability at the quantum level. Do quarks have a free will? (Yup, a shameless rip-off of Doug's argument, tee-hee! =) Btw, I don't think you can name any observations that strongly indicate (much less prove, which is essentially impossible anyway) that people have any kind of "free will" that contradicts causality-plus-randomness at the physical level.

Comment by Frank_Hirsch on Reductionism · 2008-03-17T09:27:07.000Z · LW · GW

[Eliezer wrote:] I wish I knew where Reality got its computing power.

Hehe, good question that one. Incidentally, I'd like to link this rather old thing just in case anyone cares to read more about reality-as-computation.

Comment by Frank_Hirsch on The Quotation is not the Referent · 2008-03-13T23:00:33.000Z · LW · GW

I know a really bad one which nearly turned my stomach: Some newspaper wrote "Survey uncovers that X's have the property Y!" (I forget the details). I read the article and it turned out that, according to some survey, most people believe that X's have the property Y. Argh!

Comment by Frank_Hirsch on Righting a Wrong Question · 2008-03-11T13:20:53.000Z · LW · GW

[James Blair wrote:] Frank, what does that have to do with the quality of the paper I linked?

James, everything. The paper looks very much like the book in a nutshell plus an actual experiment. What does the paper have to do with "And I know you didn't simply leave out an explanation that exists somewhere, because such understanding would probably mean a solution for the captcha problem."? I find these 13- and 12-year-old papers more exciting. And here is some practical image recognition (although no general captcha) stuff.

Comment by Frank_Hirsch on Righting a Wrong Question · 2008-03-10T07:17:04.000Z · LW · GW

James Blair: I've read JH's "On Intelligence" and find him overrated. He happens to be well known, but I have yet to see his results beating other people's results. Pretty theories are fine with me, but ultimately results must count.

Comment by Frank_Hirsch on Variable Question Fallacies · 2008-03-06T09:09:32.000Z · LW · GW

Oh, and the Liar Paradox makes much more sense once we overcome our obsession about recursion: If we take the equally valid stance of viewing it as an iteration, it is easy to see that the whole problem is that the proposition does not converge; that's all there is to it.
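
A minimal sketch of that iteration reading (the sentence's self-reference unrolled into repeated evaluation):

```python
# "This sentence is false" read as an iteration instead of a recursion:
# each evaluation feeds on the previous one. The sequence just
# oscillates forever; there is no fixed point, i.e. no convergence.
truth = True                     # arbitrary starting guess
for step in range(6):
    truth = not truth            # the sentence asserts its own falsehood
    print(step, truth)           # alternates False, True, False, ...
```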

Comment by Frank_Hirsch on Variable Question Fallacies · 2008-03-06T09:02:13.000Z · LW · GW

I think the trouble about "Have you stopped beating your wife?" is that it is not about a state but about a state transition. It asks "10?", and the answer "no" really leaves three possibilities open (including that the questionee has recently started beating his wife). The sentence structure implies a false choice between answers 10 and 11, because we are used to asking (and answering) yes/no questions about 1-bit issues while here we deal with a 2-bit issue. But you probably knew all that... =)
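
Spelled out as an enumeration (a small illustrative sketch, using the same two-bit notation):

```python
# Two bits: (was beating, is beating now). The question asks about the
# transition "10": was beating before, has now stopped.
states = ["00", "01", "10", "11"]

yes_leaves = [s for s in states if s == "10"]
no_leaves  = [s for s in states if s != "10"]
print("yes:", yes_leaves)   # ['10']
print("no: ", no_leaves)    # ['00', '01', '11'] -- three possibilities,
                            # including '01': recently *started* beating
```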

Comment by Frank_Hirsch on Leave a Line of Retreat · 2008-02-26T02:42:04.000Z · LW · GW

[having read the comments]

Kriti et al: I'd recommend this and this to anybody who hasn't already read it. Otherwise I have no further ideas for introductory texts right now.

Comment by Frank_Hirsch on Leave a Line of Retreat · 2008-02-26T02:10:46.000Z · LW · GW

[Without having read the comments]

WTF? You say: [...] I was actually advised to post something "fun", but I'd rather not [...]

I think it was fun!

BTW, could we increase the probability of people being honest by basing reward not on individual choices, but on the log-likelihood over a sample of similar choices? (For a given meaning of similar.)
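
A rough sketch of what such a reward scheme might look like (hypothetical numbers; the log-likelihood is a proper scoring rule, so stating your honest probability maximises your expected reward):

```python
import math

def log_score(predictions):
    """Total log-likelihood of a batch of (stated_probability, outcome)
    pairs. Honesty maximises the expected score."""
    return sum(math.log(p if outcome else 1 - p)
               for p, outcome in predictions)

# Hypothetical batch: an honest 70%-confident forecaster vs. an
# overclaiming one, on events that in fact come up true 70% of the time.
outcomes  = [True] * 7 + [False] * 3
honest    = [(0.70, o) for o in outcomes]
overclaim = [(0.99, o) for o in outcomes]
print(log_score(honest), log_score(overclaim))   # honest scores higher
```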

Comment by Frank_Hirsch on Mutual Information, and Density in Thingspace · 2008-02-24T10:50:54.000Z · LW · GW

tcpkac: The important caveat is : 'boundaries around where concentrations of unusually high probability density lie, to the best of our knowledge and belief' . All the imperfections in categorisation in existing languages come from that limitation.

This strikes me as a rather bold statement, but "to the best of our knowledge and belief" might be fuzzy enough to make it true. Some specific factors that distort our language (and consequently our thinking) might be:

  • Probability shifts in thingspace invalidating previously useful clusterings. Natural languages need time to adapt, and dictionary writers tend to be conservative.
  • Cognitive biases that distort our perception of thingspace. Very on topic here, I suppose. ^_^
  • Manipulation (intended and unintended). Humans treat articulations from other humans as evidence. That can go so far that authentic contrary evidence is explained away using confirmation bias.

Other problems in categorisation, [...] do not come from language problems in categorisation, [...] but from different types of cognitive compromise.

Well, lack of consistency in important matters seems to me to be a rather bad sign.

It would also lack words for the surprising but significant improbable phenomenon. Like genius, or albino. Then again, once you get around to saying you will have words for significant low hills of probability, the whole argument blows away.

I don't think so. Once the most significant hills have been named, we go on and name the next significant hills. We just choose longer names.
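
That is essentially what prefix codes do: the biggest probability hills get the shortest codewords, the smaller hills get the longer names. A minimal Huffman sketch with made-up frequencies:

```python
import heapq

def huffman(freqs):
    """Build a prefix code: frequent symbols get short codes,
    rare ones get longer ones. Frequencies are made up for illustration."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)                       # unique tie-breaker
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Common things get the short names; "genius" and "albino" get longer ones:
print(huffman({"dog": 50, "cat": 30, "genius": 15, "albino": 5}))
```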

Comment by Frank_Hirsch on Entropy, and Short Codes · 2008-02-23T11:10:54.000Z · LW · GW

Okay, now let's code those factory objects!
1 bit for blue not red
1 bit for egg not cube
1 bit for furred not smooth
1 bit for flexible not hard
1 bit for opaque not translucent
1 bit for glows not dark
1 bit for vanadium not palladium

Nearly all objects we encounter code either 1111111 or 0000000. So we compress all objects into two categories and define: 1 bit for blegg (1111111) not rube (0000000). But, alas, the compression is not lossless, because there are objects which are neither perfect bleggs nor rubes: A 1111110 object will be innocently accused of containing vanadium, because it is guilty by association with the bleggs, subjected to unfair kin liability! Still, in an environment where our survival depends on how faithfully we can predict unobserved features of those objects we stand good chances:

Nature: "I have here an x1x1x1x object, what is at it's core?" We suspect a blegg and guess Vanadium - and with 98% probability we are right, and nature awards us a pizza and beer.

Now the evil supervillain, I-can-define-any-way-I-like-man (Icdawil-man, for short), comes by and says: "I will define my categories thus: 1 bit for regg (0101010) not blube (1010101)". While he will achieve the same compression ratio, he loses about 1/2 of the information in the process. He has failed to carve at the joint. So much the worse for Icdawil-man.

Nature: "I have here an x1x1x1x object, what is at it's core?" Icdawil-man suspects a regg, guesses Palladium, and with 98% probability starts coughing blood...

Next along comes the virtuous and humble I-refuse-to-compress-man:

Nature: "I have here an x1x1x1x object, what is at it's core?" Irtc-man refuses to speculate and is awarded a speck in his eye.

Next along comes the brainy I-have-all-probabilities-stored-here-because-I-can-man:

Nature: "I have here an x1x1x1x object, what is at it's core?" Ihapshbic-man also gets a pizza and beer, but will sooner be hungry again than we will. That's because of all the energy he needs for his humongous brain which comes in an extra handcart.

Any more contenders? =)
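
For fun, here is a rough Python sketch of the contest. The 2% noise rate and the fully observed surface are assumed simplifications (the original game only reveals an x1x1x1x pattern), but the moral comes out the same: carving at the joint predicts the core almost perfectly, the regg/blube carving does no better than chance.

```python
import random

rng = random.Random(0)

def sample_object():
    """One factory object: a core bit (vanadium=1 / palladium=0) plus six
    surface bits that each match the core except for occasional noise."""
    core = rng.choice([0, 1])
    surface = [core if rng.random() < 0.98 else 1 - core for _ in range(6)]
    return surface, core

def blegg_rube_guess(surface):
    # Carving at the joint: read the core off the surface-bit majority.
    return 1 if sum(surface) >= 3 else 0

def icdawil_guess(surface):
    # Regg/blube carving: compare against the alternating prototype
    # 010101 and read off regg's core bit (palladium) on a match.
    regg_matches = sum(b == p for b, p in zip(surface, [0, 1, 0, 1, 0, 1]))
    return 0 if regg_matches >= 3 else 1

trials = [sample_object() for _ in range(10000)]
for name, guess in [("blegg/rube", blegg_rube_guess),
                    ("regg/blube", icdawil_guess)]:
    hits = sum(guess(s) == c for s, c in trials)
    print(name, hits / len(trials))   # ~1.0 vs ~0.5
```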

Comment by Frank_Hirsch on Where to Draw the Boundary? · 2008-02-21T22:38:07.000Z · LW · GW

Just a small one, because I can't hold it: You can't judge the usefulness of a definition without specifying what you want it to be useful for. And now I'm off to bed... =)

Comment by Frank_Hirsch on Arguing "By Definition" · 2008-02-21T22:17:18.000Z · LW · GW

Hi, am back from the city, and a bit sleepy. I'll try my best with my comment. =)

Michael: I was not so much commenting on this specific post as on the whole series. Your example seems to me to boil down to a case of bait-and-switch.

Eliezer: "When people start violently arguing over their communication signals while they (a) understand what each other are trying to say". Here the problem is already at full swing, and it's the same as philosophers arguing about the "real" definition of X. As soon as you have managed to get your point across, any further insistence, or even "violent arguing", only shows lack of insight or sincerity. "and (b) are trying to do an inference that they could theoretically do as single players, something has gone wrong". I see no problem about inferences as long as it's clear to everyone what the inference is about (and nobody tries to sneak a switch later).

Comment by Frank_Hirsch on Arguing "By Definition" · 2008-02-21T16:07:15.000Z · LW · GW

Ben: I think you're right, we are on the same page! =) How about "Useful definitions will still be distorted by our mental mechanisms. Malignant and careless definitions are bad no matter what."?

Comment by Frank_Hirsch on Arguing "By Definition" · 2008-02-21T15:38:53.000Z · LW · GW

Rolf: "What do you think of, say, philosophers' endless arguments of what the word "knowledge" really means?" I think meh!

"This seems to me one example where many philosophers don't seem to understand that the word doesn't have any intrinsic meaning apart from how people define it." Well, if they like to do so, let 'em. At least they're off the streets. =) What's worse is the kind of philosophers who flourish by sidestepping honest debate by complicating matters until nobody (including themselves) can possibly tell a left hand from a right foot anymore, and then go on to declare victory. Definitions belong to their toolset, too. But are we going to argue against knives because the malignant can hurt others with them, and the ignorant or plain unlucky even themselves? We need them to carve the turkey, so if we want turkey slices we'll just have to operate carefully. I, for one, want to keep my knife!

"Presumably Eliezer would ask, "for what purpose do we want to answer the question?" However, many philosophers would prefer to unconstructively argue what semantics are "correct". So my personal experience is that I don't think Eliezer's attacking a straw man here." He is if he is going to throw out the baby with the bathwater. He'd have to write "Careless/malignant use of definitions is bad.", not just "Definitions are bad." (which is my perception).

Comment by Frank_Hirsch on Arguing "By Definition" · 2008-02-21T11:07:47.000Z · LW · GW

Eliezer, I must admit I really don't get your problem with definitions. Or, more precisely, I can't get myself to share it. It seems to me you attack definitions mainly because they enable malignant (and/or confused) arguers to do a bait-and-switch. Without defining what is being talked about, there is no obvious switching anymore, so that seems to be your solution. But to me that is like leaving an important variable unbound, which makes the whole argument underdefined and therefore practically worthless.

IMHO it is precisely because two people have a common conception of what they are talking about that they can communicate at all. Definitions help to make important key concepts sharply and clearly - uhm - defined. When someone uses a "definition" which makes little or no practical sense, just go and call 'em on that! When someone does a bait-and-switch, call 'em! But when people argue without defining what they're arguing about, what you gonna do?

Apart from that, both "I can define that thing any way I want." and "It's in the dictionary." have a smell of straw-men. If someone goes "I can define that thing any way I want." then just insist on the exact same definition when they draw their conclusions - be a djinn! Don't give in to what they wish (or think) they had defined, but to what they did, and tread rickety would-be conclusions to shambles! If someone goes "It's in the dictionary.", ah well... find someone else to talk to... =)

Comment by Frank_Hirsch on The "Intuitions" Behind "Utilitarianism" · 2008-02-10T15:07:00.000Z · LW · GW

Eisegetes:
"Moral" is a category of meaning whose content we determine through social negotiations, produced by some combination of each person's inner shame/disgust/disapproval registers, and the views and attitudes expressed more generally throughout their society.

From a practical POV, without any ambitions to look under the hood, we can just draw this "ordinary language defense line", as I'd call it. Where it gets interesting from an Evolutionary Psychology POV is exactly those "inner shame/disgust/disapproval registers". The part about "social negotiations" is just so much noise mixed into the underlying signal.
Unfortunately, as I believe we have shown, there is a circularity trap here: When we try to partition our biases into categories (e.g. "moral" and "amoral"), the partitioning depends on the definition, which depends on the partitioning, etc. etc. ad nauseam. I'll try a resolution further down.

Oh, I think a large subset of moral choices are moral precisely because they do benefit our genes -- we say that someone who is a good parent is moral, not immoral, despite the genetic advantages conferred by being a good parent.

Well, this is where I used to prod people with my personal definition. I'd say that good parenting is just Evolutionary Good Sense (TM), so there's no need to muddy the water by sticking the label "moral" to it. Ordinary language does, but I think it's noise (or rather, in this case, a systematic error; more below).

I think some common denominators are altruism (favoring tribe over self, with tribe defined at various scales), virtuous motives, prudence, and compassion. Note that these are all features that relate to our role as social animals -- you could say that morality is a conceptual outgrowth of survival strategies that rely on group action (and hence, become a way to avoid collective action problems and other examples of individual rationality that are suboptimal when viewed from the group's perspective).

I think the ordinary language definition of moral is useless for Evolutionary Psychology and must either be radically redefined in this context or dropped altogether and replaced by something new (with the benefit of avoiding a mixup with the ordinary language sense of the word).
If we take for granted that we are the product of evolutionary processes fed by random variations, we can claim that (to a first approximation) everything about us is there because it furthers its own survival. Specifically, our genetic makeup is the way it is because it tends to produce successful survival machines.
1) Personal egoism exists because it is a useful and simple approximation of gene egoism.
2) For important instances of personal egoism going against gene egoism, we have built-in exceptions (e.g. altruism towards own children and some other social adaptions).
3) But biasing behaviour using evolutionary adaption is slow. Therefore it would be useful to provide a survival machine with a mechanism that is able to override personal egoism using culturally transmitted bias. This proclaimed mechanism is at the core of my definition of morality (and, incidentally, a reasonable source of group selection effects).
4) Traditional definitions of morality are flawed because they confuse/conflate 2 and 3 and oppose them to 1. This duality is deeply mistaken, and must be rooted out if we are to make any headway in understanding ourselves.

Btw, the fun thing about 3 is that it does not only allow us to overcome personal egoism biases (1) but also inclusive fitness biases (2). So morality is exactly that thing that allows us to laugh in the face of our selfish genes and commit truly altruistic acts.
It is an adaption to override adaptions.

Regards, Frank

Comment by Frank_Hirsch on The "Intuitions" Behind "Utilitarianism" · 2008-02-03T23:15:00.000Z · LW · GW

ZMD:
C'mon gimme a break, I said it's not satisfying!
I get your point, but I dare you to come up with a meaningful but unassailable one-line definition of morality yourself!
BTW birth control certainly IS moral, and overeating is just overdoing a beneficial adaption (i.e. eating).

Comment by Frank_Hirsch on The "Intuitions" Behind "Utilitarianism" · 2008-02-03T17:24:00.000Z · LW · GW

Eisegetes:
Well I (or you?) really maneuvered me into a tight spot here.
About those options, you made a good point.
As to the question "Which circuits are moral?": I kind of saw that one coming. If you allow me to mirror it: How do you know which decisions involve moral judgements?
I don't know of any satisfying definition of morality. It probably must involve actions that are tailored for neither personal nor inclusive fitness. I suppose the best I can come up with is "A moral action is one which you choose (== that makes you feel good) without being likely to benefit your genes." Morality is the effect of some adaption that's so flexible/plastic that it can be turned against itself. I admit that sounds rather like some kind of accident.
Maybe I should just give up and go back to being a moral nihilist again... there, now! See what you've made me believe! =)

Comment by Frank_Hirsch on The "Intuitions" Behind "Utilitarianism" · 2008-02-03T09:08:00.000Z · LW · GW

Eisegetes (please excuse the delay):

That's a common utilitarian assumption/axiom, but I'm not sure it's true. I think for most people, analysis stops at "this action is not wrong," and potential actions are not ranked much beyond that. [...] Thus, it is simply wrong to say that we have ordered preferences over all of those possible actions -- in fact, it would be impossible to have a unique brain state correspond to all possibilities. And remember -- we are dealing here not with all possible brain states, but with all possible states of the portion of the brain which involves itself in ethical judgments.

I don't think so. Even if only a few options (or even just one) are actually entertained, a complete ranking of all of them is implicit in your brain. If I asked you if table salt was green, you'd surely answer it wasn't. Where in your brain did you store the information that table salt is not green?
I could make your brain's implicit ordering of moral options explicit with a simple algorithm:
1. Ask for the most moral option.
2. Exclude it from the set of options.
3. While options left, goto 1.
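
In code, the sketch is trivial (the `most_moral` comparator below is a hypothetical stand-in for whatever the brain actually computes):

```python
def explicit_ranking(options, most_moral):
    """Make an implicit ordering explicit: repeatedly ask for the most
    moral remaining option and strike it off."""
    options = set(options)
    ranking = []
    while options:                       # 3. while options left, goto 1.
        best = most_moral(options)       # 1. ask for the most moral option
        options.remove(best)             # 2. exclude it from the set
        ranking.append(best)
    return ranking

# Toy comparator: fewer made-up "harm points" counts as more moral.
harm = {"donate": 0, "do nothing": 1, "steal": 5}
print(explicit_ranking(harm, lambda opts: min(opts, key=harm.get)))
# ['donate', 'do nothing', 'steal']
```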

Interesting, but I think also incomplete. To see why: ask yourself whether it makes sense for someone to ask you, following G.E. Moore, the following question:
"Yes, I understand that X is a action that I am disposed to prefer/regard favorably/etc for reasons having to do with evolutionary imperatives. Nevertheless, is it right/proper/moral to do X?"
In other words, there may well be evolutionary imperatives that drive us to engage in infidelity, murder, and even rape. Does that make those actions necessarily moral? If not, your account fails to capture a significant amount of the meaning of moral language.

That's a confusion. I was explicitly talking of "moral" circuits. Not making a distinction between moral and amoral circuits makes moral a non-concept. (Maybe it is one, but that's also beside the point.) The question "is it moral to do X" just makes no sense without this distinction. (Btw, "right/proper" might just be a different beast than "moral".)