Why Bayes? A Wise Ruling

post by Vaniver · 2013-02-25T15:52:40.847Z · LW · GW · Legacy · 117 comments

Why is Bayes' Rule useful? Most explanations of Bayes explain the how of Bayes: they take a well-posed mathematical problem and convert given numbers to desired numbers. While Bayes is useful for calculating hard-to-estimate numbers from easy-to-estimate numbers, the quantitative use of Bayes requires the qualitative use of Bayes, which is noticing that such a problem exists. When you have a hard-to-estimate number that you could figure out from easy-to-estimate numbers, then you want to use Bayes. This mental process of testing beliefs and searching for easy experiments is the heart of practical Bayesian thinking. As an example, let us examine 1 Kings 3:16-28:

Now two prostitutes came to the king and stood before him. One of them said, “Pardon me, my lord. This woman and I live in the same house, and I had a baby while she was there with me. The third day after my child was born, this woman also had a baby. We were alone; there was no one in the house but the two of us.

“During the night this woman’s son died because she lay on him. So she got up in the middle of the night and took my son from my side while I your servant was asleep. She put him by her breast and put her dead son by my breast. The next morning, I got up to nurse my son—and he was dead! But when I looked at him closely in the morning light, I saw that it wasn’t the son I had borne."

The other woman said, “No! The living one is my son; the dead one is yours.”

But the first one insisted, “No! The dead one is yours; the living one is mine.” And so they argued before the king.

The king said, “This one says, ‘My son is alive and your son is dead,’ while that one says, ‘No! Your son is dead and mine is alive.’”

Notice that Solomon explicitly identified competing hypotheses, raising them to the level of conscious attention. When each hypothesis has a personal advocate, this is easy, but it is no less important when considering other uncertainties. Often, a problem looks clearer when you branch an uncertain variable on its possible values, even if it is as simple as saying "This is either true or not true."

Then the king said, “Bring me a sword.” So they brought a sword for the king. He then gave an order: “Cut the living child in two and give half to one and half to the other.”

The woman whose son was alive was deeply moved out of love for her son and said to the king, “Please, my lord, give her the living baby! Don’t kill him!”

But the other said, “Neither I nor you shall have him. Cut him in two!”

Then the king gave his ruling: “Give the living baby to the first woman. Do not kill him; she is his mother.”

Solomon considers the empirical consequences of the competing hypotheses, searching for a test which will favor one hypothesis over another. When considering one hypothesis alone, it is easy to find tests which are likely if that hypothesis is true. The true mother is likely to say the child is hers; the true mother is likely to be passionate about the issue. But that's not enough; we need to also estimate how likely those results are if the hypothesis is false. The false mother is equally likely to say the child is hers, and could generate equal passion. We need a test whose results significantly depend on which hypothesis is actually true.

Witnesses or DNA tests would be more likely to support the true mother than the false mother, but they aren't available. Solomon realizes that the claimants' motivations differ, and thus putting the child in danger may cause the true mother and false mother to act differently. The test works, generates a large likelihood ratio, and now his posterior firmly favors the first claimant as the true mother.
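
To make the update concrete, here is a minimal sketch of the odds-form calculation in Python. The numbers are invented purely for illustration (the story supplies none); only the structure matters: a roughly even prior over the two claimants, a reaction far more likely from the true mother than from the false one, and a posterior that shifts accordingly.

```python
# Toy Bayesian update for Solomon's test. All probabilities are assumed
# for illustration only; the story itself gives no numbers.

prior = 0.5                  # prior that the first claimant is the true mother
p_protest_if_true = 0.95     # assumed: chance the true mother begs to spare the child
p_protest_if_false = 0.10    # assumed: chance the false mother does the same

likelihood_ratio = p_protest_if_true / p_protest_if_false  # 9.5 in this toy example

# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"likelihood ratio: {likelihood_ratio:.1f}")
print(f"posterior that the first claimant is the mother: {posterior:.2f}")  # ~0.90
```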

When all Israel heard the verdict the king had given, they held the king in awe, because they saw that he had wisdom from God to administer justice.

117 comments

comment by NancyLebovitz · 2013-02-25T16:11:35.260Z · LW(p) · GW(p)

It suddenly occurs to me that the first woman is the right choice for raising the child, regardless of who the birth mother is.

I wonder if Solomon had plans in mind if both women had said the same thing.

Replies from: shminux, wedrifid, Eliezer_Yudkowsky, Benja, Vaniver, army1987
comment by Shmi (shminux) · 2013-02-25T19:26:12.586Z · LW(p) · GW(p)

I wonder if Solomon had plans in mind if both women had said the same thing.

That's what the next pair of claimants did, after learning about the case. That time Solomon's decision was not wise enough to be included in the sacred texts: he sold the baby into slavery and then promptly executed both claimants. Not surprisingly, no further cases like this were brought before the king.

comment by wedrifid · 2013-02-25T16:33:30.326Z · LW(p) · GW(p)

I wonder if Solomon had plans in mind if both women had said the same thing.

Parchment, shears, rock.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2013-02-25T17:53:53.283Z · LW(p) · GW(p)

*stone

Replies from: MaoShan
comment by MaoShan · 2013-02-26T04:19:27.493Z · LW(p) · GW(p)

*carbuncle

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-25T22:43:05.292Z · LW(p) · GW(p)

This is an excellent point I should've noticed myself (though it's been long and long since I encountered the parable). Who says you own a baby just by being its genetic mother?

Albeit sufficiently young babies are plausibly not sentient.

Replies from: None, loup-vaillant, MaoShan, JulianMorrison, Larks, MugaSofer
comment by [deleted] · 2013-03-06T00:50:12.857Z · LW(p) · GW(p)

What definition of "sentient" are you using, such that young babies don't meet it?

Replies from: MugaSofer
comment by MugaSofer · 2013-03-06T14:01:43.232Z · LW(p) · GW(p)

One that also excludes animals, but includes healthy adult humans.

Replies from: None
comment by [deleted] · 2013-03-06T16:48:09.873Z · LW(p) · GW(p)

Bizarre. In lieu of a reply by Eliezer himself clarifying things, I am left to understand he thinks that some portion of humans otherwise possessing the structural and anatomical necessities for sensation don't experience anything even when all their sense organs are working fine, and that animals in general are basically just meat-automata with no inner life at all. Even when they're communicating about those inner states and have the same structural correlates of various sensations we'd expect to see, and react in ways that sure look like expression of sensation or emotion (even if you sometimes need to be familiar with their particular body language).

That feels a lot more like a strawman than anything, because it's just so obviously bollocks. If I step on my cat's tail by mistake, she doesn't yowl and run from me because "Nociceptor activation threshold met; initiate yowl-and-run subroutine." She does it because it's painful and it startled her. I know there are people who honestly believe something like that about nonhuman life across the board, but I hadn't gotten the impression Eliezer was one.

Someone clear this up for me?

Replies from: Desrtopa, None, zslastman, TheOtherDave
comment by Desrtopa · 2013-03-06T17:48:01.746Z · LW(p) · GW(p)

Sentient vs. Sapient is one of the most common word confusions in the English language. If someone says "sentient," but the context appears to suggest "sapient," they probably mean sapient.

Replies from: None
comment by [deleted] · 2013-03-06T20:38:50.283Z · LW(p) · GW(p)

The bit that's bothering me is that "sapient" is a term of art -- it's science fiction shorthand employed with a purpose (it denotes personhood for the reader, in a field where blatantly-nonhuman but unambiguously-personlike entities are common). It divides the field of hypothetical entities into two neat, clean categories: people no matter what their substrate, appearance, anatomy or drives, and everything from animals of every sort to plants and grains of sand.

It just seems like a weird way of dividing up the world, and more of a cultural artefact than anything; a marker on the map which corresponds to nothing in the territory.

comment by [deleted] · 2013-03-06T17:30:34.493Z · LW(p) · GW(p)

People often use 'sentient' to mean 'sapient', and it may be that Eliezer intends the latter. It's at least pretty plausible that animals and very young infants are not sapient, namely not capable of judgement, and that this capacity is what would endow one with a certain autonomy.

Replies from: None, Eliut
comment by [deleted] · 2013-03-06T17:36:11.198Z · LW(p) · GW(p)

"Soul", gotcha. Binary personhood marker. Reified concept not sufficiently unpacked. Okie.

Replies from: shminux
comment by Shmi (shminux) · 2013-03-06T17:43:07.257Z · LW(p) · GW(p)

That's a rather uncharitable misinterpretation of what hen wrote, caused by anger and frustration, I'm guessing.

Replies from: None
comment by [deleted] · 2013-03-06T18:56:53.021Z · LW(p) · GW(p)

No, just the expressed befuddlement.

comment by Eliut · 2013-03-06T19:36:29.811Z · LW(p) · GW(p)

I respectfully disagree: sapience is an acquired, subjective quality, and therefore trivial to disregard. Sentience, on the other hand, is orders of magnitude more complex. I was going to say it is “inherent” to the species, but is it? And this is supposed to be “the easy problem”; go figure.

comment by zslastman · 2013-03-06T20:58:11.004Z · LW(p) · GW(p)

1)"Nociceptor activation threshold met; initiate yowl-and-run subroutine." 2)She does it because it's painful and it startled her.

What's the difference between 1 and 2?

Replies from: None
comment by [deleted] · 2013-03-06T23:02:13.053Z · LW(p) · GW(p)

1 presumes that minimalist descriptions of superficially-visible output are all you need to reconstruct the actual drivers behind the behavior. 2 presumes that the evolutionarily-shared neural architecture and its basic components of perception, cognition and so forth are not separated by a barrier of magical reality fluid.

Replies from: zslastman
comment by zslastman · 2013-03-06T23:17:22.828Z · LW(p) · GW(p)

Ah. If you're saying that 1) implies lesser internal machinery than 2), and that the internal machinery (cognition and so forth) is what's important, then I agree.

The problem, I think, is just that they both sound to me like perfectly reasonable (if vague) descriptions of complex, sentient human pain. It seemed like you were saying nociceptors and subroutines were incapable of producing pain and startlement.

Replies from: None
comment by [deleted] · 2013-03-06T23:27:05.625Z · LW(p) · GW(p)

The problem, I think, is just that they both sound to me like perfectly reasonable (if vague) descriptions of complex, sentient human pain.

1 sounds to me like an attempt to capture output in the form of a flowchart. It's like trying to describe the flocking behavior of birds by reference to the Boids flocking simulation -- and insisting not that there are similar principles at work in how the birds go about solving the problem of flocking, but that birds literally run an instance of Boids in their heads and that's all there is to their flocking behavior.

comment by TheOtherDave · 2013-03-06T17:18:01.161Z · LW(p) · GW(p)

I agree that "Eliezer believes animals and nonpathological infants are just meat-automata who don't actually possess the mental states they communicating about" is a strawman. I'm not really sure what remains to be cleared up. Can you clarify the question?

Replies from: None
comment by [deleted] · 2013-03-06T17:32:03.852Z · LW(p) · GW(p)

Basically what I asked Eliezer: What sense of the word "sentient" is he using, such that babies plausibly don't qualify? My de facto read of the term and a little digging around Google show two basic senses:

-Possessing sensory experiences (I'm pretty sure insects and even worms do that)
-SF/F writer's term for "assume this fictional entity is a person" (akin to "sapient"; it's a binary personhood marker, or a secularized soul -- it tells the reader to react accordingly to this character's experiences and behavior)

The latter, applied to the real world, sounds rather more like "soul" than anything coherent and obvious. The former, denied in babies, sounds bizarre and obviously untrue. So...I'm missing something, and I'd like to know what it is.

Replies from: None, shminux, nshepperd, TheOtherDave, MugaSofer
comment by [deleted] · 2013-03-06T18:34:51.699Z · LW(p) · GW(p)

Maybe the best way to approach this question is backwards. I assume you believe that people (at least) have some moral worth such that they ought not be owned, whimsically destroyed, etc. I also assume you believe that stones (at least) have no moral worth and can be owned, whimsically destroyed, etc. without any immediate moral consequences. So 1) tell me where you think the line is (even if it's a very fuzzy, circumstantial one) and 2) tell me in virtue of what something has or lacks such moral worth.

...or 3) toss out my questions and tell me how you think it goes on your own terms.

Replies from: None, MugaSofer
comment by [deleted] · 2013-03-06T21:17:04.276Z · LW(p) · GW(p)

I assume you believe that people (at least) have some moral worth such that they ought not be owned, whimsically destroyed, etc

Essentially. I don't consider it a fact-about-the-world per se, but that captures my alief pretty well.

I also assume you believe that stones (at least) have no moral worth and can be owned, whimsically destroyed, etc without any immediate moral consequences.

Eh. Actually I have some squick to cavalier destruction or disruption of inanimate objects, but they don't register as the same thing. So we'll go with that.

...or 3) toss out my questions and tell me how you think it goes on your own terms.

To what extent does an entity respond dynamically to both present and historical conditions in terms of impacts on its health, wellbeing, emotional and perceptual experiences, social interactions and so on? To what extent is it capable of experiencing pain and suffering? To what extent does modifying my behavior in response to these things constitute a negative burden on myself or others? To what extent do present circumstances bear on all those things?

Those aren't so much terms in an equation as independent axes of variance. There are probably some I haven't listed. They define the shape of the space; the actual answer to your question is lurking somewhere in there.

Replies from: None
comment by [deleted] · 2013-03-06T21:50:27.040Z · LW(p) · GW(p)

Thanks, that's helpful. Given what you've said, I doubt you and EY would disagree on much. EY says in his metaethics sequence that moral facts and categories like 'moral worth' or 'autonomy' are derived properties. In other words, they don't refer to anything fundamental about the world, but supervene on some complex set of fundamental facts. Given that that's his view, I think he was just using 'sentience' as a shorthand for something like what you've written: note that many of the considerations you describe are importantly related to a capacity for complex experiences.

Replies from: None
comment by [deleted] · 2013-03-06T23:04:51.742Z · LW(p) · GW(p)

note that many of the considerations you describe are importantly related to a capacity for complex experiences.

Except I've interacted with bugs in ways that satisfied that criterion (and that did parse out as morally-good), so clearly the devil's in the details. If Eliezer suspects young children may reliably not qualify, and I suspect that insects may at least occasionally qualify, we're clearly drawing very different lines and have very different underlying assumptions about reality.

comment by MugaSofer · 2013-03-24T18:18:37.038Z · LW(p) · GW(p)

I assume you believe that people (at least) have some moral worth such that they ought not be owned, whimsically destroyed, etc. I also assume you believe that stones (at least) have no moral worth and can be owned, whimsically destroyed, etc. without any immediate moral consequences. So 1) tell me where you think the line is (even if it's a very fuzzy, circumstantial one)

What makes you think there's a line? I care more about killing (or torturing) a dog than a stone, but less so than a human. Pulling the wings off flies provokes a similar, if weaker, reaction. A continuum might complicate the math slightly, but ...

comment by Shmi (shminux) · 2013-03-06T18:23:11.854Z · LW(p) · GW(p)

"Self-aware" is one soul-free interpretation of sentient/sapient, often experimentally measured by the mirror test. By that metric, humans are not sentient until well into the second year, and most species we would consider non-sentient fail it. Of course, treating non-self-aware human babies as non-sentient animals is quite problematic. Peter Singer is one of the few people brave enough to tread into this topic.

Replies from: None
comment by [deleted] · 2013-03-06T20:59:19.163Z · LW(p) · GW(p)

The mirror test is interesting for sure, especially in a cross-species context. However, I'm far from convinced about the straightforward reading of "the expected response indicates the subject has an internal map of oneself." Since you read the Wikipedia article down that far, you could also scroll down to the "Criticisms" section and see a variety of objections to that.

Moreover, when asked to choose between the interpretation that the test isn't sufficient for its stated purpose and the interpretation that six-year-olds in Fiji aren't self-aware, I rather suspect the former is more likely.

Besides all that, even if we assume self-awareness is the thing you seem to be making of it, I'm not clear how that would draw the moral-worth line so neatly between humans (or some humans) and literally everything else. From a consequentialist perspective, if I assume that dogs or rats can experience pain and suffering, it seems weird to omit them from my utility function on the basis that they don't jump through that particular (ambiguous, methodologically-questionable) experimental hoop.

Replies from: shminux
comment by Shmi (shminux) · 2013-03-06T21:48:20.956Z · LW(p) · GW(p)

Oh, I agree that the mirror test is quite imperfect. The practical issue is how to draw a Schelling fence somewhere sensible. Clearly mosquitoes can be treated as non-sentient; clearly most humans cannot be. Treating human fetuses and some mammals as non-sentient is rather controversial. Just "experiencing pain" is probably too wide a net for moral worth, as nociceptors are present in most animals, including the aforementioned mosquito. Suffering is probably a more restrictive term, but I am not aware of a measurable definition of it. It is also probably sometimes too narrow, as most of us would find it immoral to harm people who do not experience suffering due to a mental or a physical issue, like pain insensitivity or asymbolia.

Replies from: None
comment by [deleted] · 2013-03-06T23:22:09.478Z · LW(p) · GW(p)

Clearly mosquitoes can be treated as non-sentient,

Disagree that it's clear. I've had interactions with insects that I could only parse as "interaction between two sentient beings, although there's a wide gulf of expectation and sensation and emotion and so forth which pushes it right up to the edges of that category." I've not had many interactions with mosquitos beyond "You try to suck my blood because you're hungry and I'm a warm, CO2-breathing blood source in your vicinity", but I assume that there's something it feels like to be a mosquito, that it has a little mosquito mind that might not be very flexible or impressive when weighted against a human one, but it's there, it's what the mosquito uses to navigate its environment and organize its behavior intelligibly, and all of its searching for mates and blood and a nice place to lay eggs is felt as a drive... that in short it's not just a tiny little bloodsucking p-zombie. That doesn't mean I accord it much moral weight either -- I won't shed any tears over it if I should smash it while reflexively brushing it aside, even though I'm aware arthropods have nociception and, complex capacity for emotional suffering or not, they still feel pain and I prefer not to inflict that needlessly (or without a safeword).

But I couldn't agree it isn't sentient, that it's just squishy clockwork.

Just "experiencing pain" is probably too wide a net for moral worth, as nociceptors are present in most animals, including the aforementioned mosquito.

It seems to me that the problem you're really trying to solve is how to sort the world into neat piles marked "okay to inflict my desires on regardless of consequences" and "not okay to do that to." Which is probably me just stating the obvious, but the reason I call attention to it is I literally don't get that. The universe just is not so tidy; personhood or whatever word you wish to use is not just one thing, and the things that make it up seem to behave such that the question is less like "Is this a car or not?" and more like "Is this car worth 50,000 dollars, to me, at this time?"

Suffering is probably a more restrictive term, but I am not aware of a measurable definition of it.

That is ever the problem -- you can't even technically demonstrate without lots of inference that your best friend or your mother really suffer. This is why I don't like drawing binary boundaries on that basis.

It is also probably sometimes too narrow, as most of us would find it immoral to harm people who do not experience suffering due to a mental or a physical issue, like pain insensitivity or asymbolia.

Though strangely enough, plenty of LWers seem to consider many disorders with similarly pervasive consequences for experience to result in "lives barely worth living..."

Replies from: shminux, wedrifid
comment by Shmi (shminux) · 2013-03-06T23:49:24.680Z · LW(p) · GW(p)

My (but not necessarily your) concern with all this is a version of the repugnant conclusion: if you assign some moral worth to mosquitoes or bacteria, and you allow for non-asymptotic accumulation based on the number of specimens, then there is some number of bacteria whose moral worth is at least that of one human. If you don't allow for accumulation, then there is no difference between killing one mosquito and 3^^^3 of them. If you impose asymptotic accumulation (no number of mosquitoes has moral worth equal to that of one human, or one cat), then the goalpost simply shifts to a different lifeform (how many cats are worth a human?). Imposing an artificial Schelling fence at least provides some solution, though far from universal. Thus I'm OK with ignoring the suffering or moral worth of some lifeforms. I would not approve of needlessly torturing them, but mostly because of the anguish it causes humans like you.
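
A toy sketch of those three accumulation regimes, with every number invented purely for illustration:

```python
# Toy sketch of the three accumulation regimes above. The per-mosquito
# worth and the cap are made-up numbers, used only to show the shapes.

HUMAN_WORTH = 1.0
MOSQUITO_WORTH = 1e-9        # assumed tiny per-specimen moral worth

def linear_worth(n):
    # Non-asymptotic accumulation: some finite number of mosquitoes
    # eventually outweighs one human.
    return n * MOSQUITO_WORTH

def flat_worth(n):
    # No accumulation: killing one mosquito weighs the same as killing 3^^^3.
    return MOSQUITO_WORTH if n > 0 else 0.0

def asymptotic_worth(n, cap=1e-3):
    # Bounded accumulation: total worth approaches a cap below one human,
    # so the goalpost shifts to where the cap sits for each lifeform.
    return cap * (1 - (1 - MOSQUITO_WORTH / cap) ** n)

print(linear_worth(2 * 10**9) >= HUMAN_WORTH)   # True: ~2e9 mosquitoes outweigh one human
print(flat_worth(1) == flat_worth(10**30))      # True: the count is irrelevant
print(asymptotic_worth(10**30) < HUMAN_WORTH)   # True: the cap is never crossed
```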

You seem to suggest that there is more than one dimension to moral worth, but, just like with utility function or with deontological ethics, eventually it comes down to making a decision, and all your dimensions converge into one.

Replies from: None
comment by [deleted] · 2013-03-07T00:06:09.339Z · LW(p) · GW(p)

My (but not necessarily yours) concern with all this is a version of the repugnant conclusion: if you assign some moral worth to mosquitoes or bacteria, and you allow for non-asymptotic accumulation based on the number of specimen, then there is some number of bacteria whose moral worth is at least one human.

Sure, that registers -- if there were a thriving microbial ecosystem on Mars, I'd consider it immoral to wipe it out utterly simply for the sake of one human being. Though I think my function-per-individual is more complicated than that; wiping it out because that one human is a hypochondriac is more-wrong in my perception than wiping it out because, let's say, that one human is an astronaut stranded in some sort of weird microbial mat, and the only way to release them before they die is to let loose an earthly extremophile which will, as a consequence, propagate across Mars and destroy all remaining holdouts of the local biosphere. That latter is very much more a tossup, such that I don't view other humans going 'Duh, save the human!' as exactly committing an atrocity or compounding the wrong. Sometimes reality just presents you with situations that are not ideal, or where there is no good choice. No-win situations happen, unsatisfying resolutions and all. That doesn't mean do nothing; it just means trying to set up my ethical and moral framework to make such situations impossible feels silly.

Imposing an artificial Schelling fence at least provides some solution, though far from universal.

To be honest, that's all this debate really seems to be to me -- where do we set that fence? And I'm convinced that the decision point is more cultural and personal than anything, such that the resulting discussion does not usefully generalize.

You seem to suggest that there is more than one dimension to moral worth, but, just like with utility function or with deontological ethics, eventually it comes down to making a decision, and all your dimensions converge into one.

And once I do, even if my decision was as rational as it can be under the circumstances and I've identified a set of priorities most folks would applaud in principle, there's still the potential for regrets and no-win situations. While a moral system that genuinely solved that problem would please me greatly, I see no sign that you've stumbled upon it here.

comment by wedrifid · 2013-03-07T03:12:06.448Z · LW(p) · GW(p)

I've had interactions with insects that I could only parse as "interaction between two sentient beings

Why stop there? Humans have also had interactions with lightning that they could only parse as interactions between two sentient beings!

comment by nshepperd · 2013-03-06T22:52:44.130Z · LW(p) · GW(p)

-Possessing sensory experiences (I'm pretty sure insects and even worms do that)

Are you claiming that insects and worms possess functioning sense-organs, or that they possess subjective experience of the resulting sense-data? I find the latter somewhat unlikely wrt insects and worms. Regarding babies, it doesn't seem "obviously untrue" to me that babies lack subjective experience. Though, nor does it seem obviously true.

Replies from: None
comment by [deleted] · 2013-03-06T22:56:32.934Z · LW(p) · GW(p)

Are you claiming that insects and worms possess functioning sense-organs, or that they possess subjective experience of the resulting sense-data?

I'm trying to figure out why you think there's a difference between the two, at least when dealing with anything possessing a nervous system.

Replies from: nshepperd
comment by nshepperd · 2013-03-06T23:03:50.673Z · LW(p) · GW(p)

A nervous system is just a lump of matter, the same as any other. Another object with functioning sense-organs is my laptop, yet I wouldn't say my laptop possesses subjective experience.

Replies from: None, shminux
comment by [deleted] · 2013-03-06T23:29:50.535Z · LW(p) · GW(p)

A nervous system is just a lump of matter, the same as any other.

So you will have no objection to me replacing your brain with an intricately-carved wooden replica, then?

Another object with functioning sense-organs is my laptop, yet I wouldn't say my laptop possesses subjective experience.

  1. How would you know if it did?

  2. If you don't think a nervous system is relevant there, I'm curious to know what you think is behind you having subjective experiences, and if you believe in p-zombies. Your laptop doesn't organize that sense input and integrate it into a complex system. But even simple organisms do that.

Replies from: nshepperd
comment by nshepperd · 2013-03-07T00:29:03.345Z · LW(p) · GW(p)

Your response suggests you do understand the distinction between possessing sensory information and subjective experience of the same. As such, I suppose my job here is complete. But nevertheless:

The important thing is not the composition of an object, but its functionality. An intricately-carved wooden machine that correctly carried out the functionality of my brain would be a fine replacement, even if it lacks the elan vital neural matter supposedly has.

My laptop doesn't have subjective experience. You do. An elephant most likely does. What about Watson? The robots in those robot soccer competitions? Or BigDog?

My opinion on zombies is lw standard.

comment by Shmi (shminux) · 2013-03-06T23:11:07.540Z · LW(p) · GW(p)

I wouldn't say my laptop possesses subjective experience

How would you know if it did?

comment by TheOtherDave · 2013-03-06T18:40:28.368Z · LW(p) · GW(p)

Ah! I understand, now. Thanks for clarifying.

I mostly understand "sentient", as most people use the term, in the second sense. Eliezer in particular seems to use "sentient" and "person" pretty much interchangeably here, for example, without really defining either, so I understand him to use the word similarly.

The latter, applied to the real world, sounds rather more like "soul" than anything coherent and obvious.

Were I inclined to turn this assertion into a question, it would probably be something like "what properties does a typical adult have that a typical 1-year-old lacks which makes it more OK to kill the latter than the former?"

Is that the question you're asking?

Replies from: None
comment by [deleted] · 2013-03-07T00:18:05.860Z · LW(p) · GW(p)

Were I inclined to turn this assertion into a question, it would probably be something like "what properties does a typical adult have that a typical 1-year-old lacks which makes it more OK to kill the latter than the former?" Is that the question you're asking?

More or less, yeah.

comment by MugaSofer · 2013-03-24T18:13:36.584Z · LW(p) · GW(p)

-SF/F writer's term for "assume this fictional entity is a person" (akin to "sapient"; it's a binary personhood marker, or a secularized soul -- it tells the reader to react accordingly to this character's experiences and behavior)

I realize you seem to have deleted your account, but: this.

comment by loup-vaillant · 2013-02-26T18:02:21.513Z · LW(p) · GW(p)

Albeit sufficiently young babies are plausibly not sentient.

My super-villain side just got slapped by my censors before it could formulate any way to exploit this. I'm still pondering whether this is a good thing.

Replies from: DaFranker
comment by DaFranker · 2013-02-26T18:41:22.638Z · LW(p) · GW(p)

Hmm. I'm not sure I have the same censors.

My super-villain side went on to try to devise a way to emulate the Rai Stones economy using an abstract exchange of not-yet-sentient babies and various related opportunity costs, before realizing that even my super-villain side is not good enough at economics to conjure efficient economic systems out of thin air like that while making sure that they benefit him.

Certainly, however, my super-villain side did fall back on the secondary, less-desirable option of lending resources and medical assistance to pregnant mothers, so as to have a legal ownership claim on the nonsentient babies in order to re-sell them for services or work or money to said mothers afterwards.

Does it sound like a good thing or a bad thing that I can think of this without flinching?

Replies from: loup-vaillant
comment by MaoShan · 2013-02-26T04:17:31.845Z · LW(p) · GW(p)

Given the wording of the story, both women were in the practice of sleeping directly next to their babies. The other woman didn't roll over her baby because she was wicked, she rolled over her baby because it was next to her while she slept. They left out the part where the "good mother" rolled over her own baby two weeks later and everyone just threw up their hands and declared "What can we do, these things just happen, ya' know?"

Replies from: Vaniver, ESRogs
comment by Vaniver · 2013-02-26T15:56:11.884Z · LW(p) · GW(p)

They left out the part where the "good mother" rolled over her own baby two weeks later and everyone just threw up their hands and declared "What can we do, these things just happen, ya' know?"

Co-sleeping is controversial, not one-sided. It seems that co-sleeping increases the risk of smothering but decreases the risk of SIDS, leading to a net decrease in infant mortality. Always be wary of The Seen and The Unseen.

Replies from: nshepperd, Bill_McGrath, None, MaoShan
comment by nshepperd · 2013-02-27T22:14:45.148Z · LW(p) · GW(p)

On the other hand, the majority of related studies seem to be observational, rather than interventional, so it's quite possible that both co-sleeping and observed "effects" are the result of some third factor, such as the attitude of the parent. For example, it's likely that a parent who chooses to co-sleep is more well-disposed toward the infant, and is therefore far less likely to kill it deliberately (infanticide), thus accounting for some unknown part of the decrease in the overall frequency of "SIDS".

Replies from: Vaniver
comment by Vaniver · 2013-02-27T22:39:10.779Z · LW(p) · GW(p)

For example, it's likely that a parent who chooses to co-sleep is more well-disposed toward the infant, and is therefore far less likely to kill it deliberately (infanticide), thus accounting for some unknown part of the decrease in the overall frequency of "SIDS".

Indeed; this also probably explains some of the benefit of room-sharing.

comment by Bill_McGrath · 2013-02-27T17:21:41.597Z · LW(p) · GW(p)

How much is the decrease? I imagine that the effect of being responsible for your child's death by smothering is probably a lot more upsetting and mentally damaging than that of having a child die from SIDS. Maybe that's lessened by knowing the above information; but most people don't.

Replies from: Vaniver
comment by Vaniver · 2013-02-27T19:03:57.893Z · LW(p) · GW(p)

How much is the decrease?

It's hard to get solid numbers. Roomsharing (which is recommended) decreases SIDS rates by half, which will be the majority of the benefit of a transition from own-room sleeping to cosleeping. It also seems like the overwhelming majority of smothering deaths involve other known risk factors, like smoking or drug use by the mother. Both sides also frequently recommend against the infant sleeping with the father or siblings. Epidemiological studies have the issue that cosleeping is officially discouraged.

If you're adding in psychological factors, though, there's some evidence suggesting that cosleeping is good for the infant / their later development.

As may be unsurprising to the cynic, much research on infant sleep is funded by crib manufacturers. My read of the issue is that cosleeping was recommended against because of the known danger of smothering and the social benefit of parental independence from the infant, and that more information is slowly coming to light that the infant is better off cosleeping with the mother, except when other risks are present.

comment by [deleted] · 2013-03-07T00:25:44.159Z · LW(p) · GW(p)

If you co-sleep intelligently, it's not even much of an issue. There's lots of devices, both modern and ancient, you can use to keep the child within reach but at no risk of rolling over them.

comment by MaoShan · 2013-02-27T02:29:52.933Z · LW(p) · GW(p)

I expected that. My own opinion is that if it is necessary for some reason, it's a good idea, but personally I'd rather be possibly and indirectly responsible for my baby's death, as one instance of a poorly understood syndrome, than actually be the one that crushed him.

It seems that sleeping separately very drastically decreases your chances of personally killing your baby in your sleep.

Replies from: RobbBB, Swimmer963
comment by Rob Bensinger (RobbBB) · 2013-02-27T02:57:12.464Z · LW(p) · GW(p)

Such are your desires, then, at the object level. But do you also desire that they be your desires? Are you satisfied with being the sort of person who cares more about avoiding guilt and personal responsibility than about the actual survival and well-being of his/her child? Or would you change your preferences, if you could?

Replies from: MaoShan
comment by MaoShan · 2013-02-28T02:39:06.603Z · LW(p) · GW(p)

My desires concerning what my desires should be are also determined by my desires, so your question is not valid; it's a recursive loop. You are first assuming that I care about anything at all, secondly assuming that I experience guilt at all, and thirdly that I would care about my children. As it turns out, you are correct on all three assumptions; just keep in mind that those are not always givens among humans.

What I was saying was that in the two situations (my child dies due to SIDS), and (my child dies due to me rolling over onto him), in the first situation not only could I trick myself into believing it wasn't my fault, it's also completely possible that it really wasn't my fault, and that it had some other cause; in the second situation, there's really no question, and a very concrete way to prevent it.

To answer your unasked question, I still do not alieve that keeping my child a safe distance away while sleeping but showing love and care at all other times increases her chance of SIDS. If I was to be shown conclusive research of cause and effect between them, I would reverse my current opinion, mos' def.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-02-28T21:32:16.726Z · LW(p) · GW(p)

Your second-order desires are fixed by your desires as a whole, trivially. But they aren't fixed by your first-order desires. So it makes sense for me to ask whether you harbor a second-order desire to change your first-order desires in this case, or whether you are reflectively satisfied with your first-order desires.

Consider the alcoholic who desires to stop craving alcohol (a second-order desire), but who continues to drink alcohol (because his first-order desires are stronger than his desire-desires). Presumably your first-order desires are currently defeating your second-order ones, else you'd have already switched first-order desires. But it doesn't follow from this that your second-order desires are nonexistent!

Perhaps, for instance, your second-order desire is strong enough that if you could simply push a button to forever effortlessly change your first-order desires, you would do so; but your second-order desire isn't so strong that you'll change first-order desires by willpower alone, without having a magic button to press. This, I think, is an extremely common situation humans find themselves in. So I was curious whether you were satisfied or unsatisfied with your current first-order priorities.

I still do not alieve that keeping my child a safe distance away while sleeping but showing love and care at all other times increases her chance of SIDS. If I was to be shown conclusive research of cause and effect between them, I would reverse my current opinion, mos' def.

So it's not really the case that you'd prioritize psychological-guilt-avoidance over SIDS-avoidance? In that case the question is less interesting, since it's just a matter of how well you can think yourself into the hypothetical in which you have to choose between, say, increasing your child's odds of surviving by 1% and the cost of, say, increasing your guilt-if-the-child-does-die by 200%.

Replies from: MaoShan
comment by MaoShan · 2013-03-01T02:50:31.509Z · LW(p) · GW(p)

In that case the question is less interesting, since it's just a matter of how well you can think yourself into the hypothetical in which you have to choose between, say, increasing your child's odds of surviving by 1% and the cost of, say, increasing your guilt-if-the-child-does-die by 200%.

I guess, but in real life I don't sit down with a calculator to figure that out; I'd settle for some definitive research.

Your second-order desires are fixed by your desires as a whole, trivially. But they aren't fixed by your first-order desires. So it makes sense for me to ask whether you harbor a second-order desire to change your first-order desires in this case, or whether you are reflectively satisfied with your first-order desires.

[all that quote], trivially. What I am saying is that even my "own" desires and the goals that I think are right are only what they are because of my biology and upbringing. If I seek to "debug" myself, it's still only according to a value system that is adapted to perpetuate our DNA. So to answer truthfully, I am NOT satisfied with my first-order desires, in fact I am not satisfied with being trapped in a human body, from which the first-order desires are spawned.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-02-27T03:36:13.158Z · LW(p) · GW(p)

It seems that sleeping separately very drastically decreases your chances of personally killing your baby in your sleep.

In the story, maybe. I think nowadays you can get specially designed cribs that sort of merge onto the bed, so you're co-sleeping but can't roll onto your baby–see http://www.armsreach.com/

Replies from: None, MaoShan
comment by [deleted] · 2013-03-07T00:31:26.616Z · LW(p) · GW(p)

I'm involved in a local Native American community and one of the medicine elders I know often makes a sort of device for families with infant children, especially ones with colic or other sleep-disrupting conditions. It's kind of a cradle-sling type thing you hang securely above your own bed; if kiddo's crying but otherwise okay you can just reach up and rock them, and they're otherwise within reach. I've seen replicas of the pre-contact version, and even made of birchbark and hung from the rafters of a lodge with sinew it's evidently still quite sturdy and safe; like, you'd have to knock over the house for it to be an issue. These days, using modern materials, they're even safer. So this goes back quite a long way.

comment by MaoShan · 2013-02-28T02:43:48.068Z · LW(p) · GW(p)

Then I still blame the mother in the story for not building one of those!

That is pretty neat, I wholeheartedly endorse using those, just in case. In the unlikely event that I produce more biological offspring, I will make use of that knowledge.

comment by ESRogs · 2013-03-01T05:19:34.531Z · LW(p) · GW(p)

She's not seen as evil because she inadvertently killed her baby, she's seen as evil because she stole the other woman's baby and assented to killing it. Right?

Replies from: MaoShan
comment by MaoShan · 2013-03-02T01:15:51.811Z · LW(p) · GW(p)

It was a property dispute, not a measurement of righteousness. The story served to illustrate Solomon's wisdom; spiritual judgment of the women was not an issue. As for my opinion, I see both of them as stupid, and only evil to the degree that stupidity influences evil.

Replies from: ESRogs
comment by ESRogs · 2013-03-02T11:18:23.309Z · LW(p) · GW(p)

Ah, I interpreted your comment as a response to the supposed judgment that the mother whose child died was wicked. That would seem to have been my b.

comment by JulianMorrison · 2013-02-26T23:13:17.093Z · LW(p) · GW(p)

Thwarted+joy beats desolation+schadenfreude as a utility win even if they were dividing a teddy bear.

comment by Larks · 2013-02-26T23:37:12.524Z · LW(p) · GW(p)

Who says you own a baby just by being its genetic mother?

Susan Okin's attempted reductio ad absurdum of Robert Nozick says that. Though admittedly she did think that undergoing the pregnancy, not just being the genetic mother, was required.

comment by MugaSofer · 2013-03-05T21:55:06.214Z · LW(p) · GW(p)

Albeit sufficiently young babies are plausibly not sentient.

This is why I reject binary "sentient/nonsentient" criteria for moral worth. If mentally subnormal adults or small children are worthless, then you have followed simplicity off a cliff.

In my expert opinion.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-06T00:43:46.845Z · LW(p) · GW(p)

You seem to equate "nonperson" with "worthless" here. Do you do that advisedly, or carelessly? And if the former, can you summarize your reasons for considering nonpersons worthless?

[ETA: the parent has been edited after this comment was written.]

Replies from: MugaSofer
comment by MugaSofer · 2013-03-06T10:07:04.821Z · LW(p) · GW(p)

Excellent point. Edited.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-06T13:51:04.088Z · LW(p) · GW(p)

Fair enough.

Which raises the question: do you actually know anyone who considers small children worthless, or are you just bracketing here?

I mean, I know lots of people who consider small children (and various precursors to small children) to have less value than other things they value... indeed, I don't know anyone who doesn't, although there are certainly disagreements about what clears that bar and what doesn't. But that needn't involve walking off any cliffs... that's just what it means to live in a world where we sometimes have to choose among things of value.

Replies from: MugaSofer
comment by MugaSofer · 2013-03-06T14:21:44.297Z · LW(p) · GW(p)

Well, worthless is a mild exaggeration, but Eliezer has argued that eating babies is justified if they're young enough. Infanticide (or "post-natal abortion") is approved of by a small but real minority. I have yet to encounter anyone who thinks toddlers are equivalent to animals (who doesn't use this to argue for animals' rights) but I assume they exist as a minority of a minority. But if they can talk, most people are convinced. (This does not apply to sign language, for some reason.)

Does that answer your question?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-06T14:57:30.095Z · LW(p) · GW(p)

Does that answer your question?

I'm not sure.

What I get from your answer is that you believe there exist people who support killing children if they're young enough, though you haven't talked to any of them about the parameters of that support, and you infer from that position that they value young children less than they ought to, which is what you meant by considering young children "worthless" in the first place.

That is, as I currently understand you, your original sentiment can be rephrased as "If you value small children less than you ought to, you have followed simplicity off a cliff," and you believe Eliezer values small children less than he ought to, or at the very least has made arguments from which one could infer that, and that other unnamed people do too.

Have I understood you correctly?

Replies from: MugaSofer
comment by MugaSofer · 2013-03-06T15:24:58.512Z · LW(p) · GW(p)

Pretty much.

There are some moral theories that sound simple and reasonable in the abstract ("maximize happiness", for example) but in reality do not encompass the full range of human value. There are two possible responses to this: you can either examine the evidence and conclude you missed something, or you can decide your theory is self-evidently true and everyone else must be biased, and bite the bullet.

Of course, everyone sometimes is biased, and some bullets should be bitten. But when you start advocating forcible wireheading (or eating babies) you should at least reexamine the evidence.

Eliezer may be right. But I predict he hasn't examined binary personhood ... ever? Recently, at any rate.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-06T16:45:26.247Z · LW(p) · GW(p)

OK.

With respect to Eliezer in particular, it would greatly surprise me if your disagreement with him was actually about complexity of value as you seem to suggest here, or about unexamined notions of binary personhood. That said, my preference is to let you have your argument with him with him, rather than trying to have your argument with him with me.

With respect to your general point, I'm all in favor of re-examining evidence when it leads me to unexpected conclusions. But as you say, some bullets should be bitten... sometimes it turns out that habitual beliefs are unjustified, and re-examining evidence leads me to reject them with greater confidence.

For my own part, I probably value human infants less than you think I ought to... though it's hard to be sure, since I'm not exactly sure where you draw the line.

Just to put a line in the sand for calibration: for at least 99.99999% of children aged 2 years or younger, and a randomly chosen adult, I would easily endorse killing any 10 of the former to save the latter (probably larger numbers as well, but with more difficulty), and I don't think I've walked off any cliffs in the process.

Replies from: MugaSofer
comment by MugaSofer · 2013-03-24T19:43:00.168Z · LW(p) · GW(p)

Oh, I daresay I value infants more than most people think I ought to. That's the problem with consistency :(

Still, I think it's fair to say that binary personhood has a problem with the fact that most people seem to care about things on a sliding scale, and it's probably not just bias.

Anyway, seems like this point has been quite thoroughly clarified...

comment by Benya (Benja) · 2013-02-26T03:18:54.714Z · LW(p) · GW(p)

Brecht wrote a play based on the Solomon story where the birth mother only wants the child because she can't inherit without him. The judge has a circle of chalk drawn and says the two women are to simultaneously try to pull the child from it; if they tear him in half, they will each get their part. The adoptive mother lets go, and he deems her the true mother.

comment by Vaniver · 2013-02-25T16:38:54.921Z · LW(p) · GW(p)

It suddenly occurs to me that the first woman is the right choice for raising the child, regardless of who the birth mother is.

Indeed; I strongly suspect Solomon had that in mind, but I wanted to keep the post as short as possible.

I wonder if Solomon had plans in mind if both women had said the same thing.

Quite possibly. I also wonder if it would depend on what they both said- if both volunteered to retract their claim, then as wedrifid suggests lots were commonly used to show the will of God. If both reacted spitefully, then...

comment by A1987dM (army1987) · 2013-02-26T13:31:28.333Z · LW(p) · GW(p)

Indeed; what kind of person answers like the second mother? (Well, there's three millennia's worth of mindware gap between me and her, but still...)

Replies from: Vaniver, Richard_Kennaway
comment by Vaniver · 2013-02-26T22:00:31.969Z · LW(p) · GW(p)

You're familiar with the empirical work on ultimatum games, right? It is common for people to prefer to get nothing equitably rather than accept an inequitable split where they are worse off.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-27T10:18:57.298Z · LW(p) · GW(p)

Mmm, yeah; I hadn't thought of that.

comment by Richard_Kennaway · 2013-02-26T14:31:05.132Z · LW(p) · GW(p)

what kind of person answers like the second mother?

One who was invented for the purpose of the story.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-26T17:27:12.505Z · LW(p) · GW(p)

Well, yeah, but... For readers to think “wow, that Solomon guy was so wise!” rather than “that's supposed to be a joke, right?”, the characters would have to have at least some amount of plausibility in their cultural context. (Then again, the Bible wasn't the place where one'd expect to find jokes in the first place.)

Replies from: EHeller, IlyaShpitser, Richard_Kennaway
comment by EHeller · 2013-02-26T19:01:50.000Z · LW(p) · GW(p)

(Then again, the Bible wasn't the place where one'd expect to find jokes in the first place.)

Perhaps not long form narrative jokes, but the bible is actually loaded with humorous word play (puns, double entendre, etc). Unfortunately, pretty much all of it requires a pretty decent understanding of biblical Hebrew. I often wonder if biblical literalists would take such a hard line if they realized the writer was often writing for wordplay as much as for conveying a message.

comment by IlyaShpitser · 2013-02-28T23:08:06.016Z · LW(p) · GW(p)

As I said elsewhere, these sorts of stories (Old Testament Chuck Norris stories!) aren't about humor. It's "yay Solomon!"

comment by Richard_Kennaway · 2013-02-26T18:01:28.286Z · LW(p) · GW(p)

Well, yeah, but... For readers to think “wow, that Solomon guy was so wise!” rather than “that's supposed to be a joke, right?”, the characters would have to have at least some amount of plausibility in their cultural context.

Like the plausibility of these stories?

It's a story about Solomon's wisdom. Whether it actually happened is not really the point.

comment by Desrtopa · 2013-02-28T02:21:34.829Z · LW(p) · GW(p)

The Solomon story has always bugged me as being the sort of thing a not-wise person would come up with as an example of wisdom. There are too many ways it could have gone wrong.

I have my own preferred take on the story, and what else that sort of solution might imply. In that version, it ends with

And because he was the king, beheld by his subjects with awe and terror, the women did not protest his judgment.

And nobody ever bothered the king with domestic disputes again.

Replies from: Eliut
comment by Eliut · 2013-03-06T20:51:24.554Z · LW(p) · GW(p)

I think it is remarkable how obviously childish the style of the “bible” quotes is when compared to the deliberately arcane “wording” of the OP.

I agree with you, I also fail to see any level of sophistication in the bible. If anything it is at the same level of “Go god Go” (Must add a disclaimer here: English is not my native language so if I say something stupid it is because I am Mexican)

comment by Vladimir_Nesov · 2013-02-25T16:58:03.589Z · LW(p) · GW(p)

I often notice how people use arguments that fail to distinguish the hypotheses under discussion. For example, someone gives an argument that favors their hypothesis, but it also happens to favor the opponent's hypothesis to about the same degree. Interpreting arguments in terms of the likelihood ratio they provide seems like an easy-to-use heuristic that fixes such errors.
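
A minimal sketch of that heuristic, with hypothetical numbers: an argument that is about equally expected whether or not your hypothesis is true has a likelihood ratio near 1, so the posterior should barely move.

```python
# Toy illustration (all numbers assumed): an argument equally expected under
# both hypotheses carries a likelihood ratio of ~1 and distinguishes nothing.

prior = 0.5
p_argument_if_mine = 0.8       # assumed: how expected the argument is if I'm right
p_argument_if_opponent = 0.8   # assumed: how expected it is if my opponent is right

likelihood_ratio = p_argument_if_mine / p_argument_if_opponent  # 1.0

posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(posterior)               # 0.5 -- no update
```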

Replies from: wedrifid
comment by wedrifid · 2013-02-25T17:03:11.038Z · LW(p) · GW(p)

I often notice how people use arguments that fail to distinguish the hypotheses under discussion. For example, someone gives an argument that favors their hypothesis, but it also happens to favor the opponent's hypothesis to about the same degree.

Do you have any examples to share? (Not that I don't believe you. People routinely use arguments that support the opposite position to the one they intend. Arguments that support both equally are bound to occur in between...)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-02-25T17:21:28.328Z · LW(p) · GW(p)

Sorry, I have a bad memory for details of this sort and only remember the abstract observation, which is recurrent enough that I have a cached phrase to identify and point out such situations ("This doesn't distinguish the alternatives!"). Could make up some examples, but I don't think it's useful for clarification in this case, and it won't provide further evidence for the existence of the issue.

Replies from: Kawoomba, Luke_A_Somers, John_Maxwell_IV
comment by Kawoomba · 2013-02-25T17:41:58.190Z · LW(p) · GW(p)

Since the elements of the empty set satisfy arbitrary properties, all the examples you provided are technically evidence in favor of your observation. Also, against it.

Replies from: DaFranker
comment by DaFranker · 2013-02-26T15:57:56.513Z · LW(p) · GW(p)

Heh <3

It's hard to find this kind of humor anywhere other than LW and XKCD.

Replies from: shminux
comment by Shmi (shminux) · 2013-02-27T00:58:51.222Z · LW(p) · GW(p)

Actually, SMBC comics tends to be better than either.

Replies from: Manfred
comment by Manfred · 2013-02-27T01:38:18.945Z · LW(p) · GW(p)

We have some XKCD fans here, I see.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-27T12:57:24.829Z · LW(p) · GW(p)

Well, I think it used to be way better than it is now.

comment by Luke_A_Somers · 2013-02-25T19:24:08.675Z · LW(p) · GW(p)

If the woman who lost hadn't been so comprehensively messed up in the head, you would've had an example in the OP. I wonder if there was a similar test more likely to succeed.

comment by John_Maxwell (John_Maxwell_IV) · 2013-02-27T08:00:34.277Z · LW(p) · GW(p)

I have a theory that everyone does this, and it's a way for our brains to save space somehow. Just keep track of the rate at which things tend to occur instead of recording and cataloging every experience.

comment by Stuart_Armstrong · 2013-02-28T12:09:36.321Z · LW(p) · GW(p)

The analogy seems a bit tortuous... Bayes wasn't needed to understand the story, and seeing the story in the light of Bayes doesn't seem to add any new understanding - at least, in my opinion.

comment by Kawoomba · 2013-02-25T17:11:00.211Z · LW(p) · GW(p)

What if they had both two-boxed? Solomega has to keep his credibility ...

comment by Vaniver · 2013-02-25T15:56:35.804Z · LW(p) · GW(p)

What other stories do you know that show this sort of qualitative Bayesian thinking?

I strongly suspect there are other stories where this sort of interpretation seems natural, but since this story and its interpretation floated into my mind unbidden, I am not sure where to look for others.

Replies from: novalis, jooyous
comment by novalis · 2013-02-26T04:03:54.477Z · LW(p) · GW(p)

The Boy Who Cried Wolf is a pretty good example of updating on new information, I guess.

But it seems sort of pointless to attempt to find old stories that show the superiority of a supposedly new way of thinking. If the way of thinking is so new, then why should we expect to find stories about it? And if we do, what does that say about the superiority of the method (that is, that it was known N years ago but didn't take over the world)? Perhaps this is too cynical?

Replies from: Vaniver, TeMPOraL, MaoShan
comment by Vaniver · 2013-02-26T16:08:29.764Z · LW(p) · GW(p)

The Boy Who Cried Wolf is a pretty good example of updating on new information, I guess.

Agreed, but the primary lesson of that story is "guard your reputation if you want to be believed." The reverse story--"don't waste your time on liars"--probably shouldn't end with there actually being a wolf, as one should not expect listeners to understand the sometimes subtle separation between good decision-making and good consequences.

But it seems sort of pointless to attempt to find old stories that show the superiority of a supposedly new way of thinking.

New stories are useful too.

I also wouldn't call rationality a new way of thinking, any more than I would call science a new way of thinking. Both are active fields of research and development. Both have transformative milestones, such that you might want to call science before X 'protoscience' instead of 'science', but only in the same way that modern science is 'protoscience' because Y hasn't happened yet.

It's also worth noting that the research and development often makes old ideas more precise. People ran empirical tests before they knew what empiricism was. Similarly, we should expect to see people acting cleverly before a systematic way to act cleverly was developed.

And if we do, what does that say about the superiority of the method (that is, that it was known N years ago but didn't take over the world)?

A meme's reproductive success and its desirability for its host can differ significantly.

Replies from: radical_negative_one
comment by radical_negative_one · 2013-02-26T23:17:19.003Z · LW(p) · GW(p)

The reverse story--"don't waste your time on liars"--probably shouldn't end with there actually being a wolf, as one should not expect listeners to understand the sometimes subtle separation between good decision-making and good consequences.

The lesson of the story (for the townspeople) is that when your test (the boy) turns out to be unreliable, you should devise a new test (replace him with somebody who doesn't lie).

comment by TeMPOraL · 2013-02-26T10:19:03.128Z · LW(p) · GW(p)

If the way of thinking is so new, then why should we expect to find stories about it?

To quote the guy this story was about, "there is nothing new under the sun". At least nothing directly related to our wetware. So we should expect that every now and then people stumbled upon a "good way of thinking", and when they did, the results were good. They just might not have managed to identify exactly what made the method good, or to replicate it.

Also, as MaoShan said, this is a kind of proto-Bayes, 101-level thinking. What we have now is the same thing, systematically improved over many iterations.

(that is, that it was known N years ago but didn't take over the world)?

"Taking over the world" is a complex mix of effectiveness, popularity, luck and cultular factors. You can see this a lot in the domain of programming languages. With ways of thinking it is even more difficult, because - as opposed to programming languages - most people don't learn them explicitly and don't evaluate them based on results/"features".

comment by MaoShan · 2013-02-26T04:23:05.605Z · LW(p) · GW(p)

No; as you can see by the number of objections, you are not too cynical. It's closer to a sort of proto-Bayes: stories like this show that that kind of thinking can produce wise solutions, while Bayesian thinking as it is understood now is more refined.

comment by jooyous · 2013-02-25T19:09:03.785Z · LW(p) · GW(p)

My brother had a swing dance unit in middle school, and he said everyone he talked to was whining and saying it was going to be awful. I asked him whether he thought everyone actually believed it was going to be awful, or whether they were just saying that because they thought it would be uncool to be reasonable and not whiny. We hypothesized that people in the uncool camp would be more likely to make fun of him if he said that he didn't think swing dancing was going to be that bad. Also, maybe they'd be less likely to be convinced, since people who genuinely think something is going to be awful will normally accept reassurance that it isn't.

I think we don't have results yet. Also his "everyone" is probably only ~10 people.

comment by novalis · 2013-02-25T19:58:17.923Z · LW(p) · GW(p)

Solomon wasn't actually using Bayes here.

The prior here (A has stolen B's baby) is actually quite low; it just doesn't happen very often. Of course, Solomon also has to consider some extra evidence (B has accused A of stealing her baby). Solomon (by your account) doesn't consider these things at all.

Solomon's analysis only considered the likelihood given a single test.

Replies from: Vaniver, Bugmaster
comment by Vaniver · 2013-02-25T21:32:08.148Z · LW(p) · GW(p)

The prior here (A has stolen B's baby) is actually quite low.

Irrelevant, because it is certain that one of them attempted to steal the other's baby: the question is whether it was by a midnight baby-swap or by bearing false witness. What's your prior for each method, conditional on an attempt having been made? (Note that it could even be a conjunction: when the baby-swap fails, rush to the court and claim that the other woman attempted a baby-swap!)

Replies from: novalis
comment by novalis · 2013-02-25T21:45:33.989Z · LW(p) · GW(p)

It could also be an error; maybe B was so blinded by grief that she refused to believe that her own baby had died. (But again, not really the point; the point is that the article has nothing whatsoever to do with Bayes).

Replies from: Vaniver
comment by Vaniver · 2013-02-25T22:01:59.711Z · LW(p) · GW(p)

It could also be an error; maybe B was so blinded by grief that she refused to believe that her own baby had died.

I'm wrapping that into 'false witness,' since intentions aren't particularly important for determining the truth of the events.

the point is that the article has nothing whatsoever to do with Bayes

Would you care to expand on this point? The story obviously predates Bayes, and so doesn't use any of the terminology or explicitly show the process, but it seems to me like a good example of when and how Bayesian thinking would be useful. If I'm missing something, it would probably be rather useful to know.

Replies from: novalis
comment by novalis · 2013-02-25T22:26:59.860Z · LW(p) · GW(p)

the point is that the article has nothing whatsoever to do with Bayes

Would you care to expand on this point?

Bayes goes like this: P(H|E) = P(E|H)*P(H)/P(E). Here, Solomon considers P(E|H) (and P(E|~H)) -- but he doesn't consider P(H) at all. In short, he could easily be a frequentist, use the same method, and come to the same conclusion.
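
Put in odds form, the dependence on the prior is explicit. A minimal sketch of that identity (the function name and variables are just illustrative):

```python
# Odds form of Bayes: posterior odds = prior odds * likelihood ratio.
# The prior P(H) is a required input; the likelihoods alone don't pin down P(H|E).
def posterior_probability(prior_h, p_e_given_h, p_e_given_not_h):
    prior_odds = prior_h / (1 - prior_h)
    likelihood_ratio = p_e_given_h / p_e_given_not_h
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)
```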

Replies from: Vaniver
comment by Vaniver · 2013-02-26T15:42:06.814Z · LW(p) · GW(p)

I interpreted it as:

P(First Woman)=.5; P(Second Woman)=1-P(First Woman)=.5.

(This is a simplifying assumption, since those aren't actually exhaustive and mutually exclusive.)

He then decides to test Reaction, since he expects that P(Reaction|First Woman) and P(Reaction|~First Woman) are significantly different. The test works, and then he calculates P(First Woman|Reaction) easily.
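
To make the arithmetic concrete, here is a minimal sketch; the reaction likelihoods are numbers I'm assuming for illustration, not anything given in the story:

```python
# Minimal sketch of the update; the likelihood values are assumed for illustration.
prior_first = 0.5                # P(First Woman), per the simplifying assumption above
p_reaction_if_first = 0.99       # assumed: the true mother almost surely begs for the child's life
p_reaction_if_not_first = 0.05   # assumed: the false mother rarely reacts that way

# P(First Woman | Reaction) by Bayes' rule
posterior_first = (p_reaction_if_first * prior_first) / (
    p_reaction_if_first * prior_first
    + p_reaction_if_not_first * (1 - prior_first)
)
print(round(posterior_first, 3))  # ~0.952 with these assumed numbers
```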

In short, he could easily be a frequentist and use the same method and come to the same conclusion.

I don't see the algebra of Bayes as particularly important. Most people shouldn't trust themselves to do algebra correctly without a calculator when important things are on the line, and many practical applications require Bayes nets that are large enough that it is wise to seek computer assistance in navigating them.

To the extent that there is a difference between Bayesians and Frequentists, it's a disagreement about interpretations, not math. It's not like Frequentists disagree with P(H|E) = P(E|H)*P(H)/P(E), or have sworn not to use it!

Part of what I want to do with this post (and any other stories that people can find) is to highlight the qualitative side of Bayes. Someone who understands the algebra but doesn't notice when their life presents them with opportunities to use it is not getting as much out of Bayes as they could.

What I would call the three main components of Bayes are explicitly considering hypotheses, explicitly searching for tests with high likelihood ratios (rather than just high likelihoods), and explicitly incorporating prior information. I'm content with examples that show off only one of those components.

Replies from: novalis
comment by novalis · 2013-02-26T23:03:04.137Z · LW(p) · GW(p)

To the extent that there is a difference between Bayesians and Frequentists, it's a disagreement about interpretations, not math. It's not like Frequentists disagree with P(H|E) = P(E|H)*P(H)/P(E), or have sworn not to use it!

There are at least two meanings to the Bayesian/frequentist debate; one is a disagreement about methods (or at least a different set of tools), and the other is a disagreement about the deeper meaning of probability. This is an article about methods, not meaning. The major difference is that Bayesian methods make the prior explicit. The p-value is, perhaps, the quintessential frequentist statistic. Here, we can easily imagine Solomon publishing his paper in the Ancient Journal of Statistical Law and citing a p-value < 0.001 -- but without knowing the actual P(defendant), we don't know how many times he made the correct decision (in terms of the facts; as noted in another thread, from a child-welfare perspective, the decision was probably correct regardless).
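
As a toy illustration of that last point (all numbers assumed): take a test strong enough that a false claimant passes it less than 0.1% of the time, i.e. the p < 0.001 of the imagined paper. How often the ruling is factually correct still depends on the base rate, which the test alone doesn't supply:

```python
# Toy numbers only: the same strong test gives different posteriors under
# different base rates for "the accused actually swapped the baby".
p_pass_if_guilty = 0.99      # assumed sensitivity of the sword test
p_pass_if_innocent = 0.001   # assumed false-positive rate (the "p < 0.001")

for base_rate in (0.5, 0.01):  # half of such accusations true vs. one in a hundred
    posterior = (p_pass_if_guilty * base_rate) / (
        p_pass_if_guilty * base_rate
        + p_pass_if_innocent * (1 - base_rate)
    )
    print(base_rate, round(posterior, 3))
# 0.5 -> 0.999, 0.01 -> 0.909: same test result, different reliability.
```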

What I would call the three main components of Bayes are explicitly considering hypotheses, explicitly searching for tests with high likelihood ratios, rather than just high likelihoods, and explicitly incorporating prior information. I'm content with examples that show off only one of those components.

Frequentists also care about high likelihood ratios.

comment by Bugmaster · 2013-02-25T20:08:52.353Z · LW(p) · GW(p)

The prior here (A has stolen B's baby) is actually quite low. It just doesn't happen very often.

I know I'm nitpicking, but is the prior really that low in Solomon's case? In our modern times, things like that almost never happen, but Solomon was living in Old Testament times (metaphorically speaking, seeing as the Solomon we're talking about here is just a character in a book). And the Old Testament makes few legal distinctions between children and other kinds of property. Stealing them would still be a big deal, but hardly improbable.

Replies from: novalis
comment by novalis · 2013-02-25T21:16:47.044Z · LW(p) · GW(p)

OK, then maybe the prior is high. So what? The point is that Solomon didn't consider it. I'm not saying his test was useless or his decision was wrong. I'm saying that the word "Bayes" is being used as an applause light rather than for its meaning!

Replies from: Bugmaster
comment by Bugmaster · 2013-02-25T21:22:48.855Z · LW(p) · GW(p)

Yeah, that's why I said I was merely nitpicking.

comment by tRuth · 2013-02-25T17:22:55.575Z · LW(p) · GW(p)

Bayes is what the territory feels like from the inside.