A rational unfalsifiable belief

post by Arielgenesis · 2016-07-25T02:15:07.807Z · LW · GW · Legacy · 47 comments

I'm trying to argue that it is possible for someone rational to hold a belief that is unfalsifiable and remain rational.

There are three people in a room: Adam, Cain, and Able. Able was murdered. Adam and Cain were taken into police custody. The investigation was thorough but remained inconclusive; the technology was not advanced enough to produce conclusive evidence. The arguments amounted to "you did it", "no, you did it".

Adam has a wife, her name is Eve. Eve believed that Adam was innocent. She believed so because she had known Adam very well, and the Adam she knew would not commit murder. She used Adam's character and her personal relationship with him as evidence.

Cain, trying to defend himself, asked Eve, "What would it take for you to change your belief?" She replied, "Show me the video recording, then I would believe." But there was no video recording. Then she said, "Show me any other evidence that is as strong as a video recording." But there was no such evidence either.

Cain pointed out, "The evidence you use for your belief is your personal relationship with him and his character. So if there were evidence against his character, would you change your mind?"

After some thinking and reflection, she finally said, "Yes, if it could be proven that I have been deceived all these years, then I would believe otherwise."

All of Adam's artifacts were gathered, collected and analysed. The search was so thorough that no new evidence about what Adam had done before being taken into custody could ever be presented in the future. Everything pointed to Adam's good character.

Eve was happy. Cain was not. Then he took it one step further. He proposed, "Eve, people can change. If Adam changed in the future into a man of bad character, would you be convinced that he could have been the murderer?"

"Yes, if Adam changed, then I would believe that it is possible for Adam to be the murderer," Eve said.

Unfortunately, Adam died the next day. Cain said to Eve, "How do you propose that your belief in Adam's innocence be falsified now?"

"It cannot be falsified now," Eve replied.

"Then you must be irrational."

 

Comments sorted by top scores.

comment by MrMind · 2016-07-25T07:22:08.324Z · LW(p) · GW(p)

The fact that Adam committed a crime is not unfalsifiable, it's simply unfalsified. There's just not enough probability weight for her to change her mind; she even admitted that, given strong enough evidence, she would change her mind.
Eve is being rational in retaining her current prior in the lack of evidence: it's not that she is assigning 0 to the probability of Adam being the killer, it's just that in the face of uncertainty there's no reason to update.
On the other hand, I don't see how you could do this to uphold the belief in God: absence of evidence is evidence of absence.

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-25T17:41:39.742Z · LW(p) · GW(p)

not unfalsifiable, it's simply unfalsified

I am trying to make a situation where a belief is (1) unfalsified, (2) unfalsifiable, and (3) has a lack of evidence. How should I change the story so that all three conditions are fulfilled? And in that case, would Eve then be irrational?

Replies from: MrMind, Lumifer
comment by MrMind · 2016-07-26T07:28:24.370Z · LW(p) · GW(p)

In a Bayesian framework, the one and only way to make a belief unfalsifiable is to put its probability at 1.
Indeed, Bayesian updating is at root about logic, not physics: even if you have no technological means whatsoever to recover evidence, and never will, a theory is still falsifiable as long as it is logically possible to falsify it.
On the other hand, once a belief acquires a probability of 1, it is set to true in the model, and no amount of later evidence can change this status.
Unfortunately for your example, this means that unfalsifiability and lack of evidence, even an extreme one, are orthogonal concerns.

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-27T03:10:34.053Z · LW(p) · GW(p)

unfalsifiability and lack of evidence, even an extreme one, are orthogonal concerns.

That is a very novel concept for me. I understand what you are trying to say, but I am struggling to see if it is true.

Can you give me a few examples where something is "physically unfalsifiable" but "logically falsifiable", and where the distinction is of great import?

Replies from: MrMind
comment by MrMind · 2016-07-27T07:12:46.274Z · LW(p) · GW(p)

I understand what you are trying to say, but I am struggling to see if it is true.

It's a straightforward corollary of Bayes' theorem: if P(A) = 1 (or P(A) = 0), no amount of later updating can change this value, no matter what strong contrary evidence is presented.
This is indeed a simple model of a hardcore theist: he has already set P(god(s)) to 1, so he is willing to dig himself a hole of unlimited depth to account for the evidence that opposes the existence of a divinity.

As for an example, Russell's teapot is a good choice: a teapot orbiting a distant star in another galaxy. Is it falsifiable? With our current and foreseeable technology, probably not.
Is it logically falsifiable? Yes! Even if you assign a very low probability to its existence, an alien species could just transport us there and show us that there is such a teapot.
On the other hand, as I mentioned earlier, if we had put P(teapot) = 0, then we would never accept the teapot's existence, even in the face of space-travelling aliens showing us that the thing is actually there.
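This corollary of Bayes' theorem is easy to check numerically. A minimal Python sketch (the likelihoods are illustrative, not from the thread): a prior strictly between 0 and 1 moves under strong evidence, while a prior of exactly 1 or 0 is frozen forever.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from P(H), P(E | H) and P(E | ~H) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# E = "aliens fly us out and the spot is empty": 20x likelier if no teapot.
print(bayes_update(0.5, 0.05, 1.0))  # moderate prior drops to ~0.048
print(bayes_update(1.0, 0.05, 1.0))  # prior of 1 stays exactly 1.0
print(bayes_update(0.0, 1.0, 0.05))  # prior of 0 stays exactly 0.0
```

The same arithmetic is why "logically falsifiable" is the operative notion: as long as the prior is neither 0 nor 1, some conceivable observation shifts it.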

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-28T06:08:09.723Z · LW(p) · GW(p)

I see... I have been using unfalsifiability and lack of evidence as synonyms. The title should have read: a rational belief without evidence.

Thank You.

Replies from: MrMind
comment by MrMind · 2016-08-04T08:22:19.567Z · LW(p) · GW(p)

That's a difficult one to achieve.

Rationality is about how to process evidence to change one's priors; it has very little to say about what beliefs you start with, besides the fact that they must be expressible in classical logic.
To complicate the matter, Bayesian evidence works in such a way that if you classify something as evidence, then its absence will lower the probability of the assertion it supports.

To have a belief that is both rational and unsupported, you must start with a model that is at once compatible with the background information, whose supporting evidence is difficult to obtain, and which is a better fit than competing models, which might even have easier-to-obtain evidence.
A tough challenge!
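The point that Bayesian evidence cuts both ways can be made concrete. In the sketch below (Python, illustrative numbers), E is classified as evidence for H because P(E | H) > P(E | ~H); the same likelihoods then force the absence of E to lower P(H):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h, observed):
    """P(H | E) when observed is True, P(H | not-E) when it is False."""
    if observed:
        lh, lnh = p_e_given_h, p_e_given_not_h
    else:  # likelihoods of E's absence
        lh, lnh = 1 - p_e_given_h, 1 - p_e_given_not_h
    return lh * prior / (lh * prior + lnh * (1 - prior))

prior = 0.5
up = posterior(prior, 0.8, 0.3, observed=True)     # 8/11, about 0.727
down = posterior(prior, 0.8, 0.3, observed=False)  # 2/9, about 0.222
assert up > prior > down  # seeing E raises P(H); not seeing E lowers it
```

This is the formal content of "absence of evidence is evidence of absence": the only way the absence of E can leave P(H) untouched is if E was never evidence for H in the first place.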

comment by Lumifer · 2016-07-25T19:29:49.732Z · LW(p) · GW(p)

I am trying to make a situation where a belief is (1) unfalsified, (2) unfalsifiable, and (3) has a lack of evidence.

Would Russell's teapot qualify? If you want to make it unfalsifiable, you can move it to another galaxy and specify that the statement is true only in a narrow time frame, say, for the next five minutes.

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-27T02:36:04.272Z · LW(p) · GW(p)

Would Russell's teapot qualify

Yes, exactly! The issue with that is its irrelevance. It is of no great import to anyone (except the teapot church, which I think is a bad satire of religion; the amount of suspension of disbelief the narrative requires is beyond me). On the other hand, Adam's innocence is relevant, meaningful and important to Eve (I hope this is obvious from the narrative).

Moreover, since in the eyes of many laws people are assumed to be innocent until proven guilty, the burden-of-proof argument from Russell's teapot is not applicable here.

In this twist on Russell's teapot, I think it is rational for Eve to maintain her belief, and that her belief is relevant and the burden of proof is not upon her. By extension, this argument could be used by theists. But I know that my reasoning is not impeccable, so here I am at Less Wrong.

comment by Liam Goddard · 2019-10-25T11:26:48.388Z · LW(p) · GW(p)

I definitely think that “Adam did not kill him” would be an accurate and rational belief, but there still would technically be some evidence that could convince her otherwise (such as by using a time machine). Therefore the probability she holds for that belief should not be quite 100%, though very close. But another important point is that she COULD have been convinced otherwise: had there been different evidence, she would have thought Adam to be the killer. The most important thing here is that beliefs are probabilistic. It is quite possible for a perfect Bayesian to believe something and to think it possible that they could encounter evidence which persuades them otherwise. Eve should hold a high, but not 100%, probability that Adam was innocent. I don’t see how any of this could apply to theism, though, since theism isn’t founded on much evidence.

comment by ThoughtSpeed · 2016-07-30T18:18:12.567Z · LW(p) · GW(p)

I think the names you chose were quite distracting from the problem, at least for me. See paragraphs 4-6 in this article for why: http://lesswrong.com/lw/gw/politics_is_the_mindkiller/

comment by Pimgd · 2016-07-25T10:26:39.465Z · LW(p) · GW(p)

Eve is irrational. But that's because she has suddenly forgotten her earlier statement: "show me the video recording, then I would believe". If there is evidence, then the belief could be falsified; that's what Eve should have said.

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-25T17:29:30.539Z · LW(p) · GW(p)

The idea of the story is that there is no evidence. Because I think that in real life there are sometimes important and relevant things with no evidence. In this case, Adam's innocence is important and relevant to Eve (for emotional and social reasons, I presume), but there is no, and there will never be, evidence. Given that, saying "If there is evidence, then the belief could be falsified" is a kind of cheating, because producing new evidence is not possible anymore.

Replies from: Pimgd, g_pepper, buybuydandavis
comment by Pimgd · 2016-07-26T09:36:07.757Z · LW(p) · GW(p)

because producing new evidence is not possible anymore.

Okay...

So, say it turns out that, well, Eve is irrational. Somehow.

Now what? Do we go "neener-neener" at her? What's the point? What's the use that you could get out of labeling this behavior irrational?

Suppose Adam dies and is cryo-frozen. During Eve's life, there will be no resuscitation of Adam. Sometime afterward, however, Omega will arrive, deem the problem interesting and simulate Adam via really really really advanced technology.

Turns out he didn't do it.

Is she now rational because, well, turns out she was right after all? Well, no, because getting the right answer for the wrong reasons is not the rational way to go about things (in general, it might help in specific cases if you need to get the answer right but don't care how).

....

Actually, let me just skip over a few paragraphs I was going to write and skip to the end.

You cannot have 100% confidence, because then your belief is set in stone and cannot change. You can have a googolplex nines if you want, but not 100% confidence.

Fallacy of argument from probability (if it can happen then it must happen) aside: how is it rational to discard a belief you are holding on shaky evidence just because you think, with near-absolute certainty, that no more evidence will arrive, ever? What will you do when there is more evidence? (Hint: meeting Adam's mother at the funeral and hearing childhood stories about what a nice kid he was is more evidence for his character, albeit very weak evidence; and so are studies that show that certain demographics of the time period Adam lived in had certain characteristics.) You gotta update! (I don't think the fallacy I mentioned applies; if it does, we can fix it with big numbers: if you are to hold this belief everywhere, then the probabilities go up as it turns from "in this situation" to "in at least one of all these situations".)

So tossing a belief aside because you think there will be no more evidence is the wrong action, to me. You can park a belief: take no action, maintain the status quo. No change in input is no change in output. But you do NOT clear the belief.

Let me put up a strawman (I'll leave it to others to see if there's something harder underneath): if you hold this action, "I think there will be no more evidence, and I am not very confident either way, so I will discard the output", to be the rightful one, how do you prevent yourself from getting boiled like a frog in a pan? (Yes, that's a false story; still, I intend the metaphorical meaning: how do you stop yourself from discarding every bit of evidence that comes your way, because you "know" there to be no more evidence?)

In my opinion, doing as you say weakens or even destroys the gradual "update" mechanism. This leads to less effective beliefs, and thus is irrational.


Were we to now look at the three questions, I'd answer:

Again, Eve is irrational because she says it cannot be falsified. If we let Eve say "I still think he didn't do it because of his character, and I will keep believing this until I see evidence to the contrary - and if such evidence doesn't exist, I will keep believing this forever" - then yes, Eve is rational.

The second question: yes, via this specific example. Here it can, thus it can.

Yes, it can be extended to belief in God. Provided we restrict "God" to a REALLY TINY thing. As in, gee, a couple thousand years ago, something truly fantastic happened - it was God! I saw it with my own eyes! You can keep believing there was, at that point in time, an entity causing this fantastic thing. Until you get other evidence, which may never happen. What you CANNOT do is say, "hey, maybe this 'God' that caused this one fantastic thing is also responsible for creating the universe and making my neighbor win the lottery and my aunt get cancer and ..." That's unloading a huge complexity on an earlier belief without paying appropriate penalties.

You don't only need evidence that the fantastical events were caused, you also need evidence they were caused by the same thing if you wish to attribute them to that same thing.

Replies from: None, Arielgenesis
comment by [deleted] · 2016-07-28T11:36:56.386Z · LW(p) · GW(p)

You don't only need evidence that the fantastical events were caused, you also need evidence they were caused by the same thing if you wish to attribute them to that same thing.

Assume I observe X, Y, Z and form three hypotheses

  • A: All of X, Y, Z had causes
  • B: All of X, Y, Z had different causes
  • C: All of X, Y, Z had the same cause

A obviously has the highest probability, since it includes B and C as special cases. However, which one of B and C do you think should get a complexity penalty over the other?

In your story:

Yes, it can be extended to belief in God. Provided we restrict "God" to a REALLY TINY thing. As in, gee, a couple thousand years ago, something truly fantastic happened - it was God! I saw it with my own eyes! You can keep believing there was, at that point in time, an entity causing this fantastic thing. Until you get other evidence, which may never happen. What you CANNOT do is say, "hey, maybe this 'God' that caused this one fantastic thing is also responsible for creating the universe and making my neighbor win the lottery and my aunt get cancer and ..." That's unloading a huge complexity on an earlier belief without paying appropriate penalties.

The relevant comparison is: Given that God did X, what is the probability that God also did Y and Z, verses God did not do those things.

P(God did Y, Z | God did X) = P(God did X,Y, Z) / P(God did X)

v.s.

P(God did not do Y, Z | God did X) = P(God did X, and something other than God did Y, Z) / P(God did X)

I am uncertain about how to correctly apply the complexity penalty, but I do believe that the multi-explanation model "God did X, and something other than God did Y, Z" should get a complexity penalty over the single-explanation model "God did X, Y, Z".

The belief "God caused some tiny thing a couple of thousand years ago" should correlate with the belief "God did this big thing right now". This is why I firmly believe that God did not cause some tiny thing a couple of thousand years ago.
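The correlation claimed here falls out of any joint model in which the events share a candidate cause. A toy sketch in Python (the prior g and the per-event probability q are made-up illustrative numbers):

```python
# Toy model: God exists with prior g; if God exists, each observed event
# (X, Y) was caused by God independently with probability q; if not, neither.
g, q = 0.01, 0.5

def joint(god_exists, god_did_x, god_did_y):
    """Probability of one fully specified world in the toy model."""
    if not god_exists:
        return (1 - g) if not (god_did_x or god_did_y) else 0.0
    px = q if god_did_x else 1 - q
    py = q if god_did_y else 1 - q
    return g * px * py

p_y = sum(joint(god, dx, True) for god in (False, True) for dx in (False, True))
p_x = sum(joint(god, True, dy) for god in (False, True) for dy in (False, True))
p_xy = joint(True, True, True)

# P(God did Y | God did X) = 0.5, far above the prior P(God did Y) = 0.005:
assert p_xy / p_x > p_y
```

The flip side is exactly the closing move above: low credence that God did the big thing now is, through the same correlation, evidence against God having done the tiny thing then.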

Replies from: Pimgd
comment by Pimgd · 2016-07-28T15:55:57.827Z · LW(p) · GW(p)

Phrased like this, I see what you're getting at; but in my mind, I was describing extraordinary, but different events. Say, miracle cures and miracle plagues or whatever. A whole bunch of locusts and your aunt being cured of cancer most likely have different causes. In that case, you first have to postulate an entity which can summon a bunch of locusts. The actual summoning need not be magical or spontaneous in nature, only their appearance. So keeping a bunch of locusts hidden away whilst feeding them (somehow), before releasing them like a plague, would do.

This SAME entity then also needs the ability to cure cancer. To me, adding abilities like this incurs complexity penalties on a pretty big scale. Especially when you start adding other stuff and scaling this influence over time (the same entity being responsible for actions many thousands of years ago and for events now).

Replies from: Lumifer
comment by Lumifer · 2016-07-28T17:33:24.756Z · LW(p) · GW(p)

This SAME entity then also needs the ability to cure cancer. To me, adding abilities like this incurs complexity penalties on a pretty big scale.

This says that if you are, say, an Inca ruler and you hear about Spanish conquistadors, the fact that they can ride weird beasts AND shoot fire out of metal sticks AND do a lot of other supernatural-looking stuff implies that you should disbelieve their existence -- probably not a good idea.

In general terms, the complexity penalties you are talking about are justified only if these different abilities are unrelated. But if, instead, all of them have a common cause (e.g. massive technological superiority), the penalties no longer apply.

Replies from: Pimgd
comment by Pimgd · 2016-07-29T07:20:10.908Z · LW(p) · GW(p)

I see.

comment by Arielgenesis · 2016-07-27T03:24:57.176Z · LW(p) · GW(p)

Thank you for the reply.

My personal answer to the 3 questions is 3 yes. But I am not confident of my own reasoning, that's why I'm here, looking for confirmation. So, thank you for the confirmation.

If we let Eve say "I still think he didn't do it because of his character, and I will keep believing this until I see evidence to the contrary - and if such evidence doesn't exist, I will keep believing this forever" - then yes, Eve is rational

That is exactly what I meant her to say. I just thought I could simplify it, but apparently I lost important points along the way.

Yes, it can be extended to belief in God. Provided we restrict "God" to a REALLY TINY thing.

I am a theist, but I am appalled by the lack of rational apologetics, the abundance of poor ones, and the disinterest in developing good ones. So here I am, making baby steps.

Replies from: Pimgd
comment by Pimgd · 2016-07-27T08:00:01.667Z · LW(p) · GW(p)

The point is that these days... and I think in the days before that, AND the days before that... ... Okay, so basically since forever, "God" has been such a loaded concept...

If you ask people where God is, some of them will tell you that "God is in everything and anything" (or something to that tune). Now, these people don't have to be right (or wrong!) but that's ... a rather broad definition to me.

One can imagine God as an entity. Like, I dunno, a space alien from an alternative universe (don't ask how that universe was created; I don't know, this is a story and not an explanation). With super advanced technology. So if we then ask "did God create the world" and we (somehow...?) went back in time and saw that, hey, this space alien was somewhere else at the time and, no, the planet formed via other means, then you'd have a definitive answer to that question.

But there are other definitions. God are the mechanics of the universe. So, what you'd call the laws of physics, no, that's just God. That's how God keeps everything going. Why, then, yes, God did create the world! But only because current scientific understanding says "we think physics did it" and then you say "Physics is God".

Anyway, if you want a sane, useful, rational answer to your third question, then you must define God. I personally treated God as one entity in my earlier answer, which leads to the problem of having to connect events to the same entity (which, when you know very little about that entity, is pretty hard). (If you didn't connect events to that same entity, then something else must have caused them, in which case you have multiple probable causes for fantastic events, and you might as well call those causes Gods individually?)


I don't quite grasp what you mean with the last bit...

I am a theist, but I am appalled by the lack of rational apologetics, the abundance of poor ones, and the disinterest in developing good ones. So here I am, making baby steps.

Could you clarify?

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-28T06:04:58.688Z · LW(p) · GW(p)

God is a messy concept. As a theist, I lean more towards Calvinistic Christianity. Defining God is very problematic because, by definition, it is something which, in its fullness, is beyond human comprehension.

Could you clarify?

Since ancient times, there have been many arguments for and against God (and its many versions). Lately, the arguments against God have developed to a very sophisticated extent, and theists are lagging far behind, with no apparent interest in catching up.

Replies from: None, Pimgd
comment by [deleted] · 2016-07-28T12:00:18.324Z · LW(p) · GW(p)

It is a very interesting quest you have taken on. As an atheist, I am always interested in hearing good arguments in favour of God.

Why don't you start by answering: Why are you a theist? You have looked at all the evidence available to you, and arrived at a posterior where P(God exists) >> P(God does not exist). Explain your reasoning to us. If your reasoning is good enough for you, why would it not be good enough for me?

Replies from: entirelyuseless, Arielgenesis
comment by entirelyuseless · 2016-07-29T15:08:04.553Z · LW(p) · GW(p)

"Explain your reasoning to us. If your reasoning is good enough for you, why would it not be good enough for me?"

Christians will sometimes ask me this, trying to get me to explain why I no longer think that Christianity is true.

And it has a very good answer. There really are good reasons why my reasoning is good enough for me, and would not be good enough for them. Basically, they want me to give a few short arguments which they will, quite rightly, dismiss as unconvincing. I fully understand why they dismiss them as unconvincing. It is because "a few short arguments," no matter what they are, will in fact be unconvincing. I understand that, because I would have dismissed them as unconvincing myself in the past, and I fully understand why I would have done that, and it would have been quite reasonable.

But my reasoning is good enough for me, because I have thought about these things for years, considering not just a few short arguments, but many, many many arguments, and replies to replies, and replies to replies to replies, and so on. So I understand how things stand overall, and this "how things stand overall" cannot be communicated in a few short arguments.

In that way, to the degree that "If your reasoning is good enough..." is rhetorical, and implies that if you are not convinced, they should not be convinced either, it is a fallacy.

comment by Arielgenesis · 2016-07-29T03:17:11.889Z · LW(p) · GW(p)

Why are you a theist?

This is very poorly formulated. But there are two foundations to my logic. The first is that I am leaning towards presuppositionalism (https://en.wikipedia.org/wiki/Presuppositional_apologetics). The only way to build a 'map', first of all, is to take a list of presuppositions for granted. I am also interested in that (see my post on http://lesswrong.com/lw/nsm/open_thread_jul_25_jul_31_2016/). The idea is that one school could have a non-contradicting collection of self-referential statements that covers epistemology and axiology, and another school could have another, distinct collection. And due to the expense of computation and lack of information, both maps are equally good at predicting what should and should not happen ("and also what is actually happening and why", which is what scientists, not rationalists, care about).

The other part, the basis of this post, is personal experience. All of my personal life experience, up until this point, has "arrived at a posterior where P(God exists) >> P(God does not exist)" in exactly the same way Eve arrived at hers in the OP.

Now, I do realize that this is very crude and not at all solid, not even presentable. But since you asked, there you go.

comment by Pimgd · 2016-07-28T10:51:35.428Z · LW(p) · GW(p)

Which is why I use labels such as "an entity" which may or may not be "omniscient" or "omnipotent". You can describe God in terms of labels. If I had a car and had to describe it, I could say parts of it were made from leather, parts were made from metal, parts were made from rubber; looking at it gives a grey sensation, but there is also red and white and black...

If God really can do anything and everything, then everything is evidence both for and against God, and you have no reason to update any of the beliefs surrounding God. Which is, once again, why you don't assign 100% probability to things. That includes statements of the form "God caused this".
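This point has a crisp Bayesian form: a hypothesis that assigns the same likelihood to every possible observation can never be updated, in either direction. A minimal Python sketch (numbers illustrative):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update of P(H) upon observing E."""
    return p_e_given_h * prior / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior))

prior = 0.3
# "God can explain anything": E is equally likely under H and under ~H,
# so the likelihood ratio is 1 and the posterior equals the prior.
assert abs(update(prior, 0.7, 0.7) - prior) < 1e-12
# A hypothesis that actually constrains observations does move:
assert update(prior, 0.9, 0.1) > prior
```

An all-explaining hypothesis is thus evidentially inert, which is the Bayesian restatement of a belief that "pays no rent".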

comment by g_pepper · 2016-07-25T18:54:14.114Z · LW(p) · GW(p)

The idea of the story is that there is no evidence.

But in the OP, you said:

she had known Adam very well, and the Adam she knew would not commit murder. She used Adam's character and her personal relationship with him as evidence.

It seems to me that Adam's character as observed by Eve is evidence. Not irrefutable evidence, but evidence all the same. It seems to me that, barring evidence of Adam's guilt or evidence that Adam's character had recently changed, Eve is rational for believing Adam to be innocent on the basis of that evidence.

Cain provided no such evidence, so Eve is rational in her belief.

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-27T02:21:52.952Z · LW(p) · GW(p)

is evidence. Not irrefutable evidence

Yes, that's exactly what I had in mind.

The idea of the story is that there is no evidence.

What I meant was that there is no possibility of new evidence.

I also think that Eve is rational. But I'm not sure if I am correct. Thank you for the confirmation.

comment by buybuydandavis · 2016-07-27T12:30:53.167Z · LW(p) · GW(p)

because producing new evidence is not possible anymore.

How do you claim to know that?

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-28T05:56:43.917Z · LW(p) · GW(p)

Well... that's part of the story. I'm sure there is a term for it, but I don't know what it is. Something that the story gives and you accept as fact.

Replies from: buybuydandavis
comment by buybuydandavis · 2016-07-28T12:06:52.566Z · LW(p) · GW(p)

That kind of knowledge is not part of the human condition. By making it a presupposition of your story, you render your hypothetical inapplicable to actual human life.

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-29T03:25:50.176Z · LW(p) · GW(p)

I will have to copy paste my answer to your other comment:

Yes, I could. I chose not to. It is a balance between suspension of disbelief and narrative simplicity. Moreover, I am not sure how much credence I should give to the assumption that recent cosmological theories will not be updated in the future, making my narrative setup obsolete. I also do not want to burden my readers with a required familiarity with cosmological theories.

Am I not allowed to use such a narrative technique to simplify my story and deliver my point? Yes, I know it is out of touch with the human condition, but I was hoping it would not strain my audience's suspension of disbelief.

Replies from: buybuydandavis
comment by buybuydandavis · 2016-07-29T21:23:14.063Z · LW(p) · GW(p)

The problem is that the unrealistic simplification acts precisely on the factor you're trying to analyze: falsifiability. If you relax the unrealistic assumption, the point you're trying to make about falsifiability no longer holds.

comment by Dagon · 2016-07-25T03:11:36.007Z · LW(p) · GW(p)

This belief pays no rent. It's unfalsifiable precisely because it's irrelevant - there is no prediction that Eve can make which would give different outcomes based on Adam's past behavior. The belief just doesn't matter.

Separately, if she assigns 0.0 probability to anything, she's probably not actually as rational as she claims.

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-25T04:01:22.093Z · LW(p) · GW(p)

What if we were to take one step back, and Adam didn't die? Eve claims that her belief pays rent because it could be falsified if Adam changed in character. In this scenario, I suppose you would agree that Eve is still rational.

Now, I cannot formulate my arguments properly at the moment, but I think it is weird that Adam's death makes Eve's belief irrational, as per:

So I do not believe a spaceship blips out of existence when it crosses the cosmological horizon of our expanding universe, even though the spaceship's existence has no further experimental consequences for me.

http://lesswrong.com/lw/ss/no_logical_positivist_i/

Replies from: Dagon
comment by Dagon · 2016-07-25T16:08:52.017Z · LW(p) · GW(p)

I think you're focusing too much on the label "rational", and not enough on the actual effect of beliefs.

I'll admit I'm closer to logical positivism than Eliezer is, but even if you make the argument (which you haven't) that the model of the universe is simpler (in the Kolmogorov complexity sense) if you believe Adam killed Able, it's still not important. Unless you're making predictions and taking actions based on a belief (or on beliefs influenced by that belief), it's neither rational nor irrational; it's irrelevant.

Now consider a somewhat more complicated example, where Eve has to judge Cain's likelihood of murdering her, and thinks the circumstances of the locked room in the past are relevant to her future: there are definite predictions she should be making. Her confidence in Adam's innocence implies Cain's guilt, and she should be concerned.

It's still the case that she cannot possibly have enough evidence for her confidence to be 1.00.

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-25T17:22:56.704Z · LW(p) · GW(p)

Thank you, that was a very nice extension to the story. I should have included that scenario to make her belief relevant. I agree with you: assigning 100% probability is irrational in her case. But if she is not rationally literate enough to express herself in a fuzzy, non-binary way, I think she would maintain rationality by saying, "Ceteris paribus, I prefer not to be locked in the same room as Cain, because I believe he is a murderer, because I believe Adam was innocent" (ignoring ad hominem).

I was under the impression that the gold standard for rationality is falsifiability. However, I now understand that Eve is rational despite unfalsifiability, because she remained Bayesian.

Replies from: Dagon
comment by Dagon · 2016-07-25T21:20:11.408Z · LW(p) · GW(p)

I'm still deeply troubled by the focus on labels "rational" and now "Bayesian", rather than "winning", "predicting", or "correct".

For epistemic rationality, focus on truth rather than rationality: do these beliefs map to actual contingent states of the universe? Especially for human-granularity beliefs, Bayesian reasoning is really difficult, because it's unlikely for you to know your priors in any precise way.

For instrumental rationality, focus on decisions: are the actions I'm taking based on these beliefs likely to improve my future experiences?

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-27T03:07:05.251Z · LW(p) · GW(p)

human-granularity

I don't understand what it means, even after a Google search, so please enlighten me.

For epistemic rationality

I think so. I think she has exhausted all the possible avenues to reach the truth, so she is epistemically rational. Do you agree?

For instrumental rationality

Now this is confusing to me as well. Let us forget about the extension for the moment and focus solely on the narrative as presented in the OP. I am not familiar with how value and rationality go together, but I think there is nothing wrong if her value is "Adam's innocence," and that it is inherently valuable, an end in itself. Am I making any mistake in my train of thought?

Replies from: Dagon
comment by Dagon · 2016-07-27T14:10:11.078Z · LW(p) · GW(p)

By human-granularity, I mean beliefs about macro states that can be analyzed and manipulated by human thought and expressed in reasonable amounts (say, less than a few hundred pages of text) of human language. As contrasted with pure analytic beliefs about the state of the universe expressed numerically.

For instrumental rationality, what goals are furthered by her knowing the truth of this fact? Presuming that if Adam is innocent, she wants to believe that Adam is innocent and if Adam is guilty, she wants to believe Adam is guilty, why does she want to be correct (beyond "I like being right")? What decision will she make based on it?

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-28T06:14:27.910Z · LW(p) · GW(p)

why does she want to be correct (beyond "I like being right")?

I think that's it: "I like knowing that the person I love is innocent," which implies that Adam is not lying to her, and "I like being in a healthy, fulfilling, and genuine marital relationship."

Replies from: Dagon
comment by Dagon · 2016-07-28T14:05:08.590Z · LW(p) · GW(p)

That's a reason to want him to be innocent, not a reason to want to know the truth. What's her motivation for the necessary second part of the litany: "if Adam is guilty, I want to believe that Adam is guilty"?

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-29T03:19:33.336Z · LW(p) · GW(p)

genuine marital relationship

"If Adam is guilty, then the relationship was not genuine." Am I on the right track? or did I misunderstood your question?

Replies from: Dagon
comment by Dagon · 2016-07-29T16:03:30.613Z · LW(p) · GW(p)

That just moves it up a level. If she is rational, she'll say "if our relationship was genuine, I want to believe it was genuine. If our relationship was not genuine, I want to believe it was not genuine".

The OP and most of the discussion has missed the fundamental premise of rationality: truth-seeking. The question is not "is Eve rational", but "is Eve's belief (including acknowledgement of uncertainty) correct"?

comment by ike · 2016-07-25T02:42:10.268Z · LW(p) · GW(p)

Can believing an unfalsifiable belief be rational?

Sure, see http://lesswrong.com/lw/ss/no_logical_positivist_i/

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-25T03:57:41.509Z · LW(p) · GW(p)

Thank you for the link. I just read the article. It is exactly what I had in mind, but my mind works better with narrative.

What I am wondering is if a theist could use this as a foundation of their arguments and remain rational.

comment by buybuydandavis · 2016-07-27T12:29:03.516Z · LW(p) · GW(p)

The search was so thorough, there could never be any new evidence about what Adam had done before the custody that could be presented in the future.

Belief in absolute, dogmatic claims on the lack of evidentiary value of possible future observations leads to unfalsifiable conclusions.

Eve is irrational to conclude that her inability to conceive of a possible future observation to change her mind means that it is impossible for such an observation to happen.

As an aside, I believe you could make a more sciency argument with recent cosmological theories. There is something about a future state of the universe where all our current evidence for the Big Bang would cease to be observable, and all we could observe would be our own galaxy.

Replies from: Arielgenesis
comment by Arielgenesis · 2016-07-28T04:45:23.046Z · LW(p) · GW(p)

you can make a more sciency argument with recent cosmological theories

Yes I could; I chose not to. It is a balance between suspension of disbelief and narrative simplicity. Moreover, I am not sure how much credence I should put in the claim that recent cosmological theories will not be updated in the future, making my narrative setup obsolete. I also do not want to burden my readers with needing familiarity with cosmological theories.