Falsifiable and Non-Falsifiable Ideas
post by shaih · 2013-02-19T02:24:41.584Z · LW · GW · Legacy
I have been talking to some people in my dorm (a few specific people I thought would benefit from and appreciate it) and teaching them rationality. Thinking about which skills should be taught first led me to ask which skill is most important to me as a rationalist.
I decided to start with the question “What does it mean to be able to test something with an experiment?”, which could equally be phrased as “What does it mean to be falsifiable?”
To illustrate my point I brought up Carl Sagan’s thought experiment of the dragon in the garage, which goes as follows:
Carl: There is a dragon in my garage.
Me: I thought dragons only existed in legends; I want to see for myself.
Carl: Sure, follow me and have a look.
Me: I don’t see a dragon in there.
Carl: My dragon is invisible.
Me: Let me throw some flour in, so I can see where the dragon is by the disruption of the flour.
Carl: My dragon is incorporeal.
And so on.
The answer I was trying to elicit was along these lines: if something can be tested by an experiment, then it must have at least one effect that differs depending on whether it is true or false. Conversely, if something has at least one effect that differs depending on whether it is true or false, then I can, at least in theory, test it with an experiment.
This led me to the statement:
If something cannot, at least in theory, be tested by experiment, then it has no effect on the world and lacks meaning from a truth standpoint, and therefore from a rational standpoint.
Anthony (the person I was talking to at the time) opened his counterargument with the claim that an object in a thought experiment cannot be tested for but still has meaning.
So I revised my statement: any object that, if brought into the real world, could not be tested for has no meaning. This rests on the assumption that if an object could not be tested for in the real world, it also has no effect on anything in the thought experiment; i.e., the story with the dragon would have gone the same way regardless of its truth value if it had taken place in the real world.
The discussion then turned to whether it could be rational to hold a belief that could not, even in theory, be tested. It became interesting when Anthony argued that if believing in a dragon in your garage gave you happiness, and the world would be the same either way apart from that happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.
I responded that truth trumps happiness, and that believing in the dragon would force you to hold a false belief, which is not worth the amount of happiness received by believing it. Further, I argued that it would in fact be a false belief because P(world) > P(world)·P(impermeable invisible dragon), which is a simple Occam’s razor argument.
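A quick numeric sketch of that conjunction argument may help; the specific probabilities below are my own illustrative assumptions, not anything from the discussion:

```python
# Occam's razor via the conjunction rule: P(A and B) = P(A) * P(B | A) <= P(A).
# The numbers are made up purely for illustration.
p_world = 0.99                 # probability of the mundane world-model alone
p_dragon_given_world = 0.001   # extra probability cost of an invisible, incorporeal dragon
p_world_and_dragon = p_world * p_dragon_given_world

print(p_world)               # 0.99
print(p_world_and_dragon)    # 0.00099 -- strictly smaller, so the conjunction is always less probable
```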
My intended direction for this argument with Anthony was to apply these points to theology, but we ran out of time and have not had a chance to talk again, so that may become a future post.
Today, however, Shminux pointed out to me that I held beliefs that were themselves non-falsifiable. I realized then that it might be rational to believe non-falsifiable things for two reasons (I’m sure there are more, but these are the main ones I can think of; please comment with your own):
1) The belief has a beauty to it that fits with falsifiable beliefs and makes known facts cohere more perfectly. (This is very dangerous and should not be used lightly, because it leans too heavily on opinion.)
2) You believe that the belief will someday allow you to make an original theory which will be falsifiable.
Both of these reasons, if not used very carefully, will let in false beliefs. As such, I decided that if a belief or new theory meets these conditions well enough to make me want to believe it, I should put it into a special category of my thoughts (call them conjectures). This category should rank below beliefs in power but still be held as a model of how the world works, and anything in this category should always strive to leave it: I should always strive to make any non-falsifiable conjecture no longer a conjecture, either by promoting it to a belief or by disproving it.
Note: This is my first post, so in addition to discussing the post, critiques of the writing itself are deeply welcome by PM.
40 comments
comment by kpreid · 2013-02-19T04:18:12.436Z · LW(p) · GW(p)
If something cannot, at least in theory, be tested by experiment, then it has no effect on the world and lacks meaning from a truth standpoint, and therefore from a rational standpoint.
Better version: …then it has no effect on the world and therefore is not useful to have information about.
As to the rest of your post, I will make a general observation: you are speaking as if epistemic rationality is a terminal value. There's nothing wrong with that (insofar as nobody can say someone else's utility function is wrong) but you might want to think about whether that is what you really want.
The alternative is to allow epistemic rationality to arise from instrumental rationality: obtaining truth is useful insofar as it improves your plans to obtain what you actually want.
↑ comment by shaih · 2013-02-19T04:51:06.551Z · LW(p) · GW(p)
Good point. I often find myself torn between epistemic rationality as a terminal value and its alternative. My thinking is that learning how to treat truth as the highest goal would be more useful to my career in physics, and would be better for the world than if I steered toward my nearer, less important values.
↑ comment by Qiaochu_Yuan · 2013-02-19T05:05:34.344Z · LW(p) · GW(p)
So treating truth as the highest goal serves your other, even higher goals?
What behaviors are encapsulated by the statement that you're treating truth as the highest goal, and why can't you just execute those behaviors anyway?
↑ comment by OrphanWilde · 2013-02-19T15:52:29.312Z · LW(p) · GW(p)
If you're justifying a terminal value, it's not your real terminal value.
comment by Qiaochu_Yuan · 2013-02-19T04:43:43.457Z · LW(p) · GW(p)
This post is somewhat confused. I would recommend that you finish reading the Sequences before making a future post.
an object in a thought experiment cannot be tested for but still has meaning.
One way to think about what is accomplished when you perform a thought experiment is that you are performing an experiment where the subject is your brain. The goal is to figure out what your brain thinks will happen, and statements about such things are falsifiable statements about brains.
Anthony argued that if believing in a dragon in your garage gave you happiness, and the world would be the same either way apart from that happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.
The world is not the same either way because the dragon-believer is not the same either way. If the dragon-believer actually believes that there's a dragon in her garage (as opposed to believing in her belief that she has a dragon in her garage), that belief can affect how she makes other decisions. Truths are entangled and lies are contagious.
I responded that truth trumps happiness
Why?
The belief has a beauty to it that fits with falsifiable beliefs and makes known facts cohere more perfectly. (This is very dangerous and should not be used lightly, because it leans too heavily on opinion.)
Can you give some examples of beliefs with this property?
You believe that the belief will someday allow you to make an original theory which will be falsifiable.
Why call it a belief instead of an idea, then? (And why the emphasis on originality?)
↑ comment by lukeprog · 2013-02-19T11:00:45.449Z · LW(p) · GW(p)
This post is somewhat confused. I would recommend that you finish reading the Sequences before making a future post.
Or at the very least, read Eliezer's new epistemology sequence, which directly addresses the questions at the heart of the OP.
↑ comment by Decius · 2013-02-19T07:35:10.226Z · LW(p) · GW(p)
The purpose of a thought experiment is to make a prediction about a real experiment. The thought experiment is as real as any other abstract object or mental process, and the prediction it makes is as real as a prediction made by any means.
And if believing a belief which is known to be false results in a higher output on your utility function, you have a nonstandard utility function. Rationalists who have radically different utility functions are very dangerous things.
↑ comment by shaih · 2013-02-19T05:13:04.794Z · LW(p) · GW(p)
This post is somewhat confused. I would recommend that you finish reading the Sequences before making a future post.
I agree that I am putting a post here prematurely, but I thought the criticism of some of my ideas would be worth it so I could fix things before they were ingrained. So thanks for the criticism.
I responded that truth trumps happiness
Why?
Break of quotes
I often find myself torn between epistemic rationality as a terminal value and its alternative. My thinking is that learning how to treat truth as the highest goal would be more useful to my career in physics, and would be better for the world than if I steered toward my nearer, less important values.
^from the comment below
I believe that putting truth first will help me as a Physicist
Can you give some examples of beliefs with this property?
Most of the beautiful theories I know of at this point are from mathematics rather than physics (this is due to my math education being much further along than my physics education, even though my intended career is in physics), and I don't think they qualify as proper examples here. The best I could come up with in five minutes on the clock is Einstein's theory of relativity, which was held to be beautiful and correct before experimental confirmation was obtained.
Why call it a belief instead of an idea, then?
I wanted to call it a belief instead of an idea because when I think of examples such as timeless physics, I believe it's actually how the world works, and it seems much more meaningful than an idea that does not color my perspective on the world. This, however, may simply be over-definition of "idea".
And why the emphasis on originality?
You're right, it doesn't necessarily have to be original. I was thinking along the lines that it is much harder to think of an original theory, and since this is a goal of mine I had it in mind while writing this.
↑ comment by beoShaffer · 2013-02-19T06:13:24.692Z · LW(p) · GW(p)
I'd quite seriously like to know if you've read Making Beliefs Pay Rent; it and the Mysterious Answers to Mysterious Questions sequence in general seem quite relevant. I wouldn't expect you to write your post as it is now if you'd read them.
↑ comment by Qiaochu_Yuan · 2013-02-19T05:18:54.331Z · LW(p) · GW(p)
I believe that putting truth first will help me as a Physicist
Why do you want to be a physicist? (Also, first relative to what?)
The best I could come up with in five minutes on the clock is Einstein's theory of relativity, which was held to be beautiful and correct before experimental confirmation was obtained.
In what sense was relativity non-falsifiable at the time that Einstein described it?
↑ comment by shaih · 2013-02-19T05:44:38.483Z · LW(p) · GW(p)
Why do you want to be a physicist?
I learned of quantum mechanics when I was younger, and I grew curious about it because it was mysterious. Now quantum mechanics is not mysterious, but the way the world works is, and I am still deeply curious about it.
In what sense was relativity non-falsifiable at the time that Einstein described it?
It was falsifiable, but I was thinking that it was still extraordinarily beautiful.
Also, there is the quote:
In 1919, Sir Arthur Eddington led expeditions to Brazil and to the island of Principe, aiming to observe solar eclipses and thereby test an experimental prediction of Einstein's novel theory of General Relativity. A journalist asked Einstein what he would do if Eddington's observations failed to match his theory. Einstein famously replied: "Then I would feel sorry for the good Lord. The theory is correct."
↑ comment by Qiaochu_Yuan · 2013-02-19T05:51:51.941Z · LW(p) · GW(p)
I learned of quantum mechanics when I was younger, and I grew curious about it because it was mysterious. Now quantum mechanics is not mysterious, but the way the world works is, and I am still deeply curious about it.
So... would you say that it makes you happy when your curiosity is satisfied?
When you said that "truth trumps happiness," it sounded like you were saying "in general, truth trumps happiness." If the reason you personally value truth is because you think it will help you as a physicist, and the reason you want to be a physicist is because you are curious about physics, then you don't have a reason to value truth which applies in general. Why should other people, who are not necessarily interested in physics or particularly curious about things, value truth above happiness?
It was falsifiable, but I was thinking that it was still extraordinarily beautiful.
Right, but you were giving a reason why you would have a belief that is non-falsifiable, and this is not an example of such a belief. Einstein defying the data is not Einstein thinking that relativity wasn't falsifiable, it's Einstein thinking that relativity wasn't falsifiable by just one experimental result.
↑ comment by shaih · 2013-02-19T06:20:12.589Z · LW(p) · GW(p)
Thinking about truth vs. happiness, I believe that if given a choice between truth and happiness, it is already too late for me to fully accept happiness. In short, thinking about the decision made the decision. On top of this, I am too curious to avoid thinking about the topics where I would face this (non-)decision, so I will always embrace truth over happiness.
What I will now have to think about is: given a friend who aspires to be more rational but is not a scientist or somebody similar, if I find a thought pattern of theirs that is giving false but enjoyable results, should I intervene?
As to Einstein, I was not saying that his belief was unfalsifiable; rather, my thought process, without my conscious knowledge, probably took Einstein's theory as evidence for P(truth | beauty) being higher. If so, I realize that this is only weak evidence.
↑ comment by Qiaochu_Yuan · 2013-02-19T06:29:00.390Z · LW(p) · GW(p)
I believe that if given a choice between truth and happiness, it is already too late for me to fully accept happiness.
Why do you believe that? Even given that you believe this is currently true, do you think this is something you should change about yourself, and if not, why?
(I'm teasing you to some extent. What I regard to be the answers to many of the questions I'm asking can be found in the Sequences.)
As to Einstein, I was not saying that his belief was unfalsifiable
I think you've lost track of why we were talking about Einstein. In the original post, you listed two reasons to believe non-falsifiable things. I asked you to give an example of the first one. Maybe it wasn't sufficiently clear that I was asking for an example which wasn't falsifiable, in which case I apologize, but I was (after all, that's why it came up in the first place). Relativity is falsifiable. A heuristic that beautiful things tend to be true is also falsifiable.
↑ comment by shaih · 2013-02-19T06:38:34.008Z · LW(p) · GW(p)
(I'm teasing you to some extent. What I regard to be the answers to many of the questions I'm asking can be found in the Sequences.)
I know the answers to most of these questions can be found in the Sequences, because I have read them. However, the Sequences contain quite a bit of information, and it is clear that not all of it, probably not even most of it, made it into the way I think. Your asking me these questions is extremely helpful for filling in those gaps, and I appreciate it.
Why do you believe that? Even given that you believe this is currently true, do you think this is something you should change about yourself, and if not, why?
I believe that because I do not have the mental discipline required to both know a belief is false and still gain happiness from that belief. It is possible that I could change this about myself, but I don't see myself ever learning the discipline required to lie to myself (if doublethink is actually possible). It's also possible to go the other way: something could injure my brain and lower my intelligence to a level where I could no longer see why I should think one way instead of another, or could no longer see the truth vs. happiness decision, which would let me pick happiness without lying to myself.
I think that most of reason two is based on that heuristic, which allows you to gain evidence for the claim even though it remains unfalsifiable, and only weak evidence at that.
↑ comment by Qiaochu_Yuan · 2013-02-19T07:10:03.655Z · LW(p) · GW(p)
Your asking me these questions is extremely helpful for filling in those gaps, and I appreciate it.
Glad to hear that. I was afraid I might be being a little too harsh.
I believe that because I do not have the mental discipline required to both know a belief is false and still gain happiness from that belief.
I guess I should clarify what I was trying to say. If you optimize for truth and not happiness, you will seek out a whole bunch of truths whether or not you expect that knowing those truths will make you happier. If you optimize for happiness and not truth, you'll only seek truths that will help make you happier. I'm not asking you to consider explicitly lying to yourself, which is in some sense hard, but I'm asking you to consider the implications of optimizing for truth vs. optimizing for happiness.
Whether or not you do, most people do not optimize for truth. Do you think this is a good thing or a bad thing, and in either case, why?
I think that most of reason two is based on that heuristic, which allows you to gain evidence for the claim even though it remains unfalsifiable, and only weak evidence at that.
What is this referring to?
↑ comment by shaih · 2013-02-19T07:28:04.647Z · LW(p) · GW(p)
What is this referring to?
I think you've lost track of why we were talking about Einstein. In the original post, you listed two reasons to believe non-falsifiable things. I asked you to give an example of the first one. Maybe it wasn't sufficiently clear that I was asking for an example which wasn't falsifiable, in which case I apologize, but I was (after all, that's why it came up in the first place). Relativity is falsifiable. A heuristic that beautiful things tend to be true is also falsifiable.
quote break
Whether or not you do, most people do not optimize for truth. Do you think this is a good thing or a bad thing, and in either case, why?
Perhaps it would be easier for me to replace the word happiness with awesomeness, in which case I can see the argument that optimizing for awesomeness would let me seek out ways to make the world more awesome, and would let the specifics of what I consider awesome govern which truths to seek out. In this way I can understand optimizing for awesomeness.
I think it is a good thing that most people do not optimize for truth, because if they did, I don't think the resulting world would be awesome. It would be a world where many people were less happy, even though it would probably also be a world with more scientific advances.
I suppose that if anyone were to optimize for truth, it would be a minority who wanted to advance science to make the general population happier, while the scientists themselves were not always happy. Even in this case I can understand the argument that they were optimizing for awesomeness, not truth, because they thought the resulting world would be more awesome.
↑ comment by Qiaochu_Yuan · 2013-02-19T07:43:15.361Z · LW(p) · GW(p)
I still don't see how anything you've said about Einstein is relevant to the original question I asked, which was for an example of a belief that you thought was beautiful, non-falsifiable, and worth holding.
I think it is a good thing that most people do not optimize for truth, because if they did, I don't think the resulting world would be awesome. It would be a world where many people were less happy, even though it would probably also be a world with more scientific advances.
Cool. So we agree now that truth does not trump awesomeness? (Somewhat tangential comment: science is not the only way to seek out truth. I also have in mind things like finding out whether you were adopted.)
↑ comment by shaih · 2013-02-19T07:49:16.628Z · LW(p) · GW(p)
You're right, Einstein was not relevant to your original question. I brought him up because I did not understand the question until:
I think you've lost track of why we were talking about Einstein. In the original post, you listed two reasons to believe non-falsifiable things. I asked you to give an example of the first one. Maybe it wasn't sufficiently clear that I was asking for an example which wasn't falsifiable, in which case I apologize, but I was (after all, that's why it came up in the first place). Relativity is falsifiable. A heuristic that beautiful things tend to be true is also falsifiable.
Thanks for leading me to the conclusion that truth does not trump awesomeness; yes, I now agree with this.
I also have in mind things like finding out whether you were adopted
Good point
↑ comment by ChristianKl · 2013-02-19T15:44:19.933Z · LW(p) · GW(p)
I believe that putting truth first will help me as a Physicist
Do you think that there are some professional physicists who put truth first and others who don't? Do you believe that those who put truth first perform better?
What evidence do you see in the world that this is true?
comment by Viliam_Bur · 2013-02-19T09:48:43.597Z · LW(p) · GW(p)
In the given situation I would be more curious about this: if the dragon is invisible, incorporeal, immune to all tests... then what exactly made Carl believe there is a dragon?
Yeah, I know, there are many possible explanations. Perhaps the dragon can become visible and corporeal when it wants to, and it happened when only Carl was near, because unlike me he is a nice and humble person or whatever, so the dragon likes him. Alternatively, Carl derived the dragon's existence from philosophical first principles, and if I disagree with him, here are a hundred books with a thousand pages each, and I am welcome to show exactly where the authors made a mistake. :(
comment by Manfred · 2013-02-19T04:17:23.145Z · LW(p) · GW(p)
I responded that truth trumps happiness, and that believing in the dragon would force you to hold a false belief, which is not worth the amount of happiness received by believing it
In the future, I hope you notice this sort of situation and respond by getting curious and engaging with the other person, rather than attempting to win the argument.
Today, however, Shminux pointed out to me that I held beliefs that were themselves non-falsifiable.
In fact, it's rather worse :) The negation of an unfalsifiable belief is also unfalsifiable - you unfalsifiably believe that Carl's garage does not have an immaterial dragon in it. Even if you make an observation, e.g. you throw a ball to measure the gravitational acceleration, you have an unfalsifiable belief that you have not just hallucinated the whole thing.
↑ comment by Decius · 2013-02-19T07:41:57.514Z · LW(p) · GW(p)
At some point you devolve into declaring all beliefs unfalsifiable, because of the unfalsifiable belief that you exist (what different observations would you expect if you didn't exist?) and the complementary unfalsifiable belief that you don't exist (suppose you existed; what observations would be different?)
↑ comment by shaih · 2013-02-19T04:46:29.641Z · LW(p) · GW(p)
In fact, it's rather worse :) The negation of an unfalsifiable belief is also unfalsifiable - you unfalsifiably believe that Carl's garage does not have an immaterial dragon in it. Even if you make an observation, e.g. you throw a ball to measure the gravitational acceleration, you have an unfalsifiable belief that you have not just hallucinated the whole thing.
As a general principle, it would seem that the negation of an unfalsifiable belief is better than the unfalsifiable belief itself, meaning that the negation holds in a much larger number of possible worlds than the original belief does.
For example: there are many more possible ways that Carl does not have an immaterial dragon in his garage than possible ways that he does.
I think a good way to decide which unfalsifiable belief to hold is to look at the evidence that raised it out of the original hypothesis space. By this measure, timeless physics has a higher odds ratio (P(timeless physics)/P(not timeless physics)) than an immaterial dragon does.
However, it is a warning flag to me when someone brings up that
you have an unfalsifiable belief that you have not just hallucinated the whole thing.
because of the negligible probability of this belief; giving it power in an argument would both be an example of scope insensitivity and prevent any useful work from being done.
Nevertheless, it reminded me that I should be assigning probabilities to unfalsifiable beliefs rather than dwelling simply on the fact that they're unfalsifiable. Maybe I should restrict conjectures to unfalsifiable beliefs that fall within a certain probability margin, say p = 0.2 to p = 0.8. I would still separate them from higher beliefs, because simply labeling them with a probability is not intuitive enough for me to avoid confusing them with scope insensitivity.
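A minimal sketch of that triage rule, assuming the 0.2-0.8 margin above; the category names and the handling of beliefs outside the margin are my own illustrative choices, not anything settled in the discussion:

```python
# Hypothetical triage of beliefs by falsifiability and probability.
# The 0.2-0.8 "conjecture" margin comes from the paragraph above; the rest is assumed.
def categorize(p: float, falsifiable: bool) -> str:
    if falsifiable:
        return "belief: testable, so test it and update on the result"
    if 0.2 <= p <= 0.8:
        return "conjecture: hold provisionally, strive to promote or disprove it"
    # Outside the margin the probability is extreme enough to act on directly.
    return "treat as settled: near-certainly true" if p > 0.8 else "treat as settled: near-certainly false"

print(categorize(0.95, falsifiable=True))   # belief: testable...
print(categorize(0.50, falsifiable=False))  # conjecture: hold provisionally...
print(categorize(0.05, falsifiable=False))  # treat as settled: near-certainly false
```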
comment by Shmi (shminux) · 2013-02-19T19:45:45.277Z · LW(p) · GW(p)
It became interesting when Anthony argued that if believing in a dragon in your garage gave you happiness, and the world would be the same either way apart from that happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.
It would indeed make sense to believe in the dragon, if you had examined all the other alternatives you could think of and found that this belief gives you the most happiness, and if happiness is your goal (this is the issue of hedons vs. utilons).
You may also want to consider the reasons why the +dragon world makes you happier. For example, if you dig deeper, you might find that there is an experience somewhere you are not at all fond of, like the unresolved pain of learning years ago that God/Santa/Tooth Fairy is not real, after all, and consider a visit to your therapist. Maybe once this issue is resolved, you will no longer derive happiness from imagining that a +dragon world is real. Whether killing cheap happiness this way is an instrumentally rational thing to do is a different story, and there is a fair bit about it in the Sequences and in HPMOR (poor Draco, I feel for him).
comment by TimS · 2013-02-19T15:39:44.176Z · LW(p) · GW(p)
Keep in mind that "falsifiability" is not a scientific concept, it is a philosophy-of-science concept. Specifically, Popper articulated the concept in order to divide Science from pseudo-scientific theories masquerading as scientific. In other words, Popper was worried that theories like Marxist History and Freudian Psychology were latching on to the halo effect and portraying themselves as worthy of the same serious consideration as Science without actually being scientific.
Thus, there's no particular reason to desire that a belief be falsifiable. Popper's project was simply to define Science such that only falsifiable statements and theories qualified. It turns out that scientific theories are much better at making future predictions than non-scientific theories, and we have philosophy-of-science reasons why we think this is so. But falsifiability is a definition to clarify thought, not a virtue to be aspired towards.
comment by JQuinton · 2013-02-20T19:01:06.642Z · LW(p) · GW(p)
An unfalsifiable idea can still be true. To reject it outright just because it's unfalsifiable would require infinite certainty.
comment by Borealis · 2013-02-20T06:59:34.954Z · LW(p) · GW(p)
I would be interested to hear your arguments that Truth > Happiness. I think it is kind of hard to simply state that without backing it up with reasons.
↑ comment by Qiaochu_Yuan · 2013-02-20T07:12:27.527Z · LW(p) · GW(p)
shaih and I discussed this in the responses to my comment here and he updated away from this conclusion. (It is very easy to simply state conclusions without backing them up with reasons. I think you meant to say that it is not particularly persuasive.)
↑ comment by Borealis · 2013-02-20T07:16:33.519Z · LW(p) · GW(p)
Thank you.
Yes, that is what I was implying, and you seem to have successfully derived it, so I don't really see the point in suggesting that I should have explained it differently.
↑ comment by shaih · 2013-02-20T08:08:09.592Z · LW(p) · GW(p)
I also would like to point out that Anthony didn't disagree with me when I said it, and accepted that assumption. When I can, I'm going to take the arguments that Qiaochu_Yuan made and go back to talk with him, to see if he will update as well.
comment by ChristianKl · 2013-02-19T15:39:48.357Z · LW(p) · GW(p)
I responded that truth trumps happiness, and that believing in the dragon would force you to hold a false belief, which is not worth the amount of happiness received by believing it.
You miss the point. If you say that the belief in the dragon is false, then you are saying that it's falsifiable. It's bad to confuse claims that aren't falsifiable with claims that are false. The two are very different.
The important thing isn't to shun non-falsifiable beliefs. The important thing is to know which of your beliefs are falsifiable and which aren't.
↑ comment by DaFranker · 2013-02-19T16:55:20.416Z · LW(p) · GW(p)
The important thing isn't to shun non-falsifiable beliefs. The important thing is to know which of your beliefs are falsifiable and which aren't.
I thought a belief that isn't even in-principle falsifiable was essentially a floating belief, not entangled with reality, about something epiphenomenal that you couldn't statistically ever have correctly guessed? Like, say, zombies or dragons in garages?
↑ comment by Vladimir_Nesov · 2013-02-19T17:55:51.930Z · LW(p) · GW(p)
The issue is with the mode of "shunning": a meaningless belief shouldn't be seen as false, it should be seen as meaningless. The opposite of a meaningless belief is not true.
(Also, "unfalsifiable", narrowly construed, is not the same thing as meaningless. There might be theoretical conclusions that are morally relevant, but can't be tested other than by examining the theoretical argument.)
↑ comment by DaFranker · 2013-02-19T18:05:05.931Z · LW(p) · GW(p)
Ah, thanks, all good points. Guess I was lumping together the whole unfalsifiability + meaninglessness cluster/region.
Likewise, when I thought "the opposite of a meaningless belief", it turns out I was really thinking "the opposite of the implied assumption that this belief is meaningful", which is obviously true if the belief is known to be meaningless... (because IME that's what arguments usually end up being about)
↑ comment by ChristianKl · 2013-02-19T18:33:49.141Z · LW(p) · GW(p)
I thought a belief that isn't even in-principle falsifiable was essentially a floating belief, not entangled with reality, about something epiphenomenal that you couldn't statistically ever have correctly guessed?
There are statements that are neither correct nor incorrect. A: "This statement is false" would be one example.
Another would be B: "I know that this statement is false." From my own perspective, A and B are both statements to which I can't attach the label true or false. For me it would be a mistake to believe that A or B are true, or that they are false.
There's also another class of beliefs: you have a bunch of beliefs that you learned when you were three years old and younger, about how you have a mind and how others have minds. You believe that there's something that can be meaningfully called "you" that can be happy or sad. You believe that you are a worthwhile human being whose life has meaning.
Those beliefs are central to acting as a sane human being in the world, but they might not be falsifiably true. It is very difficult to go after the beliefs that you learned in your first three years of life, as they are deeply ingrained in the way you deal with the world.
Someone who believes that their life has meaning usually can't give you a p value for that claim. It's a belief that they hold for their own emotional health. They don't really need to examine that belief critically. It's okay to hold beliefs that way if you know that you do.