Too Much Effort | Too Little Evidence
post by Erfeyah · 2017-01-24T12:37:33.436Z · LW · GW · Legacy · 64 comments
I would like to explore certain kinds of experiential knowledge that appear to me to be difficult to investigate rationally, because the rational attitude itself might be the cause of a reluctance to explore them. If this is already covered in one of the articles on the site, please refer me to it.
In this thought experiment we will use the example of lucid dreaming. Lucid dreaming is a state in which a person realises they are dreaming while they are dreaming. The subtleties of the state are not relevant to this discussion.
Circumstances
[1] We will assume the experiment takes place at a time when the existence of the experience of lucid dreaming hasn't been scientifically proven yet. We will also assume that a proof is not possible in the current state of technological or methodological development.
[2] Person A has a (true) belief in the existence of lucid dreaming that is based on his personal experience of the state.
[3] He is trying to communicate the existence of lucid dreaming to someone else. Let us call this other person B.
[4] Actually becoming lucid in a dream is quite a complex process that requires among other things1:
[4.1] Expending large amounts of effort.
[4.2] Following guidelines and exercises that appear strange.
[4.3] A time investment of significant length.
In the described circumstances we have an internal experience that has not been scientifically proven but is nevertheless true. We know this in our time through scientific studies, but B does not know it in his world. Person B would have to actually believe in the existence of lucid dreaming and trust A to guide him through the process. But since there is no sufficient evidence to support the claim of A, the required effort is significantly large and the methods appear strange to those not understanding the state, how can B rationally decide to expend the effort?
Proposed Conclusion
[5] People focusing on rational assessment can be misled when dealing with experiential knowledge that is not yet scientifically proven, is not easily testable and has no obvious external function but is, nevertheless, experientially accessible.
1 Even if you disagree with the level of difficulty or the steps required, please accept [4] and its sub-headings as being accurate for the duration of the argument.
64 comments
Comments sorted by top scores.
comment by Kindly · 2017-01-25T05:35:48.850Z · LW(p) · GW(p)
Rational assessment can be misleading when dealing with experiential knowledge that is not yet scientifically proven, has no obvious external function but is, nevertheless, experientially accessible.
So, uh, is the typical claim that has an equal lack of scientific evidence true, or false? (Maybe if we condition on how difficult it is to prove.)
If true - then the rational assessment would be to believe such claims, and not wait for them to be scientifically proven.
If false - then the rational assessment would be to disbelieve such claims. But for most such claims, this is the right thing to do! It's true that person A has actually got hold of a true claim that there's no evidence for. But there are many more people making false claims with equal evidence; why should B believe A, and not believe those other people?
(More precisely, we'd want to do a cost-benefit analysis of believing/disbelieving a true claim vs. a comparably difficult-to-test false claim.)
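A minimal sketch of that cost-benefit comparison, with every number invented purely for illustration:

```python
def expected_value_of_testing(p_true, benefit_if_true, cost_of_testing):
    """Expected net value of spending the effort to test one such claim."""
    return p_true * benefit_if_true - cost_of_testing

p_true = 0.05            # invented base rate of true claims in this reference class
benefit_if_true = 200.0  # invented value of actually gaining the skill
cost_of_testing = 30.0   # invented cost of months of odd exercises, same units

ev = expected_value_of_testing(p_true, benefit_if_true, cost_of_testing)
print(f"Expected value of testing one claim: {ev:+.1f}")  # -20.0 -> decline
```

With a low base rate of true claims and a high testing cost, the expected value comes out negative, which is the formal version of "ignore it".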
Replies from: Erfeyah, ProofOfLogic
↑ comment by Erfeyah · 2017-01-25T11:37:56.944Z · LW(p) · GW(p)
So, uh, is the typical claim that has an equal lack of scientific evidence true, or false?
[5.1] As ProofOfLogic indicates with his example of shamanistic scammers, the space of claims about subjective experiences is saturated with demonstrably false claims.
[5.2] This actually causes us to adjust and adopt a rule of ignoring all strange-sounding claims that require subjective evidence (except when they are trivial to test).
You are right that if the claim is true an idealised rational assessment should be to believe the claim. But how do you make a rational assessment when you lack evidence?
(More precisely, we'd want to do a cost-benefit analysis of believing/disbelieving a true claim vs. a comparably difficult-to-test false claim.)
When lacking evidence, the testing process is difficult, weird and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.
Replies from: Kindly
↑ comment by Kindly · 2017-01-25T17:40:17.394Z · LW(p) · GW(p)
When lacking evidence, the testing process is difficult, weird and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.
And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.
From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn't start with your true claim, but start working my way through a bunch of other false claims instead.
Evidence, in the general sense of "some way of filtering out the false claims", can take on many forms. For example, I can choose to try out lucid dreaming, not because I've found scientific evidence that it works, but because it's presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me "this is a real effect and has effects you'll find worth the cost of trying it out", I believe them.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-25T18:34:12.329Z · LW(p) · GW(p)
And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.
From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn't start with your true claim, but start working my way through a bunch of other false claims instead.
Exactly, that is why I am pointing towards the problem. Based on our rational approach we are at a disadvantage for discovering these truths. I want to use this post as a reference to the issue as it can become important in other subjects.
I can choose to try out lucid dreaming, not because I've found scientific evidence that it works, but because it's presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me "this is a real effect and has effects you'll find worth the cost of trying it out", I believe them.
Yes, that is the other way in: trust and respect. Unfortunately, I feel we tend to surround ourselves with people who are similar to us, thus selecting our acquaintances in the same way we select ideas to focus on. In my experience (which is not necessarily indicative), people tend to just blank out unfamiliar information or consider it a bit of an eccentricity. In addition, as stated, if a subject requires substantial effort before you can confirm its validity, it becomes exponentially harder to communicate even in these circumstances.
Replies from: Kindly, ProofOfLogic
↑ comment by Kindly · 2017-01-25T22:02:06.081Z · LW(p) · GW(p)
Based on our rational approach we are at a disadvantage for discovering these truths.
Is that a bad thing?
Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don't see it as a problem.
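For concreteness, the lottery arithmetic looks roughly like this (ticket price, jackpot and odds are made-up round numbers):

```python
# Toy lottery figures, invented only to illustrate the expected-value point.
ticket_price = 2.0           # dollars
jackpot = 10_000_000.0       # dollars
p_win = 1 / 300_000_000      # probability of winning the jackpot

expected_net = p_win * jackpot - ticket_price
print(f"Expected net per ticket: {expected_net:.2f} dollars")  # about -1.97
```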
The situation you're describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you're at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you're also not paying the opportunity cost of trying out many unlikely ideas, most of which don't pan out. Overall, you're better off, because you have more time to pursue more promising ways to satisfy your goals.
(And if you're not better off overall, there's a different problem. Are you consistently underestimating how useful unlikely fringe beliefs that take lots of effort to test might be, if they were true? Then yes, that's a problem that can be solved by trying out more fringe beliefs that take lots of effort to test. But it's a separate problem from the problem of "you don't try things that look like they aren't worth the opportunity cost.")
Replies from: TheAncientGeek, Erfeyah
↑ comment by TheAncientGeek · 2017-01-28T13:45:37.095Z · LW(p) · GW(p)
Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets.
Whereas someone who understands advanced probability, particularly the value/utility distinction, might.
The situation you're describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you're at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you're also not paying the opportunity cost of trying out many unlikely ideas, most of which don't pan out. Overall, you're better off, because you have more time to pursue more promising ways to satisfy your goals.
So long as you can put a ceiling on possible benefits.
↑ comment by Erfeyah · 2017-01-25T22:47:11.190Z · LW(p) · GW(p)
I propose that it is a bad thing.
Your assessment makes the assumption that the knowledge that we are missing is "not that important". Since we do not know what the knowledge we are missing is, its significance could range from insignificant to essential. We are not at the point where we can make that distinction, so we had better start realising and working on the problem. That is my position.
To my eyes your further analysis makes the assumption that the only strategy we can follow would be to randomly try out beliefs. Although I have not formulated a solution (I am currently just describing the problem), I can already see much more efficient ways of navigating the space. I will post when I have something more developed to say about this.
Replies from: dranorter, Jiro
↑ comment by dranorter · 2017-01-26T00:17:48.114Z · LW(p) · GW(p)
Your assessment makes the assumption that the knowledge that we are missing is "not that important".
Better to call it a rational estimate than an assumption.
It is perfectly rational to say to oneself "but if I refuse to look into anything which takes a lot of effort to get any evidence for, then I will probably miss out." We can put math to that sentiment and use it to help decide how much time to spend investigating unlikely claims. Solutions along these lines are sometimes called "taking the outside view".
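One hedged way to put numbers on that sentiment, with every figure invented for illustration:

```python
# Outside-view time allocation. Every figure below is invented for illustration.
hours_available = 200      # total hours one is willing to spend on fringe claims
hours_per_claim = 50       # average cost of testing one claim properly
p_claim_true = 0.05        # outside-view base rate for this class of claims
value_if_true = 300        # payoff of one true claim, in hour-equivalents

n_testable = hours_available // hours_per_claim
expected_payoff = n_testable * p_claim_true * value_if_true
print(f"Testing {n_testable} claims costs {hours_available} hours "
      f"and returns ~{expected_payoff:.0f} hour-equivalents in expectation")
# With these numbers (4 * 0.05 * 300 = 60 < 200) the budget is too generous;
# a higher base rate or cheaper tests would flip the conclusion.
```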
To my eyes your further analysis makes the assumption that the only strategy we can follow would be to randomly try out beliefs.
For the sake of engaging with your points 1 through 5, ProofOfLogic, Kindly, et al. are supposing the existence of a class of claims for which there exists roughly the same amount of evidence pro and con as exists for lucid dreaming. This includes how much we trust the person making the claim, how well the claim itself fits with our existing beliefs, how simple the claim is (i.e., Occam's Razor), how many other people make similar claims, and any other information we might get our hands on. So the assumption for the sake of argument is that these claims look just about equally plausible once everything we know or even suspect is taken into account.
It seems very reasonable to conclude that the best one can do in such a case is choose randomly, if one does in fact want to test out some claim within the class.
But suggestions as to what else might be counted as evidence are certainly welcome.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-26T01:15:49.322Z · LW(p) · GW(p)
That is actually very clear :) Thanks. As I was saying to ProofOfLogic, this post is about the identification of the difficult space, on which I think we are all in agreement. The way you explain it, I see why you would suggest that choosing at random is the best rational strategy. I would prefer to explore associated topics in a different post so we keep this one self-contained (and because I have to think about it!).
Thanks for engaging!
↑ comment by Jiro · 2017-01-25T23:33:44.885Z · LW(p) · GW(p)
But the knowledge that you miss by wasting your time on things with bad evidence instead of spending your time on something else with good evidence could also range from insignificant to essential. And since it has good evidence, more such things are likely to pan out.
Replies from: TheAncientGeek, Erfeyah
↑ comment by TheAncientGeek · 2017-01-29T11:37:50.126Z · LW(p) · GW(p)
But the knowledge that you miss by wasting your time on things with bad evidence instead of spending your time on something else with good evidence could also range from insignificant to essential.
Assuming everything is instrumental, and that your goals/values themselves aren't going to be changed by any subjective experience.
Replies from: Jiro
↑ comment by Jiro · 2017-01-30T05:51:40.155Z · LW(p) · GW(p)
I think I should be more explicit: saying that ignoring bad evidence could lead you to miss things "ranging from insignificant to essential"
1) is worded in a lopsided way that emphasizes "essential" too much--almost everything you'll miss is insignificant, with the essential things being vanishingly rare.
2) Is special pleading--many activities could get you to miss things "ranging from insignificant to essential", including ignoring bad evidence, ignoring claims because they are fraudulent, or ignoring the scientific theories of a 6 year old, and nobody bothers mentioning them.
3) is probably being said because the speaker really wants to treat his bad evidence as good evidence, and is rationalizing it by saying "even bad evidence could have essential knowledge behind it sometimes".
↑ comment by Erfeyah · 2017-01-26T00:12:39.874Z · LW(p) · GW(p)
I am not proposing wasting time with bad evidence. I am just pointing towards a problem that creates a space of difficult to discover truths. The strategy about dealing with this is for another post. This post is concerned with the identification of the issue.
Replies from: Jiro
↑ comment by Jiro · 2017-01-28T08:18:01.150Z · LW(p) · GW(p)
I am not proposing wasting time with bad evidence.
Yes you are. You say that if you believe bad evidence, you may end up believing something true that ranges from insignificant to essential.
But any belief with any evidence could range from insignificant to essential. And you aren't mentioning them.
So you must think there's something special about beliefs based on bad evidence, that gives you a reason to mention them.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-28T14:08:07.020Z · LW(p) · GW(p)
Yes you are. You say that if you believe bad evidence, you may end up believing something true that ranges from insignificant to essential.
This is correct. But you are conflating the identification of the issue with an action strategy that I haven't suggested. Also do not forget that I am talking about truths that are experientially verifiable not just believed in.
But any belief with any evidence could range from insignificant to essential. And you aren't mentioning them.
Of course. If there is evidence, a rational approach will lead us to the conclusion that it is worth exploring the belief. I think the LW community is perfectly aware of that kind of assessment.
So you must think there's something special about beliefs based on bad evidence, that gives you a reason to mention them.
I think there is something special about truths for which the verification is experientially available, but for which there is currently no evidence.
↑ comment by ProofOfLogic · 2017-01-26T00:09:58.430Z · LW(p) · GW(p)
Based on our rational approach we are at a disadvantage for discovering these truths.
As I argued, assigning accurate (perhaps low, perhaps high) probabilities to the truth of such claims (of the general category which lucid dreaming falls into) does not make it harder -- not even a little harder -- to discover the truth about lucid dreaming. What makes it hard is the large number of similar but bogus claims to sift through, as well as the difficulty of lucid dreaming itself. Assigning an appropriate probability based on past experience with these sorts of claims only helps us because it allows us to make good decisions about how much of our time to spend investigating such claims.
What you seem to be missing (maybe?) is that we need to have a general policy which we can be satisfied with in "situations of this kind". You're saying that what we should really do is trust our friend who is telling us about lucid dreaming (and, in fact, I agree with that policy). But if it's rational for us to ascribe a really low probability (I don't think it is), that's because we see a lot of similar claims to this which turn out to be false. We can still try a lot of these things, with an experimental attitude, if the payoff of finding a true claim balances well against the number of false claims we expect to sift through in the process. However, we probably don't have the attention to look at all such cases, which means we may miss lucid dreaming by accident. But this is not a flaw in the strategy; this is just a difficulty of the situation.
I'm frustrated because it seems like you are misunderstanding a part of the response Kindly and I are making, but you're doing a pretty good job of engaging with our replies and trying to sift out what you think and where you start disagreeing with our arguments. I'm just not quite sure yet where the gap between our views is.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-26T01:02:28.461Z · LW(p) · GW(p)
I don't think there is a gap. I am pointing towards a difficulty. If you are acknowledging the difficulty (which you are) then we are in agreement. I am not sure why it feels like a disagreement. Don't forget that at the start you had a reason for disagreeing, which was my erroneous use of the word rationality. I have now corrected that, so maybe we are arguing from the momentum of our first disagreement :P
Replies from: ProofOfLogic
↑ comment by ProofOfLogic · 2017-01-26T09:16:43.261Z · LW(p) · GW(p)
so maybe we are arguing from the momentum of our first disagreement :P
I think so, sorry!
↑ comment by ProofOfLogic · 2017-01-25T07:46:01.756Z · LW(p) · GW(p)
We also have to take into account priors in an individual situation. So, for example, maybe I have found that shamanistic scammers who lie about things related to dreams are pretty common. Then it would make sense for me to apply a special-case rule to disbelieve strange-sounding dream-related claims, even if I tend to believe similarly surprising claims in other contexts (where my priors point to people's honesty).
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-25T12:01:10.312Z · LW(p) · GW(p)
Lucid dreaming is actually an interesting one where I always have to start an introduction to it with "It has been scientifically proven, I am not crazy or part of a cult." Even then I sometimes get sceptical responses. In most cases, I don't think I would be able to communicate it at all if it had not been scientifically proven. Meditation is another one that is highly stigmatised because of its associations with (in many cases demonstrably) crazy claims, and so gets "thrown out with the bath water", though this seems to be slowly changing as more and more studies are showing its benefits.
These are two instances where scientific evidence has surfaced, so they are easy to talk about. They are good as indicative examples. The post is about experiences that (assuming they exist) have not yet entered the area discoverable by our current scientific tools.
Replies from: ProofOfLogic
↑ comment by ProofOfLogic · 2017-01-25T20:09:44.339Z · LW(p) · GW(p)
You must move in much more skeptical circles than me. I've never encountered someone who even "rolled to disbelieve" when told about lucid dreaming (at least not visibly), even among aspiring rationalists; people just seem to accept that it's a thing. But it might be that most of them already heard about it from other sources.
comment by Jiro · 2017-01-25T23:28:25.681Z · LW(p) · GW(p)
This whole question is reminiscent of "suppose psychic powers really did exist, but depended on the beliefs of the psychic in such a way that skeptics just couldn't reproduce them in a laboratory? Wouldn't it look like the way psychics are now, and how could you ever discover this?"
When you ask this, you're basically asking "what if it isn't fake, but it looks just like a fake?"
If something looks just like a fake, the reasonable conclusion is that it's a fake. In a hypothetical where real things look like fakes, this conclusion will of course be wrong. There's no way to avoid this, because it's always possible for some really improbable scenario to produce evidence that leads you to the wrong conclusion.
So in a hypothetical where lucid dreaming hasn't been proven, and can't be proven, you should conclude that it's not worth spending time on lucid dreaming. It's no different from concluding that you shouldn't try out each proposed perpetual motion machine. This conclusion would be wrong, but the bizarre hypothetical in which lucid dreaming is real but looks just like a fake forces that conclusion.
Replies from: RomeoStevens, Erfeyah
↑ comment by RomeoStevens · 2017-01-28T09:01:54.013Z · LW(p) · GW(p)
in a hypothetical where lucid dreaming hasn't been proven, and can't be proven, you should conclude that it's not worth spending time on lucid dreaming.
This seems false. The provability would just be one dimension of a cost-benefit analysis. If lucid dreaming were high value and low cost to test, but unprovable, you'd likely go ahead and test. Likewise with psychic tests in the real world. Grab a pack of cards and test with a friend. Takes ~1 minute.
↑ comment by Erfeyah · 2017-01-26T00:26:57.497Z · LW(p) · GW(p)
Sorry to repeat myself, but I am just pointing to a space of truths. Your example is the reason for your bias in the exploration of that space. I understand that and think it is logical of you to do so. I am just juxtaposing this with the examples of lucid dreaming and meditation, where there was something to find out, and wondering what amount of that space we are missing because of our bias. That is all.
Replies from: Jiro
↑ comment by Jiro · 2017-01-28T08:02:15.052Z · LW(p) · GW(p)
But you could always be missing something. Something can be true even though all the evidence looks bad. But something can also be true if there is no evidence (either good or bad). Something can be true even if the only evidence is fraudulent. Something can be true even if the only evidence for it is that you asked your 6 year old child for an example of science and he told you it.
Why don't you wonder about any of them being true, instead of just wondering about the case where wishful thinking comes into play?
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-28T14:20:06.915Z · LW(p) · GW(p)
Why don't you wonder about any of them being true, instead of just wondering about the case where wishful thinking comes into play?
Could you point me to where I only wondered about "the case where wishful thinking comes into play"?
I am getting the sense that you are arguing against propositions I haven't put forward. Could you phrase my argument in your own words so we can clarify?
comment by entirelyuseless · 2017-01-24T14:33:38.342Z · LW(p) · GW(p)
"But since there is no rational evidence to support the claim of A"
A's claim is immediate evidence, since A is more likely to make the claim if they actually have the experience than if they do not.
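In Bayesian terms the update might look like this; the prior and likelihoods are invented for illustration:

```python
# Bayes update on A's testimony. Prior and likelihoods are invented numbers.
p_lucid = 0.10              # prior that lucid dreaming is a real state
p_claim_if_true = 0.8       # A reports it, given they really experience it
p_claim_if_false = 0.2      # A still claims it (confusion, lying), given they don't

posterior = (p_claim_if_true * p_lucid) / (
    p_claim_if_true * p_lucid + p_claim_if_false * (1 - p_lucid)
)
print(f"Posterior after hearing A's claim: {posterior:.2f}")  # ~0.31
```

As long as the claim is more likely to be made when true than when false, the testimony shifts the probability upward, even if only modestly.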
[That said, 4 is in fact false -- in all of my dreams, I always know that I am dreaming, and I never invested any kind of effort whatsoever.]
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-24T15:21:24.656Z · LW(p) · GW(p)
A's claim is immediate evidence, since A is more likely to make the claim if they actually have the experience than if they do not.
It is indeed anecdotal evidence. But since we live in a world where people constantly offer anecdotal evidence to support their claims of unexplained subjective experiences, rational people tend to ignore them. But (if my reasoning is correct) the fact is that a real method can work before there is enough evidence to support it. My post attempts to bring to our attention that this will make it really hard to discover certain experiences assuming that they exist.
[That said, 4 is in fact false -- in all of my dreams, I always know that I am dreaming, and I never invested any kind of effort whatsoever.]
Yes, that is why I included footnote 1. I think my statements are true for most people but it is not a perfect example. Nevertheless, I feel the example is accurate enough to communicate the underlying argument.
Replies from: entirelyuseless, ProofOfLogic
↑ comment by entirelyuseless · 2017-01-25T13:51:02.945Z · LW(p) · GW(p)
"But since we live in a world where people constantly offer anecdotal evidence to support their claims of unexplained subjective experiences rational people tend to ignore them."
Stanley Jaki tells this story:
Laplace shouted, "We have had enough such myths," when his fellow academician Marc-Auguste Pictet urged, in the full hearing of the Académie des Sciences, that attention be given to the report about a huge meteor shower that fell at L'Aigle, near Paris, on April 26, 1803.
I presume this would be an example of Laplace being rational and ignoring this evidence, in your view. In my view, it shows that people trying to be rational sometimes fail to be rational, and one case of this is by ignoring weak evidence, when weak evidence is still evidence. Obviously you do not assume that everything is necessarily correct: but the alternative is to take it as weak evidence, rather than ignoring it.
Replies from: Erfeyah
↑ comment by ProofOfLogic · 2017-01-25T07:54:41.768Z · LW(p) · GW(p)
But (if my reasoning is correct) the fact is that a real method can work before there is enough evidence to support it. My post attempts to bring to our attention that this will make it really hard to discover certain experiences assuming that they exist.
Discounting the evidence doesn't actually make it any harder for us to discover those experiences. If we don't want to lose out on such things, then we should try some practices which we assign low probability, to see which ones work. Assigning low probability isn't what makes this hard -- what makes this hard is the large number of similarly-goofy-sounding things which we have to choose from, not knowing which ones will work. Assigning a more accurate probability just allows us to make a more accurate cost-benefit analysis in choosing how much of our time to spend on such things. The actual amount of effort it takes to achieve the results (in cases where results are real) doesn't change with the level of rationality of our beliefs.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-25T12:14:01.375Z · LW(p) · GW(p)
I think I see what you are saying.
I am phrasing the problem as an issue with rationality when I should have been phrasing it as a type of bias that tends to affect people with a rationality focus. Identifying the bias should allow us to choose a strategy which will in effect be the more rational approach.
Did I understand you correctly?
P.S: I edited the opening paragraph and conclusion to address yours and entirelyuseless' valid criticism.
Replies from: ProofOfLogic
↑ comment by ProofOfLogic · 2017-01-25T20:01:45.197Z · LW(p) · GW(p)
Yes, I think that's right. Especially among those who identify as "skeptics", who see rationality/science as mostly heightened standards of evidence (and therefore lowered standards of disbelief), there can be a tendency to mistake "I have to assign this a low probability for now" for "I am obligated to ignore this due to lack of evidence".
The Bayesian system of rationality rejects "rationality-as-heightened-standard-of-evidence", instead accepting everything as some degree of evidence but requiring us to quantify those degrees. Another important distinction which bears on this point is "assuming is not believing", discussed on Black Belt Bayesian. I can't link to the individual post for some reason, but it's short, so here it is quoted in full:
Replies from: Erfeyah
Assuming Is Not Believing
Suppose I’m participating in a game show. I know that the host will spin a big wheel of misfortune with numbers 1-100 on it, and if it ends on 100, he will open a hatch in the ceiling over my head and dangerously heavy rocks will fall out. (This is a Japanese game show I guess.) For $1 he lets me rent a helmet for the duration of the show, if I so choose.
Do I rent the helmet? Yes. Do I believe that rocks will fall? No. Do I assume that rocks will fall? Yes, but if that doesn’t mean I believe it, then what does it mean? It means that my actions are much more similar (maybe identical) to the actions I’d take if I believed rocks would definitely fall, than to the actions I’d take if I believed rocks would definitely not fall.
So assuming and believing (at least as I’d use the words) are two quite different things. It’s true that the more you believe P the more you should assume P, but it’s also true that the more your actions matter given P, the more you should assume P. All of this could be put into math.
Hopefully nothing shocking here, but I’ve seen it confuse people.
With some stretching you can see the assumptions made by mathematicians in the same way. When you assume, with the intent to disprove it, that there is a largest prime number, you don’t believe there is a largest prime number, but you do act like you believe it. If you believed it you’d try to figure out the consequences too. It’s been argued that scientists disagree among themselves more than Aumann’s agreement theorem condones as rational, and it’s been pointed out that if they didn’t, they wouldn’t be as motivated to explore their own new theories; if so, you could say that the problem is that humans aren’t good enough at disbelieving-but-assuming.
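The helmet decision in the quoted example can be made explicit; the dollar value placed on avoiding the rocks is an invented figure:

```python
# The quoted game-show decision made explicit. The dollar value placed on
# avoiding the rocks is an arbitrary illustrative figure.
p_rocks = 1 / 100            # wheel lands on 100
harm_if_unprotected = 500.0  # invented dollar-equivalent cost of being hit
helmet_price = 1.0           # dollars

rent_helmet = p_rocks * harm_if_unprotected > helmet_price
print(f"Rent the helmet: {rent_helmet}")  # True, while believing rocks probably won't fall
```

The helmet is worth renting even at a 1% probability, which is exactly the "assume without believing" pattern the quote describes.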
↑ comment by Erfeyah · 2017-01-25T22:15:31.742Z · LW(p) · GW(p)
The Bayesian system of rationality rejects "rationality-as-heightened-standard-of-evidence", instead accepting everything as some degree of evidence but requiring us to quantify those degrees. Another important distinction which bears on this point is "assuming is not believing"
I do like the flexibility of the Bayesian system of rationality and the "assuming is not believing" example you gave. But I do not (at the moment) see how it is any more efficient in cases where the evidence is not clearly quantified or is simply really weak.
There seems to me to be a dependence of any system of rational analysis on the current state of quantifiable evidence. In other words, rational analysis can go up to where science currently is. But there is an experimental space that is open for exploration without convincing intellectual evidence. Navigating this space is a bit of a puzzle...
But this is a half baked thought. I will post when I can express it clearly.
Replies from: ProofOfLogic
↑ comment by ProofOfLogic · 2017-01-25T23:42:30.102Z · LW(p) · GW(p)
That's related to Science Doesn't Trust Your Rationality.
What I'd say is this:
Personally, I find the lucid-dreaming example rather absurd, because I tend to believe a friend who claims they've had a mental experience. I might not agree with their analysis of their mental experience; for example, if they say they've talked to God in a dream, then I would tend to suspect them of mis-interpreting their experience. I do tend to believe that they're honestly trying to convey an experience they had, though. And it's plausible (though far from certain) that the steps which they took in order to get that experience will also work for me.
So, I can imagine a skeptic who brushes off a friend's report of lucid dreaming as "unscientific", but I have no sympathy for it. My model of the skeptic is: they have the crazy view that observations made by someone who has a phd, works at a university, and publishes in an academic journal are of a different kind than observations made by other people. Perhaps the lucid-dreaming studies have some interesting MRI scans to show differences in brain activity (I haven't read them), but they must still rely on descriptions of internal experience which come from human beings in order to establish the basic facts about lucid dreams, right? In no sense is the skeptic's inability to go beyond the current state of science "rational"; in fact, it strikes me as rather irrational.
This is an especially easy mistake for non-Bayesian rationalists to make because they lack a notion of degrees of belief. There must be a set of trusted beliefs, and a process for beliefs to go from untrusted to trusted. It's natural for this process to involve the experimental method and peer review. But this kind of naive scientism only makes sense for a consumer of science. If scientists used the kind of "rationality" described in your post, they would never do the experiments to determine whether lucid dreaming is a real thing, because the argument in your post concludes that you can't rationally commit time and effort to testing uncertain hypotheses. So this kind of naive scientific-rationalism is somewhat self-contradictory.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-26T00:50:01.616Z · LW(p) · GW(p)
Yes, that makes sense. I don't think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments. I am using it as a label for a social group and you are using it as an approach to knowledge. Needless to say, this is my mistake, as the whole point of this post is about improving the rational approach by becoming aware of what I think of as a difficult space of truths.
If scientists used the kind of "rationality" described in your post, they would never do the experiments to determine whether lucid dreaming is a real thing, because the argument in your post concludes that you can't rationally commit time and effort to testing uncertain hypotheses. So this kind of naive scientific-rationalism is somewhat self-contradictory.
That, I feel, is not accurate. Don't forget that my example assumes a world before the means to experimentally verify lucid dreaming were available. The people that in the end tested lucid dreaming were the lucid dreamers themselves. This will inevitably happen for all knowledge that can be verified. It will be done by the people who have it. I am talking about the knowledge that is currently unverifiable (except through experience).
Replies from: ChristianKl, ProofOfLogic
↑ comment by ChristianKl · 2017-01-26T12:10:54.677Z · LW(p) · GW(p)
I am using it as a label for a social group and you are using it as an approach to knowledge.
The problem is that you point to a social group that is quite different from the LW crowd that calls itself rationalists.
Let's look at my local LW dojo. I know my own skills from first-hand experience so they are not a good example, but there are two other people who profess to have very nonstandard mental skills (I think both are as awesome as Lucid Dreaming). I'm not aware of either of those skills having been described in academia. Yet none of the aspiring rationalists in our dojo doubts them when they speak about those skills.
We don't know how any of those skills could be taught to other people. For one of them we tried a few exercises, but teaching it didn't work. The other is likely even more complex, so we aren't even able to create exercises to teach it.
In neither of those cases is the lack of an academic description of the skills any concern to us, because we trust each other to be honest.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-26T13:00:41.794Z · LW(p) · GW(p)
The problem is that you point to a social group that is quite different from the LW crowd that calls itself rationalists.
Indeed, I am very happy to learn that and I have internally adjusted my use of the word 'rationalist' to what the community suggests and demonstrates through behaviour. I might slip into using it wrongly from time to time (out of habit) but I trust the community will correct me when that happens.
Now of course, inevitably my next question has to be: What are your two skills? :)
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-01-26T13:16:54.421Z · LW(p) · GW(p)
One is the ability to create a mental alert that pops up in a specific environment: "When entering the supermarket, I will think of the milk". That's a quite useful skill.
The second is mental multithreading.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-26T13:34:06.322Z · LW(p) · GW(p)
the ability to create a mental alert that pops up in a specific environment.
Ah, yes. I tried to implement this kind of trigger myself with some success using mnemonic techniques. I visualise a strong reminder image at a location that is then triggered when I am there. I find this works kind of ok when I first attach the image by visualising it while being in the space, but this might be because my visualisation skills are poor. Is that what you are doing, or have you found a different way to attach the trigger? I would love to be able to do that reliably!
The second is mental multithreading.
I can see this happening on a subconscious level - the brain is 'multithreading' anyway - but, as far as I can tell through observing myself, the conscious stream is always one. Except if you manage to encode different threads in different senses (like imagining a scene with its sound and vision encoding different things), but I cannot see how that is possible. How do you experience the skill?
[I know we are off topic but this is really interesting. If you have a thread discussing these do point me to it.]
Replies from: ChristianKl, Lumifer
↑ comment by ChristianKl · 2017-01-26T20:05:36.545Z · LW(p) · GW(p)
Is that what you are doing, or have you found a different way to attach the trigger? I would love to be able to do that reliably!
Nobody in our group managed to copy the skill and make it work for them. From the description of it, it seems like in addition to visualizing it's important to add other senses. For the person who makes it work, I think smell is always included.
How do you experience the skill?
As I said above, it's a skill that another person has and I don't have any way to copy it. https://www.facebook.com/mqrius/posts/10154824172856168?pnref=story has a write-up of details.
Replies from: Erfeyah
↑ comment by Lumifer · 2017-01-26T16:53:30.056Z · LW(p) · GW(p)
but, as far as I can tell through observing myself, the conscious stream is always one
Nope. There is a profession -- simultaneous interpretation. Basically you translate someone speaking naturally: the speaker doesn't pause and you speak in a different language simultaneously with listening to the speaker. Typically you're a sentence behind. This requires having two separate threads in your consciousness.
It's a skill that needs to be trained, not a default ability, though.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-26T17:49:23.552Z · LW(p) · GW(p)
I am not sure that is necessarily simultaneous. Attention can be on the speaker with the response being produced and delivered automatically through lots and lots of practice. This is what I observe of myself during music improvisation. The automatic part can even have creative variations.
Another example would be to try to read a paragraph that is new to you while at the same time having a proper discussion, versus reading the paragraph while you sing a song you already know by heart. You can do the second thing because the delivery of the song is automatic but not the first because both processes deal with novel input/output.
Replies from: Lumifer
↑ comment by Lumifer · 2017-01-26T18:02:48.773Z · LW(p) · GW(p)
It is pretty simultaneous because you can't afford to let any thread fall back to "automatically" for more than a second or two. It is also a recognizable sensation of having and managing two threads in your mind. You do have some main focus which flickers between the two threads depending on which needs attention more, but both stay continuous and coherent.
It is actually hard effort to maintain the two threads and not lose one of them.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-26T18:36:31.784Z · LW(p) · GW(p)
You do have some main focus which flickers between the two threads depending on which needs attention more
That is what I observe and I consider this focus to be attention. Of course it could be that I just lack the ability. If you have any kind of exercise/experiment that I can try in order to experience it please share! As long as it isn't too much effort! (Just kidding :P)
Replies from: Lumifer
↑ comment by Lumifer · 2017-01-26T18:56:34.541Z · LW(p) · GW(p)
That is what I observe and I consider this focus to be attention.
But the thing is, the focus does not switch completely, it just leans. It's like you're standing and shifting your weight from one foot to another, but still you never stand on one foot, you merely adjust the distribution of weight. And it takes explicit effort to keep the two threads coherent, you never "let go" of one completely.
As far as I know, the ability isn't "natural" (or is rare) -- it needs to be developed and trained.
As to exercises, not sure. There are classes which teach simultaneous interpreting, but you probably need to be bilingual to start with.
Replies from: Erfeyah
↑ comment by ProofOfLogic · 2017-01-26T09:14:44.708Z · LW(p) · GW(p)
The people that in the end tested lucid dreaming were the lucid dreamers themselves.
Ah, right. I agree that invalidates my argument there.
Yes, that makes sense. I don't think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments.
Ok. (I think I might have also been inferring a larger disagreement than actually existed due to failing to keep in mind the order in which you made certain replies.)
comment by MrMind · 2017-01-26T15:40:25.033Z · LW(p) · GW(p)
But since there is no sufficient evidence to support the claim of A, the required effort is significantly large and the methods appear strange to those not understanding the state, how can B rationally decide to expend the effort?
The correct but unfortunately unsatisfying answer is that it all depends on your prior information (aka your model of the world). If A asserts something, and you think A is trustworthy, then this constitutes evidence towards lucid dreaming, while if you think A is prone to lying, then his/her assertion is evidence against.
Given this, you can decide to expend the effort with the usual expected value maximization (with the usual Pascal's wager caveat: lucid dreaming must not have an extremely large utility compared to the rest of your alternatives).
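A rough sketch of that decision rule, with the claimed payoff capped as the Pascal's wager caveat suggests (all quantities are placeholders):

```python
# Expected-value decision with the claimed payoff capped, per the Pascal's
# wager caveat. All quantities are placeholder values.
def worth_trying(p_true, claimed_value, cost, value_cap=1000.0):
    """Return True if the capped expected payoff exceeds the cost."""
    return p_true * min(claimed_value, value_cap) - cost > 0

# A trustworthy friend reporting lucid dreaming: modest probability, modest cost.
print(worth_trying(p_true=0.3, claimed_value=500, cost=50))      # True
# A stranger promising an astronomically valuable secret: the cap blunts it.
print(worth_trying(p_true=0.001, claimed_value=10**9, cost=50))  # False
```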
↑ comment by Erfeyah · 2017-01-26T16:40:00.408Z · LW(p) · GW(p)
Yes trust seems to be central here. Which brings the question of rationally judging who to trust into focus. Not an easy problem, to say the least...
Replies from: Dagon
↑ comment by Dagon · 2017-01-28T15:29:15.740Z · LW(p) · GW(p)
MrMind's point is not "trust is central", but "prior beliefs are central". What person B believes about the world and has learned through many years of observation and analysis does and should color their reaction to A's claim.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-28T16:16:28.163Z · LW(p) · GW(p)
Beliefs that B has about the world contain, as a subset, the beliefs that B has about A and his/her reliability. Since the argument in my post assumes that there is currently no sufficient evidence for the experience, B has to judge based on this subset for which we use the word 'trust'.
comment by Slider · 2017-01-26T12:05:34.276Z · LW(p) · GW(p)
Powerful particle accelerators are not trivial things to produce at will. They cost a lot of money. They are not produced because the hypothesis seems strong but because the topic is important.
Someone who isn't willing to replicate the particle colliding has only the word of the one that has personally done so. Aren't you asking how you can do science without bothering to do experiments?
Which research fields are funded more is a political and value question rather than a question of rationality. In your situation the expected effort should be pretty stable and capped, but the value of the knowledge is vague. Even if you were 100% sure that a conclusive negative or positive result would be produced once the effort had been expended, it would still be an open question whether it was cost-effective.
For example, suppose that I want you to fund my research on the mating habits of spiders with $100 (the results of which you would then get). Is this a utilon-negative or utilon-positive deal? Contrast this with research that would have a 50% chance of lowering the cost of producing a product from $90 to $80, where the problem is then how much one could afford to pay for it (haggle over it).
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-26T12:48:40.735Z · LW(p) · GW(p)
The example of a particle accelerator is not analogous to mine. Knowledge of physics has come about from scientific enquiry and there is an abundance of extremely strong evidence (experimental and mathematical) that suggests the experiment makes sense. So, I think in this case to say:
Someone who isn't willing to replicate the particle colliding has only the word of the one that has personally done so.
is not an accurate statement.
Also, I don't see the relevance of attaching money value to the research and using a utility argument. This way of thinking will make this a political question. I am talking about a space of knowledge and its characteristics. I see the value of knowledge as inherent and based on its significance in the whole system. If you have a different conception of knowledge you will make a different decision on whether to pursue the space or not. But this post is only about pointing that the space exists.
Replies from: Slider
↑ comment by Slider · 2017-08-28T17:44:51.786Z · LW(p) · GW(p)
"there is an abundance of extreme strong evidence (experimental and mathematical)" means that we find the story that somebody actually performed some kind of interaction with the universe to hold the belief (very) plausible. Contrast this with faked results where somebody types up a scientific paper but actually forges or doesn't do the experiments claimed. One of the main methods of "busting" it is doing it ourselfs ie replication.
There are crises where research communities spend time and effort on the assumption that a scientific paper holds true. We could say that if this "fandom" does not involve replication then their methodology is something other than science and thus they are not scientists. However, the enthusiasm for the paper or the time spent on it doesn't by itself make it any more scientific or reliable. If the "founding paper" and derivative concepts and systems are forgeries, it taints the whole body of activity as epistemologically low quality, even if the "deriving steps" (work done after the forged paper) were logically sound.
However "what knowledge you should be a fan of?" is not a scientific question. Given all the hypotheses out these there is no right or wrong choice what to research. The focus is more that if you decide to target some field it would be proper to produce knowledge rather than nonsense. "If I knew exactly what I was doing it would not be called research, now would it?". There can be no before the fact guarantees what the outcome will be.
Asking whether you should bother to figure something out totally misses the epistemology of it. Someone who sees inherent value in knowledge will accept moderate pain to attain it. But typically this is not the only relevant motivator. In the limit where this is the only motivation, there is never a question of "hey, we could do this to try to figure out X" that would be answered in the "don't do it" variety. Such a system of morals could, for example, ask whether it is moral to ever have leisure, as that time could be used for more research, or how much more than full-time one should work as a researcher, and how far one can push overtime so as not to risk being burned out and unable to function as a researcher in the future.
A hybrid model that has interests other than knowledge can and will often say "it would be nice to know but it's too expensive - that knowledge is not worth the 3 lives lost that those resources could alternatively be used to save", or "no, we can't do human experiments, as the information is not worth the suffering of the test subjects" (the Nazis get a science boost here, as they are not slowed down by ethics boards). However, sometimes it is legitimate to enter patients into a double-blind experiment where 50% of patients will go untreated (or with only a placebo), as the knowledge attained can then be used for the benefit of other patients (the "suffering" metric comes out net positive despite having positive and negative components). So large and important tool-knowledge gains can offset other considerations. But the reverse question of "how small a knowledge gain can be assumed to be overridden by other considerations" can't really be answered without knowledge of what would override it. Or we can safely say that there is no knowledge small enough that it wouldn't be valuable if obtainable freely.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-08-30T10:52:49.533Z · LW(p) · GW(p)
I do not object to what you are saying. You are describing the rational approach. But my point is completely different. It is outside rational justification. What I tried to show in my original post is that there is a space of experiential knowledge that a rational assessment such as the one in your comment cannot reach. You can think of it as the space of inquiry being constrained by our current scientific knowledge. To follow on from my original example, if someone had suggested to a rational thinker, 250 years ago, that he try lucid dreaming, he would not have done it because his cost/benefit analysis would indicate that it was not worth his time. Today, this would not be a problem because the evidence for lucid dreaming is rationally acceptable. It logically follows that there is a space of experiential knowledge for which, at certain times, rationality is a disadvantage.
Hope that helps :)
Replies from: Slider
↑ comment by Slider · 2017-08-31T09:56:41.978Z · LW(p) · GW(p)
So seeing many white swans makes you less prepared for black swans than someone who has seen 0 swans?
I do think that someone who seriously understands the difference between inductive and deductive reasoning won't be completely cut out, but I get that in practice this will be so.
It has less to do with rationality and more to do with "stuff that I believe now". If you believe in something it will mean you will disbelieve in something else.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-08-31T11:23:06.224Z · LW(p) · GW(p)
So seeing many white swans makes you less prepared for black swans than someone who has seen 0 swans?
Yes, but it is even more complex than that as it is pointing to the communication of private experiences. Let's say that you see the swans as one category but someone tells you that there are some swans which he calls 'black' and are different. You look closely but you only see white swans. Then he tells you that this is because your perception is not trained correctly. You ask for evidence and he says that he doesn't have any but he can tell you the steps for seeing the difference.
The steps have to do with a series of exercises that you have to do every day for a year. You then have to look at the swans at a certain time of the day and from a certain angle so that the light is just right. You look at the person and wonder if they are crazy. They, on the other hand are in a position where they do not know how to convince you. You rationally decide that the evidence is just not enough to justify the time investment.
After a number of years that person comes up with a way to prove that the result is genuine and also demonstrates how the difference has consequences in the way the swans are organized. You suddenly have enough rational evidence to try! You do and you get the expected results.
The knowledge was available before the evidence crossed the threshold of rational justification.
Replies from: Slider
↑ comment by Slider · 2017-09-02T17:16:23.065Z · LW(p) · GW(p)
I guess with swans you can just say "go look at swans in Africa", which gives a recipe for a public experience that would reproduce the category boundary.
It is the case with seagulls that they are mostly white, but males and females have different ultraviolet patterns. Here someone who sees ultraviolet can easily tell the classes apart, but for someone who sees only 3 colors it will be nearly impossible. Then, through some kind of (convoluted) training, you could perhaps make your eye see ultraviolet (people with and without all the natural optics respond slightly differently to ultraviolet light sources, so there is a theoretical chance that some kind of extreme alteration of the eyes could raise it to clearly recognisable levels).
Now ultraviolet cameras can be produced, and those pretty much produce public experiences (i.e. the output can be read by a 3-color person too). Now I am wondering whether the difference between "private sensors" and constructed instruments is merely that with constructed instruments we have a theory of how they work, but with "black box sensors" we might only know how to (re)produce them without knowing how they actually work. However, it would seem that sentences like "this and this kind of machine will classify X into two distinct groups Y and Z" would be interesting challenges to your theory of experiment setting and would warrant research. That is, any kind of theory that doesn't believe "in the training" would have to claim something different about what the classification would be (that all X would be marked Y, that the groups would not be distinct, that the classifier would inconsistently label the same X as Y one time and Z the next time). But I guess those are only of indirect interest if the direct interest is whether groups Y and Z can be established at all.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-09-02T20:42:46.476Z · LW(p) · GW(p)
Hehe, I didn't mean it that literally, just trying to get the idea across :)
Nevertheless, your analysis is correct for the case where alternative ways of confirmation are available. There is of course the possibility that at the current stage of technological development the knowledge is only accessible through experience like in my lucid dreaming example in the original post.
comment by ChristianKl · 2017-01-26T11:30:12.697Z · LW(p) · GW(p)
But since there is no rational evidence to support the claim of A
I'm not sure the word "rational" is the best word to pick in that sentence. The fact that there are myths of Zeus is Bayesian evidence that Zeus exists. It isn't strong evidence but it's evidence in the Bayesian sense. On LW the notions of rational and Bayesian are linked together. If you mean something different by rational than is usually meant in this place, you should explain your notion.
"Academic" might be a word that fits better.
But since there is no rational evidence to support the claim of A, the required effort is significantly large and the methods appear strange to those not understanding the state how can B rationally decide to expend the effort?
It's basically about trust. Does Bob trust Alice when she tells him about Lucid dreaming? It's not that different from other issues of trust. If you lend someone a book, do you trust them to give it back? There is no academic evidence on which you can rely to know whether they will give the book back.
You trust people by analyzing their track record. You can also compare them to other people that you know to establish whether you should trust them.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-01-26T12:24:51.655Z · LW(p) · GW(p)
Yes, I see what you are saying. What about this rephrasing:
But since there is no sufficient evidence to support the claim of A, the required effort is significantly large and the methods appear strange to those not understanding the state, how can B rationally decide to expend the effort?
I wonder if trust is the only strategy there and, if so, what some good strategies are for choosing whom to trust. I will have to ponder this further, but is there some material on LW about trust? I am curious to see the perspective of the community on this.
Thanks!
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-01-26T12:45:10.977Z · LW(p) · GW(p)
I wrote http://lesswrong.com/r/discussion/lw/oe0/predictionbased_medicine_pbm/ to suggest a change on a broader level.
When it comes to the personal level I'm however at the moment not aware of a good LW post that speaks explicitly about trust.