Against Modest Epistemology
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2017-11-14T20:40:52.681Z · LW · GW · 48 comments
Follow-up to: Blind Empiricism
Modest epistemology doesn’t need to reflect a skepticism about causal models as such. It can manifest instead as a wariness about putting weight down on one’s own causal models, as opposed to others'.
In 1976, Robert Aumann demonstrated that two ideal Bayesian reasoners with the same priors cannot have common knowledge of a disagreement. Tyler Cowen and Robin Hanson have extended this result, establishing that even under various weaker assumptions, something has to go wrong in order for two agents with the same priors to get stuck in a disagreement.1 If you and a trusted peer don’t converge on identical beliefs once you have a full understanding of one another’s positions, at least one of you must be making some kind of mistake.
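As a minimal sketch of the underlying machinery (a much weaker claim than Aumann’s actual theorem, which requires only common knowledge of the two posteriors rather than full pooling of evidence), here is what “same priors, fully shared information” implies, with made-up numbers: two agents who start from the same prior and condition on everything both of them have seen must end up with the same posterior, however different their private evidence looked along the way.

```python
# Toy sketch, illustrative numbers only: same prior + fully pooled evidence
# forces the same posterior, however different the private evidence was.

def posterior(prior_h, likelihood_pairs):
    """Bayes' rule for a binary hypothesis H given independent evidence.

    likelihood_pairs: list of (P(e | H), P(e | not-H)) tuples.
    """
    p_h, p_not_h = prior_h, 1.0 - prior_h
    for l_h, l_not_h in likelihood_pairs:
        p_h *= l_h
        p_not_h *= l_not_h
    return p_h / (p_h + p_not_h)

common_prior = 0.5            # both agents start here
alice_evidence = (0.8, 0.3)   # P(what Alice saw | H), P(what Alice saw | not-H)
bob_evidence = (0.2, 0.6)     # P(what Bob saw | H),   P(what Bob saw | not-H)

print(posterior(common_prior, [alice_evidence]))                 # Alice alone: ~0.73
print(posterior(common_prior, [bob_evidence]))                   # Bob alone:   0.25
print(posterior(common_prior, [alice_evidence, bob_evidence]))   # pooled: ~0.47
print(posterior(common_prior, [bob_evidence, alice_evidence]))   # pooled: ~0.47 (same)
```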
If we were fully rational (and fully honest), then we would always eventually reach consensus on questions of fact. To become more rational, then, shouldn’t we set aside our claims to special knowledge or insight and modestly profess that, really, we’re all in the same boat?
When I’m trying to sort out questions like these, I often find it useful to start with a related question: “If I were building a brain from scratch, would I have it act this way?”
If I were building a brain and I expected it to have some non-fatal flaws in its cognitive algorithms, I expect that I would have it spend some of its time using those flawed reasoning algorithms to think about the world; and I would have it spend some of its time using those same flawed reasoning algorithms to better understand its reasoning algorithms. I would have the brain spend most of its time on object-level problems, while spending some time trying to build better meta-level models of its own cognition and how its cognition relates to its apparent success or failure on object-level problems.
If the thinker is dealing with a foreign cognitive system, I would want the thinker to try to model the other agent’s thinking and predict how accurate that agent will be. However, the thinker should also record the empirical outcomes, and notice if the other agent’s accuracy is more or less than expected. If particular agents are more often correct than the thinker’s model predicts, the thinker should recalibrate its estimates so that it won’t be predictably mistaken in a known direction.
In other words, I would want the brain to reason about brains in pretty much the same way it reasons about other things in the world. And in practice, I suspect that the way I think, and the way I’d advise people in the real world to think, works very much like that:
- Try to spend most of your time thinking about the object level. If you’re spending more of your time thinking about your own reasoning ability and competence than you spend thinking about Japan’s interest rates and NGDP, or competing omega-6 vs. omega-3 metabolic pathways, you’re taking your eye off the ball.
- Less than a majority of the time: Think about how reliable authorities seem to be and should be expected to be, and how reliable you are—using your own brain to think about the reliability and failure modes of brains, since that’s what you’ve got. Try to be evenhanded in how you evaluate your own brain’s specific failures versus the specific failures of other brains.2 While doing this, take your own meta-reasoning at face value.
- … and then next, theoretically, should come the meta-meta level, considered yet more rarely. But I don’t think it’s necessary to develop special skills for meta-meta reasoning. You just apply the skills you already learned on the meta level to correct your own brain, and go on applying them while you happen to be meta-reasoning about who should be trusted, about degrees of reliability, and so on. Anything you’ve already learned about reasoning should automatically be applied to how you reason about meta-reasoning.3
- Consider whether someone else might be a better meta-reasoner than you, and hence that it might not be wise to take your own meta-reasoning at face value when disagreeing with them, if you have been given strong local evidence to this effect.
That probably sounded terribly abstract, but in practice it means that everything plays out in what I’d consider to be the obvious intuitive fashion.
i.
Once upon a time, my colleague Anna Salamon and I had a disagreement. I thought—this sounds really stupid in retrospect, but keep in mind that this was without benefit of hindsight—I thought that the best way to teach people about detaching from sunk costs was to write a script for local Less Wrong meetup leaders to carry out exercises, thus enabling all such meetups to be taught how to avoid sunk costs. We spent a couple of months trying to write this sunk costs unit, though a lot of that was (as I conceived of it) an up-front cost to figure out the basics of how a unit should work at all.
Anna was against this. Anna thought we should not try to carefully write a unit. Anna thought we should just find some volunteers and improvise a sunk costs teaching session and see what happened.
I explained that I wasn’t starting out with the hypothesis that you could successfully teach anti-sunk-cost reasoning by improvisation, and therefore I didn’t think I’d learn much from observing the improvised version fail. This may sound less stupid if you consider that I was accustomed to writing many things, most of which never worked or accomplished anything, and a very few of which people paid attention to and mentioned later, and that it had taken me years of writing practice to get even that far. And so, to me, negative examples seemed too common to be valuable. The literature was full of failed attempts to correct for cognitive biases—would one more example of that really help?
I tried to carefully craft a sunk costs unit that would rise above the standard level (which was failure), so that we would actually learn something when we ran it (I reasoned). I also didn’t think up-front that it would take two months to craft; the completion time just kept extending gradually—beware the planning fallacy!—and then at some point we figured we had to run what we had.
As read by one of the more experienced meetup leaders, the script did not work. It was, by my standards, a miserable failure.
Here are three lessons I learned from that experiment.
The first lesson is to not carefully craft anything that you could literally just improvise and test immediately in its improvised version, ever. Even if the minimum improvisable product won’t be representative of the real version. Even if you already expect the current version to fail. You don’t know what you’ll learn from trying the improvised version.4
The second lesson was that my model of teaching rationality by producing units for consumption at meetups wasn’t going to work, and we’d need to go with Anna’s approach of training teachers who could fail on more rapid cycles, and running centralized workshops using those teachers.
The third thing I learned was to avoid disagreeing with Anna Salamon in cases where we would have common knowledge of the disagreement.
What I learned wasn’t quite as simple as, “Anna is often right.” Eliezer is also often right.
What I learned wasn’t as simple as, “When Anna and Eliezer disagree, Anna is more likely to be right.” We’ve had a lot of first-order disagreements and I haven’t particularly been tracking whose first-order guesses are right more often.
But the case above wasn’t a first-order disagreement. I had presented my reasons, and Anna had understood and internalized them and given her advice, and then I had guessed that in a situation like this I was more likely to be right. So what I learned is, “Anna is sometimes right even when my usual meta-reasoning heuristics say otherwise,” which was the real surprise and the first point at which something like an extra push toward agreement is additionally necessary.
It doesn’t particularly surprise me if a physicist knows more about photons than I do; that’s a case in which my usual meta-reasoning already predicts the physicist will do better, and I don’t need any additional nudge to correct it. What I learned from that significant multi-month example was that my meta-rationality—my ability to judge which of two people is thinking more clearly and better integrating the evidence in a given context—was not particularly better than Anna’s meta-rationality. And that meant the conditions for something like Cowen and Hanson’s extension of Aumann’s agreement theorem were actually being fulfilled. Not pretend ought-to-be fulfilled, but actually fulfilled.
Could adopting modest epistemology in general have helped me get the right answer in this case? The versions of modest epistemology I hear about usually involve deference to the majority view, to the academic mainstream, or to publicly recognized elite opinion. Anna wasn’t a majority; there were two of us, and nobody else in particular was party to the argument. Neither of us were part of a mainstream. And at the point in time where Anna and I had that disagreement, any outsider would have thought that Eliezer Yudkowsky had the more impressive track record at teaching rationality. Anna wasn’t yet heading CFAR. Any advice to follow track records, to trust externally observable eliteness in order to avoid the temptation to overconfidence, would have favored listening to Yudkowsky over Salamon—that’s part of the reason I trusted myself over her in the first place! And then I was wrong anyway, because in real life that is allowed to happen even when one person has more externally observable status than another.
Whereupon I began to hesitate to disagree with Anna, and hesitate even more if she had heard out my reasons and yet still disagreed with me.
I extend a similar courtesy to Nick Bostrom, who recognized the importance of AI alignment three years before I did (as I discovered afterwards, reading through one of his papers). Once upon a time I thought Nick Bostrom couldn’t possibly get anything done in academia, and that he was staying in academia for bad reasons. After I saw Nick Bostrom successfully found his own research institute doing interesting things, I concluded that I was wrong to think Bostrom should leave academia—and also meta-wrong to have been so confident while disagreeing with Nick Bostrom. I still think that oracle AI (limiting AI systems to only answer questions) isn’t a particularly useful concept to study in AI alignment, but every now and then I dust off the idea and check to see how much sense oracles currently make to me, because Nick Bostrom thinks they might be important even after knowing that I’m more skeptical.
There are people who think we all ought to behave this way toward each other as a matter of course. They reason:
a) we can’t all be more meta-rational than average; and
b) you can’t trust the reasoning you use to think you’re more meta-rational than average. After all, due to Dunning-Kruger, a young-Earth creationist will also think they have plausible reasoning for why they’re more meta-rational than average.
… Whereas it seems to me that if I lived in a world where the average person on the street corner were Anna Salamon or Nick Bostrom, the world would look extremely different from how it actually does.
… And from the fact that you’re reading this at all, I expect that if the average person on the street corner were you, the world would again look extremely different from how it actually does.
(In the event that this book is ever read by more than 30% of Earth’s population, I withdraw the above claim.)
ii.
I once poked at someone who seemed to be arguing for a view in line with modest epistemology, nagging them to try to formalize their epistemology. They suggested that we all treat ourselves as having a black box receiver (our brain) which produces a signal (opinions), and treat other people as having other black boxes producing other signals. And we all received our black boxes at random—from an anthropic perspective of some kind, where we think we have an equal chance of being any observer. So we can’t start out by believing that our signal is likely to be more accurate than average.
But I don’t think of myself as having started out with the a priori assumption that I have a better black box. I learned about processes for producing good judgments, like Bayes’s Rule, and this let me observe when other people violated Bayes’s Rule, and try to keep to it myself. Or I read about sunk cost effects, and developed techniques for avoiding sunk costs so I can abandon bad beliefs faster. After having made observations about people’s real-world performance and invested a lot of time and effort into getting better, I expect some degree of outperformance relative to people who haven’t made similar investments.
To which the modest reply is: “Oh, but any crackpot could say that their personal epistemology is better because it’s based on a bunch of stuff that they think is cool. What makes you different?”
Or as someone advocating what I took to be modesty recently said to me, after I explained why I thought it was sometimes okay to give yourself the discretion to disagree with mainstream expertise when the mainstream seems to be screwing up, in exactly the following words: “But then what do you say to the Republican?”
Or as Ozy Brennan puts it, in dialogue form:
becoming sane side: “Hey! Guys! I found out how to take over the world using only the power of my mind and a toothpick.”
harm reduction side: “You can’t do that. Nobody’s done that before.”
becoming sane side: “Of course they didn’t, they were completely irrational.”
harm reduction side: “But they thought they were rational, too.”
becoming sane side: “The difference is that I’m right.”
harm reduction side: “They thought that, too!”
This question, “But what if a crackpot said the same thing?”, I’ve never heard formalized—though it seems clearly central to the modest paradigm.
My first and primary reply is that there is a saying among programmers: “There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code.”
This is known as Flon’s Law.
The lesson of Flon’s Law is that there is no point in trying to invent a programming language which can coerce programmers into writing code you approve of, because that is impossible.
The deeper message of Flon’s Law is that this kind of defensive, adversarial, lock-down-all-the-doors, block-the-idiots-at-all-costs thinking doesn’t lead to the invention of good programming languages. And I would say much the same about epistemology for humans.
Probability theory and decision theory shouldn’t deliver clearly wrong answers. Machine-specified epistemology shouldn’t mislead an AI reasoner. But if we’re just dealing with verbal injunctions for humans, where there are degrees of freedom, then there is nothing we can say that a hypothetical crackpot could not somehow misuse. Trying to defend against that hypothetical crackpot will not lead us to devise a good system of thought.
But again, let’s talk formal epistemology.
So far as probability theory goes, a good Bayesian ought to condition on all of the available evidence. E. T. Jaynes lists this as a major desideratum of good epistemology—that if we know A, B, and C, we ought not to decide to condition only on A and C because we don’t like where B is pointing. If you’re trying to estimate the accuracy of your epistemology, and you know what Bayes’s Rule is, then—on naive, straightforward, traditional Bayesian epistemology—you ought to condition on both of these facts, and estimate P(accuracy|know_Bayes) instead of P(accuracy). Doing anything other than that opens the door to a host of paradoxes.
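As a minimal sketch of what that means in practice (all numbers invented for illustration): if you know both the base rate of accuracy for people in general and the accuracy of the subgroup that shares a trait you know you have—say, knowing Bayes’s Rule—then the straightforward Bayesian answer is the subgroup number, not the population number.

```python
# Invented numbers for illustration: conditioning on everything you know about
# yourself, versus conditioning only on the broadest reference class.

groups = [
    # (fraction of population, P(correct on some class of hard questions), knows_bayes)
    (0.90, 0.20, False),   # made-up: most people, no particular training
    (0.10, 0.45, True),    # made-up: people with explicit training in Bayes / debiasing
]

# P(accuracy): average over everyone, ignoring what else you know about yourself.
p_accuracy = sum(frac * acc for frac, acc, _ in groups)

# P(accuracy | know_Bayes): condition on the extra fact you actually possess.
bayes_mass = sum(frac for frac, _, kb in groups if kb)
p_accuracy_given_bayes = sum(frac * acc for frac, acc, kb in groups if kb) / bayes_mass

print(p_accuracy)              # 0.225 -- the reference-class-only estimate
print(p_accuracy_given_bayes)  # 0.45  -- the estimate after conditioning on all the evidence
```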
The convergence that perfect Bayesians exhibit on factual questions doesn’t involve anyone straying, even for a moment, from their individual best estimate of the truth. The idea isn’t that good Bayesians try to make their beliefs more closely resemble their political rivals’ so that their rivals will reciprocate, and it isn’t that they toss out information about their own rationality. Aumann agreement happens incidentally, without any deliberate push toward consensus, through each individual’s single-minded attempt to reason from their own priors to the hypotheses that best match their own observations (which happen to include observations about other perfect Bayesian reasoners’ beliefs).
Modest epistemology seems to me to be taking the experiments on the outside view showing that typical holiday shoppers are better off focusing on their past track record than trying to model the future in detail, and combining that with the Dunning-Kruger effect, to argue that we ought to throw away most of the details in our self-observation. At its epistemological core, modesty says that we should abstract up to a particular very general self-observation, condition on it, and then not condition on anything else because that would be inside-viewing. An observation like, “I’m familiar with the cognitive science literature discussing which debiasing techniques work well in practice, I’ve spent time on calibration and visualization exercises to address biases like base rate neglect, and my experience suggests that they’ve helped,” is to be generalized up to, “I use an epistemology which I think is good.” I am then to ask myself what average performance I would expect from an agent, conditioning only on the fact that the agent is using an epistemology that they think is good, and not conditioning on that agent using Bayesian epistemology or debiasing techniques or experimental protocol or mathematical reasoning or anything in particular.
Only in this way can we force Republicans to agree with us… or something. (Even though, of course, anyone who wants to shoot off their own foot will actually just reject the whole modest framework, so we’re not actually helping anyone who wants to go astray.)
Whereupon I want to shrug my hands helplessly and say, “But given that this isn’t normative probability theory and I haven’t seen modesty advocates appear to get any particular outperformance out of their modesty, why go there?”
I think that’s my true rejection, in the following sense: If I saw a sensible formal epistemology underlying modesty and I saw people who advocated modesty going on to outperform myself and others, accomplishing great deeds through the strength of their diffidence, then, indeed, I would start paying very serious attention to modesty.
That said, let me go on beyond my true rejection and try to construct something of a reductio. Two reductios, actually.
The first reductio is just, as I asked the person who proposed the signal-receiver epistemology: “Okay, so why don’t you believe in God like a majority of people’s signal receivers tell them to do?”
“No,” he replied. “Just no.”
“What?” I said. “You’re allowed to say ‘just no’? Why can’t I say ‘just no’ about collapse interpretations of quantum mechanics, then?”
This is a serious question for modest epistemology! It seems to me that on the signal-receiver interpretation you have to believe in God. Yes, different people believe in different Gods, and you could claim that there’s a majority disbelief in every particular God. But then you could as easily disbelieve in quantum mechanics because (you claim) there isn’t a majority of physicists that backs any particular interpretation. You could disbelieve in the whole edifice of modern physics because no exactly specified version of that physics is agreed on by a majority of physicists, or for that matter, by a majority of people on Earth. If the signal-receiver argument doesn’t imply that we ought to average our beliefs together with the theists and all arrive at an 80% probability that God exists, or whatever the planetary average is, then I have no idea how the epistemological mechanics are supposed to work. If you’re allowed to say “just no” to God, then there’s clearly some level—object level, meta level, meta-meta level—where you are licensed to take your own reasoning at face value, despite a majority of other receivers getting a different signal.
But if we say “just no” to anything, even God, then we’re no longer modest. We are faced with the nightmare scenario of having granted ourselves discretion about when to disagree with other people, a discretionary process where we take our own reasoning at face value. (Even if a majority of others disagree about this being a good time to take our own beliefs at face value, telling us that reasoning about the incredibly deep questions of religion is surely the worst of all times to trust ourselves and our pride.) And then what do you say to the Republican?
And if you give people the license to decide that they ought to defer, e.g., only to a majority of members of the National Academy of Sciences, who mostly don’t believe in God; then surely the analogous license is for theists to defer to the true experts on the subject, their favorite priesthood.
The second reductio is to ask yourself whether a superintelligent AI system ought to soberly condition on the fact that, in the world so far, many agents (humans in psychiatric wards) have believed themselves to be much more intelligent than a human, and they have all been wrong.
Sure, the superintelligence thinks that it remembers a uniquely detailed history of having been built by software engineers and raised on training data. But if you ask any other random agent that thinks it’s a superintelligence, that agent will just tell you that it remembers a unique history of being chosen by God. Each other agent that believes itself to be a superintelligence will forcefully reject any analogy to the other humans in psychiatric hospitals, so clearly “I forcefully reject an analogy with agents who wrongly believe themselves to be superintelligences” is not sufficient justification to conclude that one really is a superintelligence. Perhaps the superintelligence will plead that its internal experiences, despite the extremely abstract and high-level point of similarity, are really extremely dissimilar in the details from those of the patient in the psychiatric hospital. But of course, if you ask them, the psychiatric patient could just say the same thing, right?
I mean, the psychiatric patient wouldn’t say that, the same way that a crackpot wouldn’t actually give a long explanation of why they’re allowed to use the inside view. But they could, and according to modesty, That’s Terrible.
iii.
To generalize, suppose we take the following rule seriously as epistemology, terming it Rule M for Modesty:
Rule M: Let X be a very high-level generalization of a belief subsuming specific beliefs X1, X2, X3.… For example, X could be “I have an above-average epistemology,” X1 could be “I have faith in the Bible, and that’s the best epistemology,” X2 could be “I have faith in the words of Mohammed, and that’s the best epistemology,” and X3 could be “I believe in Bayes’s Rule, because of the Dutch Book argument.” Suppose that all people who believe in any Xi, taken as an entire class X, have an average level F of fallibility. Suppose also that most people who believe some Xi also believe that their Xi is not similar to the rest of X, and that they are not like most other people who believe some X, and that they are less fallible than the average in X. Then when you are assessing your own expected level of fallibility you should condition only on being in X, and compute your expected fallibility as F. You should not attempt to condition on being in X3 or ask yourself about the average fallibility you expect from people in X3.
Then the first machine superintelligence should conclude that it is in fact a patient in a psychiatric hospital. And you should believe, with a probability of around 33%, that you are currently asleep.
Many people, while dreaming, are not aware that they are dreaming. Many people, while dreaming, may believe at some point that they have woken up, while still being asleep. Clearly there can be no license from “I think I’m awake” to the conclusion that you actually are awake, since a dreaming person could just dream the same thing.
Let Y be the state of not thinking that you are dreaming. Then Y1 is the state of a dreaming person who thinks this, and Y2 is the state of actually being awake. It boots nothing, on Rule M, to say that Y2 is introspectively distinguishable from Y1 or that the inner experiences of people in Y2 are actually quite different from those of people in Y1. Since people in Y1 usually falsely believe that they’re in Y2, you ought to just condition on being in Y, not condition on being in Y2. Therefore you should assign a 67% probability to currently being awake, since 67% of observer-moments who believe they’re awake are actually awake.
Which is why—in the distant past, when I was arguing against the modesty position for the first time—I said: “Those who dream do not know they dream, but when you are awake, you know you are awake.” The modest haven’t formalized their epistemology very much, so it would take me some years past this point to write down the Rule M that I thought was at the heart of the modesty argument, and say that “But you know you’re awake” was meant to be a reductio of Rule M in particular, and why. Reasoning under uncertainty and in a biased and error-prone way, still we can say that the probability we’re awake isn’t just a function of how many awake versus sleeping people there are in the world; and the rules of reasoning that let us update on Bayesian evidence that we’re awake can serve that purpose equally well whether or not dreamers can profit from using the same rules. If a rock wouldn’t be able to use Bayesian inference to learn that it is a rock, still I can use Bayesian inference to learn that I’m not.
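To spell out the dream example in the same Bayesian terms—keeping the 67% base rate from above, and inventing a likelihood ratio to stand in for the detailed, mundane, introspectable character of waking experience—the posterior is driven by how much more probable that evidence is while awake than while dreaming, not by the census of observer-moments alone:

```python
# Sketch with an invented likelihood ratio; only the 67% base rate comes from the text.

def p_awake_given_evidence(prior_awake, p_e_given_awake, p_e_given_dreaming):
    """P(awake | evidence) by Bayes' rule."""
    numerator = prior_awake * p_e_given_awake
    return numerator / (numerator + (1.0 - prior_awake) * p_e_given_dreaming)

prior_awake = 0.67      # Rule M stops here: the raw base rate of observer-moments
p_e_awake = 0.5         # made-up: detailed, mundane, stable experience is common while awake
p_e_dreaming = 0.005    # made-up: that kind of experience is rare while dreaming

print(p_awake_given_evidence(prior_awake, p_e_awake, p_e_dreaming))  # ~0.995
```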
Next: Status Regulation and Anxious Underconfidence.
The full book will be available November 16th. You can go to equilibriabook.com to pre-order the book or learn more.
1. See Cowen and Hanson, “Are Disagreements Honest?” ↩
2. This doesn’t mean the net estimate of who’s wrong comes out 50-50. It means that if you rationalized last Tuesday then you expect yourself to rationalize this Tuesday, if you would expect the same thing of someone else after seeing the same evidence. ↩
3. And then the recursion stops here, first because we already went in a loop, and second because in practice nothing novel happens after the third level of any infinite recursion. ↩
4. Chapter 22 of my Harry Potter fanfiction, Harry Potter and the Methods of Rationality, was written after I learned this lesson. ↩
48 comments
Comments sorted by top scores.
comment by Vladimir · 2017-11-15T04:13:29.068Z · LW(p) · GW(p)
Great post once again. When I was younger I was a hedgehog; when I got older and started reading the sequences, I strove for complete foxhood and have been using some of these "modest epistemology" arguments, although a much weaker version that I couldn't really formalize. This has been very helpful in clarifying things to myself and seeing the weaknesses of the latter approach. One criticism: why bring up Republicans? I'm not even a Republican, and I sort of recoiled at that part.
Replies from: Silver_Swift, RobbBB
↑ comment by Silver_Swift · 2017-11-16T12:48:59.326Z · LW(p) · GW(p)
One criticism, why bring up Republicans, I'm not even a Republican and I sort of recoiled at that part.
Agreed. Also not a Republican (or American, for that matter), but that was a bit off-putting. To quote Eliezer himself:
In Artificial Intelligence, and particularly in the domain of nonmonotonic reasoning, there's a standard problem: "All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?"
What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on Artificial Intelligence and discourage them from entering the field?
Replies from: Viliam
↑ comment by Viliam · 2017-11-16T22:16:53.692Z · LW(p) · GW(p)
Yeah, I was thinking about exactly the same quote. Is this what living in the Bay Area for too long does to people?
How about using an example of a Democrat who insists that logic is colonialistic and oppressive; Aumann's agreement theorem is wrong because Aumann was a white male; and the AI should never consider itself smarter than an average human, because doing so would be sexist and racist (and obviously also islamophobic if the AI concludes that there are no gods). What arguments could Eliezer give to zir? For bonus points, consider that any part of the reply would be immediately taken out of context and shared on Twitter.
Okay, I'll stop here.
For the record, otherwise this is a great article!
Replies from: CronoDAS
↑ comment by CronoDAS · 2017-11-17T00:05:02.893Z · LW(p) · GW(p)
I think the only argument I could give is a facepalm.
Replies from: Viliam
↑ comment by Viliam · 2017-11-17T11:20:34.961Z · LW(p) · GW(p)
Same here. But -- returning to the topic of the article -- anything written by humans is in the same reference class, therefore the outside view suggests that one should ignore all arguments made by humans, ever. And anything you might say in defense of your arguments is merely an inside view, therefore less trustworthy. I mean, the strawman example in my comment could also provide special arguments in support of their argument, which only shows that "special arguments" are better ignored.
At the end you are left with: "I believe my opinions are better than yours, because it's obvious given my meta-opinions. And I believe my meta-opinions are better than yours, because it's obvious given my meta-meta-opinions. And I believe my meta-meta-opinions are better than yours, because... uhm, this is getting too abstract now, but I simply believe that this is how it is." Again, not very convincing from the outside view. :D
↑ comment by Rob Bensinger (RobbBB) · 2017-11-15T05:05:28.536Z · LW(p) · GW(p)
I think the intent is just to illustrate how ridiculous real-world violations of Flon's Law often look (using a non-fake example).
comment by Tyrrell_McAllister · 2017-11-15T19:54:20.037Z · LW(p) · GW(p)
Modest epistemology is slippery. You put forward an abstract formulation (Rule M), but "modestists" will probably not identify with it. Endorsing such an abstract view would conflict with modesty itself. Only a hedgehog would put any confidence in such a general principle, so divorced from any foxy particulars.
That's why any real-world modestist will advocate modesty only in particular contexts. That's why your friend was happy to say "Just no" about belief in God. God was not among the contexts where he thought that his being modest was warranted.
Consistent modestists don't advocate modesty "in general". They just think that, for certain people, including you and them, self-doubt is especially warranted when considering certain specific kinds of questions. Or they'll think that, for certain people, including you and them, trusting certain experts over one's own first-order reasoning is especially warranted. Now, you could ask them how their modesty could allow them to be so confident in their conclusion that modesty is warranted in just those cases. But they can consistently reply that, for people like them, that conclusion is not among the kinds of belief such that being modest is warranted.
The first several chapters of your book are very much on point, here. You're making the case that modesty is not warranted in certain cases — specific cases where your modest reader might have thought that it was (central bank policies and medical treatment). And you're providing powerful general methods for identifying such cases.
But this chapter, which argues against modesty in general, has to miss its mark. It might be persuasive to modest hedgehogs who have universalized their modesty. But modest hedgehogs are almost a contradiction in terms.
comment by ialdabaoth · 2017-11-15T20:09:17.178Z · LW(p) · GW(p)
Oh. OHHHH.
I now understand MUCH better when to, and when not to, second-guess myself. I've been using my status module to second-guess myself, instead of my prediction module. And I hadn't even noticed that there was a difference I could feel.
Well, fuck that.
comment by Ben Pace (Benito) · 2017-11-15T19:29:18.770Z · LW(p) · GW(p)
I found this chapter a bit hard to parse as a whole - there were a bunch of individual arguments against modesty, but I didn't come away feeling their interconnectedness. This might just be a personal quirk, and I will read it again, because I really appreciated many of the different sections, e.g. the specific examples of when to actually Aumann (with Salamon and Bostrom) that helped me understand how to use the agreement theorem. I also really liked the recommendation to spend most of your time on the object level, a little time on the meta, and much less on the meta-meta. As usual the writing quality is also really high, and so for these reasons I've promoted it to Featured.
I actually wasn't much of a fan of the reductios section - I feel an intuition that I expect mathematicians who only accept constructivist proofs feel, in that I have a gut level feel of 'wastefulness' in spending time thinking about epistemology that isn't about following good reasoning, but is just about finding technical disproofs of possible reasoning moves, that don't inform why the arguments for the epistemic move are false.
comment by CronoDAS · 2017-11-17T00:03:24.263Z · LW(p) · GW(p)
What’s the difference between “improvise your teaching materials and get them in front of students as soon as possible because you don’t know what you’ll learn” and what Strawman Startup Founder wanted to do by getting his program to users as soon as possible?
comment by ryan_b · 2017-11-15T17:31:28.479Z · LW(p) · GW(p)
The new sequence is very good and I have been able to put a couple of them to immediate use. I had the following series of thoughts when reading this piece:
1) This sequence appears less timeless than Rationality: A-Z, because that was addressed to everyone and this looks addressed to the current community.
2) Is it really targeted at the community specifically, or should I expect Modest Epistemology to be a common local maximum in the pursuit of clear thinking? If the latter, then the first feeling was wrong.
3) To what extent does the community represent a local maximum or plateau? If I think a lot of other individuals will go through the same sort of progression, there may be overlap.
I realize the goal is for the community to be a special case of the general pursuit of rationality, but it has been a minute since I thought about it as a whole. This is another mark in favor of the sequence.
comment by Said Achmiz (SaidAchmiz) · 2017-11-14T21:14:58.275Z · LW(p) · GW(p)
Eliezer, you mischaracterize epistemic modesty, I think, by equating it with majoritarianism.
An example:
As I was reading this post, and got to the part about your disagreement with Anna, I thought:
“Ah, but where is the third person, who heard both Eliezer and Anna, and said to them, ‘the best way to teach people about detaching from sunk costs is, first, to go and find someone who has real-world experience in teaching people things, and ask them what the best way to teach this thing is’?”
And I kept reading, and found no mention of such a person, and was perplexed.
I judge both you and Anna to be epistemically immodest, on those grounds.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2017-11-14T21:59:38.779Z · LW(p) · GW(p)
I didn't quite follow your comment - why does the fact that such a person didn't exist mean Eliezer was characterising modesty as majoritarianism?
Also, this paragraph states that majoritarianism is one of three things (Eliezer thinks) modesty is often associated with, and he goes on to explain why none of them applied here.
The versions of modest epistemology I hear about usually involve deference to the majority view, to the academic mainstream, or to publicly recognized elite opinion. Anna wasn’t a majority; there were two of us, and nobody else in particular was party to the argument. Neither of us were part of a mainstream. And at the point in time where Anna and I had that disagreement, any outsider would have thought that Eliezer Yudkowsky had the more impressive track record at teaching rationality. Anna wasn’t yet heading CFAR. Any advice to follow track records, to trust externally observable eliteness in order to avoid the temptation to overconfidence, would have favored listening to Yudkowsky over Salamon—that’s part of the reason I trusted myself over her in the first place! And then I was wrong anyway, because in real life that is allowed to happen even when one person has more externally observable status than another.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2017-11-14T22:33:41.700Z · LW(p) · GW(p)
Yes, I may have been unclear, sorry. Let me try to clarify…
I didn't quite follow your comment - why does the fact that such a person didn't exist mean Eliezer was characterising modesty as majoritarianism?
It doesn’t, of course.
My example was an example of what I am saying epistemic modesty is, in contrast to what Eliezer seems to think it is—i.e., majoritarianism (which view of his did not, as I saw it, need an example).
Also, this paragraph states that majoritarianism is one of three things (Eliezer thinks) modesty is often associated with, and he goes to explain why none of them applied here.
Indeed, I read it. And upon reading this part—
And at the point in time where Anna and I had that disagreement, any outsider would have thought that Eliezer Yudkowsky had the more impressive track record at teaching rationality. Anna wasn’t yet heading CFAR. Any advice to follow track records, to trust externally observable eliteness in order to avoid the temptation to overconfidence, would have favored listening to Yudkowsky over Salamon—that’s part of the reason I trusted myself over her in the first place!
—I thought: “No; any outsider would have thought that you both had unimpressive track records, and that neither of you were anywhere near the sort of expert to whom epistemic modesty counsels deference.” Advice to follow track records and trust externally observable eliteness, would—contra Eliezer—have favored listening to neither Eliezer nor Anna, but rather finding an actual expert!
In short: this elaborate hand-wringing over whether Eliezer should have trusted himself over Anna, or Anna over himself, badly misses the point of epistemic modesty. I daresay Eliezer is knocking down a strawman.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2017-11-15T00:09:08.970Z · LW(p) · GW(p)
“No; any outsider would have thought that you both had unimpressive track records, and that neither of you were anywhere near the sort of expert to whom epistemic modesty counsels deference.” Advice to follow track records and trust externally observable eliteness, would—contra Eliezer—have favored listening to neither Eliezer nor Anna, but rather finding an actual expert!
Who is the relevant expert class in how to teach rationality? I daresay they'd both read the relevant debiasing literature (which I've read and is remarkably sparse and low in recommendations), and to figure out an expert class after that is a lot of inside view work. Is it high school math teachers? University lecturers? The psychologists who've helped create the heuristics and biases literature but profess limited knowledge of how to do debiasing? If the latter, does epistemic modesty profess that Eliezer and Anna are not allowed to come up with their own models to test, for they are not the psychologists with the 'expert' label, and the 'experts' have said they don't know how to do it?
My read here is that Eliezer is trying to figure out what modesty epistemology says to do in this disagreement between Eliezer and Anna, and finding it unhelpful because it doesn't actually talk about the all-important question of how to figure out new truths.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2017-11-15T01:02:31.459Z · LW(p) · GW(p)
Who is the relevant expert class in how to teach rationality?
I’m glad you asked, because this highlights what I think is a fairly common mistake among rationalists (not only, of course, but notably so).
“Who is the relevant expert class in how to teach rationality” is the wrong question; specifically, it’s a wrong framing. The right question is:
Are there any experts whose expertise bears on the question “how to teach rationality”, in a way that makes them plausibly either
a) plain more knowledgeable/competent in “how to teach rationality”, than the non-expert; or,
b) able to contribute significant knowledge/competence to a non-expert’s attempt to figure out how to teach rationality?
And if such experts exist, in what fields may we find them?
Asked thus, some obvious answers present themselves. We might, for instance, look for experts in—teaching!
After all—pedagogy, curriculum design… these are well-studied topics! There is a tremendous body of professional knowledge and expertise there. (My mother is an educator—a teacher, a curriculum designer, and a consultant on the design of educational materials—so I am somewhat familiar with this subject in particular.) I think, if you have never investigated this field, you might be rather shocked at the depth of expertise that’s there to find.
So the very first thing I would do, in Eliezer and Anna’s place, is seek out such a person, or several such people; and have a good, long chat with them. Then, I would either invite such a person to play a key role in this “teaching rationality” project, or ask them for pointers on finding someone similar who might be thus invited. I would also follow any other advice that person (or people) gave me.
It seems to me that in order to deny this view, you would have to insist that “teaching rationality” is so profoundly different from “teaching anything else” or “teaching, in general” that the expert’s competence, domain knowledge, professional experience, etc., is (almost) entirely non-transferable to “teaching rationality” (‘almost’, because it is a coherent—absurd, in my view, but coherent—view to say, e.g., that yes the expert’s competence is slightly transferable, but that slight applicable expertise is swamped entirely by the extraordinary qualities of the subject matter, and Eliezer&Anna’s special competence in that). But how on earth would you, a non-expert, know this to be the case, without having made the slightest attempt to consult an expert, and, indeed, (seemingly) without being aware that there exist experts to be consulted?
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2017-11-15T01:23:03.876Z · LW(p) · GW(p)
Yup, it seems to me quite correct that general expertise in pedagogy, and specifically pedagogy of technical subjects (math, physics etc) is highly relevant. I do believe that one of the founders of CFAR has a PhD in Math/Science Education, so I think that Eliezer+Anna agreed with you and sought such expertise pretty early on.
(In general from my experience of CFAR workshops, high school, and university, CFAR seems to have excelled in incorporating general lessons of pedagogy, and I think as a whole have been very strong on the due diligence of learning from experts in fields that have valuable insights for their work such as cognitive psychology and teaching. Their lessons often include case studies of pedagogical mistakes they learned from various literature reviews, and they teach how to avoid these mistakes when trying yourself to learn a skill.)
I'm curious if we in fact have a disagreement? Might you want to defend the stronger claim that not just ought they have looked into such fields early on (as they did) but in fact not attempted to test an improvised class whatsoever until they had concluded an in-depth literature review and discussed it with people with more experience and a better track record in pedagogy?
Replies from: RobbBB, SaidAchmiz
↑ comment by Rob Bensinger (RobbBB) · 2017-11-15T01:35:19.098Z · LW(p) · GW(p)
Is Said claiming anything about there being an object-level consensus on these issues? If there's an academic consensus among education specialists saying something like 'always improvise classes as a test run as early as possible, even if you don't have prior experience and don't have a lesson plan or exercises worked out yet', then that certainly seems relevant to this example. Ditto to a lesser extent if script-reading and student-led classes are known to be a dead end.
I don't think Eliezer and the modest would actually disagree that if there's an academic consensus like that, then that's an important input to the decisionmaking process. I think that treating this as the crux would not be steel-manning modest epistemology, and would be straw-manning the civilizational inadequacy viewpoint.
But it's still relevant to the object-level dispute, and the object-level disputes all matter because (in aggregate) they help us judge how these different reasoning heuristics actually perform in the real world.
(This is one reason I've been happy to see how often these LW discussions have ended up focusing in on specific object-level details and examples. Eliezer's book is in some sense an argument for "actually, let's just talk more about object-level stuff," so this seems totally appropriate to me.)
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2017-11-15T02:29:06.627Z · LW(p) · GW(p)
I don’t know whether there’s an object-level consensus; I am not an expert in education, after all. (I’ll ask my mother about it at some point, perhaps; it is an interesting point on its own, certainly.)
But this post is fairly clearly making a meta-level point, which is what I am addressing.
Is there an academic[1] consensus? Well, perhaps the thing to ask is, why don’t we, the readers, already know the answer to that? This should be discussed in Eliezer’s post! At the point in his story when the question arises, Eliezer should say—“and then, instead of spending any serious length of time considering or debating which of us was right, we immediately sought the counsel of experts; and delved into the literature; and here is what we found…” He doesn’t say any such thing, because (it seems) nothing of the sort ever happened, and instead he and Anna argued the matter, and then went as far as actually spending two months implementing one plan, and then some more time implementing another plan, etc., without ever seeking expert guidance.
In short, the fact that there is still, among us readers, a question of what the expert consensus is, is itself damning!
[1] By the way, it’s noteworthy that you speak of “academic consensus”, whereas I spoke of expert competence, of domain knowledge, of professional expertise. This is the old dichotomy of an “expert on” vs. an “expert at”. A working educator, even one with a Ph.D., is an expert at, whereas someone who studies the field and does academic research on it is an expert on.
Now, I do not say that in such cases you ought not consult experts on the field of your endeavor, and indeed your expert consultants should have at least some expertise on the field in question; but expertise at the field is at least as important.
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-11-16T20:10:15.013Z · LW(p) · GW(p)
There's a core assumption in CFAR that the way academics currently go about inventing debiasing interventions doesn't seem to work well and that it's worth trying a different strategy.
A while ago I tried to read up on education research. The problem I ran into was that most papers are full of postmodern jargon. The class that was taught at our university was full of such paper assignments. Postmodern fields generally don't have much consensus either.
Bill Gates presented statistics in one of his TED talks about what determines teacher performance. According to his numbers, students of teachers who have a master's in education don't do better than students whose teacher has only a bachelor's degree.
This means that the knowledge that's taught seems to be worthless for getting students to learn the material better, and in addition the teaching quality of the education professors is so bad that they don't manage to teach actual teaching skills.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2017-11-16T23:58:18.538Z · LW(p) · GW(p)
… academics…
… education research…
… papers…
Yes, that is precisely why I suggested that the right way to go is to speak to a working educator, an expert at, and not merely to speak to academics (experts on), and certainly not to simply read some papers!
There's a core assumption in CFAR…
And is it an axiom, that CFAR is right about these things? Isn't that what we're here to discuss?
This means that the knowledge that's taught seems to be worthless for getting students to learn the material they are supposed to learn better and in addition the teaching quality of the education professors is so bad that they don't manage to teach actual teaching skills.
This does not match my experience. In any case, what you would want to do (obviously, or so I thought) is to find an educator who is known (via recommendation, reputation, etc.) to be capable.
Replies from: CronoDAS
↑ comment by CronoDAS · 2017-11-17T00:16:35.368Z · LW(p) · GW(p)
It’s hard to teach implicit unconscious knowledge; the best pianists aren’t also the best piano teachers. Just because someone can teach math, doesn’t mean they can teach teaching!
Replies from: edward-jones
↑ comment by Edward Jones (edward-jones) · 2017-12-13T11:16:32.599Z · LW(p) · GW(p)
It seems like they could still demonstrate the procedure they'd use for teaching, if only for an unrelated topic, and some of that could be transferred. I expect it would be more efficient than trying to reconstruct the syllabus-building techniques (&c) separately.
There are also teachers who train other teachers, presumably, so you could employ the help of one of them.
↑ comment by Said Achmiz (SaidAchmiz) · 2017-11-15T02:14:11.360Z · LW(p) · GW(p)
I do believe that one of the founders of CFAR has a PhD in Math/Science Education, so I think that Eliezer+Anna agreed with you and sought such expertise pretty early on.
Did they? Then what on earth is this post about…?
Might you want to defend the stronger claim that not just ought they have looked into such fields early on (as they did) but in fact not attempted to test an improvised class whatsoever until they had concluded an in-depth literature review and discussed it with people with more experience and a better track record in pedagogy?
Absolutely—although I would amend, perhaps, the ‘literature review’ bit—not that it would be inadvisable, just that I’d seek out an expert to speak with first, and then, as part of (or in parallel with) following that expert’s advice, review the literature. (Or, heck, do it immediately, why not. Depends how easy it is to deploy a team member to do a lit review, vs. how easy it is to get hold of a suitable expert.)
More fundamentally, what I am saying is that Eliezer’s ruminations about whether to trust himself or Anna on this matter are simply irrelevant, because both of the possible answers that he proffers are wrong. If it costs nothing, or very little, to test an improvised class, sure, do it. Heck, do whatever you want, at any time and for any reason. But if you think that what you’re doing matters; if success is important, if failure is bad, if time and effort spent on the attempt are valuable; then the answer to “do I trust myself (a non-expert) or my friend (also a non-expert)” is “neither; immediately find an expert and consult them”.
Replies from: TAG
comment by TAG · 2017-11-16T16:16:46.626Z · LW(p) · GW(p)
“Those who dream do not know they dream, but when you are awake, you know you are awake.”
I once made that argument in a dream...
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2017-11-16T19:17:40.688Z · LW(p) · GW(p)
Please don't bold your whole comment.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2017-11-16T19:18:31.638Z · LW(p) · GW(p)
I think this is a bug, not TAG's fault.
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-11-16T20:11:36.324Z · LW(p) · GW(p)
TAG's fault is to avoid unbolding the text that the UI bug bolds.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2017-11-16T20:13:00.819Z · LW(p) · GW(p)
I feel that this is counterintuitive enough that I wouldn't fault anyone for it. I hope I can get around to fixing that bug soon.
comment by Chris_Leong · 2017-11-17T04:53:47.052Z · LW(p) · GW(p)
The Crackpot Problem is a very interesting challenge and we need to be able to address it.
I suspect that if you talked to the average crackpot, they would be unable to give reasons that the average person would ever find plausible for why they are more likely to be correct, and that this would become clearer to the average person the more they dug into it.
On the other hand, if you have good reasons for thinking that your epistemology may be good (i.e. you've read into cognitive biases, taken the time to read other perspectives, studied logic, etc.), then other people will find it at least plausible that you have a better epistemology.
This eliminates a good proportion of the crackpot field, but it doesn't completely solve the issue as there are probably some extremely verbally fluent and persuasive crackpots out there and most people can't differentiate them from rationalists.
comment by ChristianKl · 2017-11-16T19:15:40.749Z · LW(p) · GW(p)
I mean, the psychiatric patient wouldn’t say that, the same way that a crackpot wouldn’t actually give a long explanation of why they’re allowed to use the inside view. But they could, and according to modesty, That’s Terrible.
Why does it matter more that you could hypothesize a crackpot who would act that way than what we have learned empirically about how crackpots actually behave?
If the goal is to not fall into the trap of being a crackpot, then it's very worthwhile to reason based on how actual crackpots behave.
comment by whpearson · 2017-11-15T19:07:22.170Z · LW(p) · GW(p)
I occasionally wonder if I am a crank (not quite a superintelligent being conditioning on them being insane, but it is some kind of evidence).
It keeps me from being too sure of my ideas. I console myself that I am exploring relatively unexplored areas (and hard to explore areas). But I am ready to pivot if my direction seems fruitless.
comment by AgentME · 2020-04-23T05:37:39.451Z · LW(p) · GW(p)
Not saying this just because I disagree with Flon's Law, but I found the use of Flon's Law to argue against Modest Epistemology very distracting in the article, partly because the argument that all programming languages are inherently equally easy to mess up in seems like a very typical example of Modest Epistemology. (We imagine there are people with beliefs X1, X2, X3..., Xn, each of the form "I believe Pi is the best language". Throwing out all the specifics, we must accept that they're all equally negligibly correct.)
Probability theory and decision theory shouldn’t deliver clearly wrong answers. [...] But if we’re just dealing with verbal injunctions for humans, where there are degrees of freedom, then there is nothing we can say that a hypothetical crackpot could not somehow misuse.
It's funny that Flon's Law is used to support the bit leading up to this, because it's almost exactly what I'd say to argue against Flon's Law: Some programming languages encourage the writer by default to structure their ideas in ways that certain properties can be automatically enforced or checked in a mathematical way from the structure, and dynamic untyped languages are instead more like arbitrary verbal reasoning that isn't rigorous enough for any properties to be proven from the structure itself. Sure, it's technically possible to make nonsense in any programming language, but you have to try harder in some, in the same way you have to try harder to make diagonals with legos than plain blocks, or be a little clever to make a false math proof that looks right on the surface while in verbal reasoning you can say something that sounds right but is wrong just by using a word twice while relying on different meanings in each use.
I get the logic the article is going for in using Flon's Law -- that it's trying to make a parallel between fancy programming languages and flavors of verbal reasoning (Modest Epistemology) that claim to be able to solve problems from their structure without engaging with the content -- but then the article goes on to talk about the specifics of math are actually better than verbal reasoning like Modest Epistemology, and it's extremely confusing to read as someone that perceives the correct groupings as {math, fancy programming languages with provable properties} and {verbal reasoning, dynamic untyped programming languages}, which is the very division that Flon's Law argues against being useful.
(Huh, this really wasn't intended to be my thesis on Flon's Law, but I guess it is now. I just meant to nitpick the choice of metaphor and argue that Flon's Law is at the very least an ambiguously bad example to use.)
comment by ChickCounterfly · 2017-11-30T17:48:11.554Z · LW(p) · GW(p)
> If we were fully rational (and fully honest), then we would always eventually reach consensus on questions of fact.
The things you cite right before this sentence say the exact opposite. This is only possible given equal priors, and there's no reason to assume rational and honest people would have equal priors about...anything.
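To spell that out with made-up numbers (mine, purely illustrative): two honest Bayesians who share the likelihoods but not the priors update on the same evidence by the same likelihood ratio, and the gap between their prior odds never cancels:

$$\frac{P_A(H \mid E)}{P_A(\lnot H \mid E)} = \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P_A(H)}{P_A(\lnot H)}$$

With a shared likelihood ratio of 4, prior odds of 9:1 give posterior odds of 36:1 (about 0.97), while prior odds of 1:9 give 4:9 (about 0.31). Rationality and honesty close nothing here; only common priors would.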
comment by countingtoten · 2017-11-14T23:26:45.335Z · LW(p) · GW(p)
I assume you believe you're awake because you've tried to levitate, or tested the behavior of written words, or used some other more-or-less reliable test?
Replies from: RobbBB, sil-ver
↑ comment by Rob Bensinger (RobbBB) · 2017-11-14T23:48:46.490Z · LW(p) · GW(p)
I think it's more like "I don't actually have dreams that include perfectly detailed, perfectly realistic, completely specified visual fields," "I don't actually have dreams in which I spend five minutes eating a salad or doing similarly mundane tasks," "I don't actually have dreams in which I spend all this time visually inspecting a webpage in all this sensory detail and effortfully introspecting about my experience in order to generate examples to illustrate an abstract idea," etc. Although I rarely realize it while I'm dreaming, my whole experience of dreaming is actually really different from any given random five-minute snapshot of my daily life.
Replies from: countingtoten
↑ comment by countingtoten · 2017-11-17T18:50:06.791Z · LW(p) · GW(p)
No, seriously, what you're saying sounds like nonsense. Number one, dreams can have vivid stimuli that I recall explicitly using as evidence that I wasn't dreaming; of course I've also thought I was performing mundane tasks. Number two, how does dream-you tell the difference without having "tested the behavior of written words, or used some other more-or-less reliable test?"
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2017-11-17T19:01:39.239Z · LW(p) · GW(p)
The thing I'm claiming is that (at least for most people) dreams actually feel really different from being awake. The moment-to-moment experience of dreaming isn't a perfect phenomenological copy-paste of a representative waking experience. (At least, insofar as I'm currently accurately remembering what dreaming is like. But if I'm not accurately remembering, it may be in the direction of overestimating the verisimilitude of my dreams, not just underestimating it.)
The important claim here is that your moment-to-moment sensory experience while awake can be full of features that give you good evidence you're awake, even if your dreaming self lacks the capacity to recognize that those things are missing while you're dreaming. Hence:
> If a rock wouldn’t be able to use Bayesian inference to learn that it is a rock, still I can use Bayesian inference to learn that I’m not.
The bits of information that I'm receiving about a salad I'm visually inspecting, about a thought I'm introspecting about while eating a salad, etc. provide Bayesian evidence that I'm not dreaming. That's not because my dreaming self would have the metacognitive wherewithal to notice the absence of those kinds of bits and infer that it's dreaming. It's because the probability of that stream of evidence given that I'm awake is much higher than its probability given that I'm dreaming.
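Writing out the update being gestured at (with E standing for the detailed sensory and introspective stream, and the likelihood numbers purely illustrative):

$$P(\text{awake} \mid E) = \frac{P(E \mid \text{awake})\,P(\text{awake})}{P(E \mid \text{awake})\,P(\text{awake}) + P(E \mid \text{dreaming})\,P(\text{dreaming})}$$

If, say, P(E | awake) = 0.5 and P(E | dreaming) = 0.005, then even from even prior odds the posterior on being awake is about 0.99. The dreaming self never needs to be able to run this calculation for the waking self's evidence to count.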
Replies from: countingtoten
↑ comment by countingtoten · 2017-11-17T19:45:59.099Z · LW(p) · GW(p)
The part about sensory data sounds totally wrong to me personally, and of course you know where this is going (see also). Whereas my dream self can, in fact, notice logical flaws or different physics and conclude that I'm dreaming.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2017-11-17T20:24:54.721Z · LW(p) · GW(p)
Cool, that makes sense of our disagreement! You're the second person I've run into who was puzzled by the dream reductio, and for the same reason: their dreams were very mundane, detailed, and otherwise "realistic," closely matching waking experiences in sensory feel and content.
Replies from: countingtoten
↑ comment by countingtoten · 2017-11-17T21:22:37.610Z · LW(p) · GW(p)
That's actually not quite right - my dream *content* varies widely in how mundane it is. My point is that I learned to recognize dreams not by practicing the thought 'This experience is too vivid to be a dream,' but by practicing tests which seemed likely to work.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2017-11-17T22:34:03.980Z · LW(p) · GW(p)
Good to know. I want to continue to emphasize, though, that talking about "learning to recognize dreams" as a single thing might be the wrong framing. The skills and techniques that work best for "learning to recognize dreams when you're asleep" may be very different from the skills and techniques that work best for "learning to recognize non-dreams when you're awake."
When people in the lucid dreaming community practice reality checks while awake, for example, they're really trying to train a habit into themselves that they expect to be useful for detecting dreams while they're sleeping; they're not earnestly trying to come up with the most efficient possible methods for updating, while awake, on their sensory and introspective evidence for the hypothesis that they're awake.
(I would claim that this is because they're not actually uncertain about whether they're awake, because they're swimming in an ocean of tiny omnipresent moment-to-moment bits of evidence that they're awake. In the overwhelming majority of cases, they're not anxious or curious about the possibility that this is all a dream (nor should they be); they're going through the motions of running tests in order to have the habit installed when they need it later.)
Replies from: countingtoten
↑ comment by countingtoten · 2017-11-18T03:00:37.435Z · LW(p) · GW(p)
Like many people in the past year, I frequently wonder if I'm dreaming while awake. This seems to make up >10% of the times I've tested it. I'm also running out of ways to say that I mean what I say.
You may be right that the vast majority of the time (meaningful cough) when humans wonder if they're dreaming, they are. People who know that may account for nearly all exceptions.
↑ comment by Rafael Harth (sil-ver) · 2017-11-20T11:28:34.565Z · LW(p) · GW(p)
I personally don't think I'm awake anymore when dreaming (ever, I think). Instead I'm unsure, and then conclude I'm probably not awake, because if I were, I would be sure. I have still ended up assigning a fairly sizable probability to being awake (rather than below 1%) a bunch of times, though.
comment by TAG · 2017-11-16T16:10:59.219Z · LW(p) · GW(p)
> I think that’s my true rejection, in the following sense: If I saw a sensible formal epistemology underlying modesty and I saw people who advocated modesty going on to outperform myself and others, accomplishing great deeds through the strength of their diffidence, then, indeed, I would start paying very serious attention to modesty.
Outperformance can include the avoidance of errors.
Replies from: CronoDAS