Get Curious

post by lukeprog · 2012-02-24T05:10:48.795Z · LW · GW · Legacy · 100 comments

Contents

  Step 1: Feel that you don't already know the answer.
    Exercise 1.1: Import the feeling of uncertainty.
    Exercise 1.2: Consider all the things you've been confident but wrong about.
  Step 2: Want to know the answer.
    Exercise 2.1: Visualize the consequences of being wrong.
    Exercise 2.2: Make plans for different worlds.
    Exercise 2.3: Recite the Litany of Tarski.
    Exercise 2.4: Recite the Litany of Gendlin.
  Step 3: Sprint headlong into reality.
  Conclusion: Curiosity in Action

Being levels above in [rationality] means doing rationalist practice 101 much better than others [just like] being a few levels above in fighting means executing a basic front-kick much better than others.

- lessdazed

I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.

- Bruce Lee

Recently, when Eliezer wanted to explain why he thought Anna Salamon was among the best rationalists he knew, he picked out one feature of Anna's behavior in particular:

I see you start to answer a question, and then you stop, and I see you get curious.

For me, the ability to reliably get curious is the basic front-kick of epistemic rationality. The best rationalists I know are not necessarily those who know the finer points of cognitive psychology, Bayesian statistics, and Solomonoff Induction. The best rationalists I know are those who can reliably get curious.

Once, I explained the Cognitive Reflection Test to Riley Crane by saying it was made of questions that tempt your intuitions to quickly give a wrong answer. For example:

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you haven't seen this question before and you're like most people, your brain screams "10 cents!" But elementary algebra shows that can't be right. The correct answer is 5 cents. To get the right answer, I explained, you need to interrupt your intuitive judgment and think "No! Algebra."
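To spell the algebra out: write b for the ball's price in dollars, so the bat costs b + 1.00 and

    b + (b + 1.00) = 1.10
    2b = 0.10
    b = 0.05

i.e. five cents. (If the ball really cost 10 cents, the bat would cost $1.10 and the pair would cost $1.20.)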

A lot of rationalist practice is like that. Whether thinking about physics or sociology or relationships, you need to catch your intuitive judgment and think "No! Curiosity."

Most of us know how to do algebra. How does one "do" curiosity?

Below, I propose a process for how to "get curious." I think we are only just beginning to learn how to create curious people, so please don't take this method as Science or Gospel but instead as an attempt to Just Try It.

As with my algorithm for beating procrastination, you'll want to practice each step of the process in advance so that when you want to get curious, you're well-practiced on each step already. With enough practice, these steps may even become habits.

Step 1: Feel that you don't already know the answer.

If you have beliefs about the matter already, push the "reset" button and erase that part of your map. You must feel that you don't already know the answer.

Exercise 1.1: Import the feeling of uncertainty.

  1. Think of a question you clearly don't know the answer to. When will AI be created? Is my current diet limiting my cognitive abilities? Is it harder to become the Prime Minister of Britain or the President of France?
  2. Close your eyes and pay attention to how that blank spot on your map feels. (To me, it feels like I can see a silhouette of someone in the darkness ahead, but I wouldn't take bets on who it is, and I expect to be surprised by their identity when I get close enough to see them.)
  3. Hang on to that feeling or image of uncertainty and think about the thing you're trying to get curious about. If your old certainty creeps back, switch back to a question you clearly can't answer (say, who composed the Voynich manuscript), then import that feeling of uncertainty into the thing you're trying to get curious about again.

Exercise 1.2: Consider all the things you've been confident but wrong about.

  1. Think of things you once believed but were wrong about. The more similar those beliefs are to the beliefs you're now considering, the better.
  2. Meditate on the frequency of your errors, and on the depths of your biases (if you know enough cognitive psychology).

Step 2: Want to know the answer.

Now, you must want to fill in this blank part of your map.

You mustn't wish it to remain blank due to apathy or fear. Don't avoid getting the answer because you might learn you should eat less pizza and more half-sticks of butter. Curiosity seeks to annihilate itself.

You also mustn't let your desire that your inquiry have a certain answer block you from discovering how the world actually is. You must want your map to resemble the territory, whatever the territory looks like. This enables you to change things more effectively than if you falsely believed that the world was already the way you want it to be.

Exercise 2.1: Visualize the consequences of being wrong.

  1. Generate hypotheses about the ways the world may be. Maybe you should eat less gluten and more vegetables? Maybe a high-protein diet plus some nootropics would boost your IQ 5 points? Maybe your diet is fairly optimal for cognitive function already?
  2. Next, visualize the consequences of being wrong, including the consequences of remaining ignorant. Visualize the consequences of performing 10 IQ points below your potential because you were too lazy to investigate, or because you were strongly motivated to justify your preference for a particular theory of nutrition. Visualize the consequences of screwing up your neurology by taking nootropics you feel excited about but that often cause harm to people with cognitive architectures similar to your own.

Exercise 2.2: Make plans for different worlds.

  1. Generate hypotheses about the way the world could be — different worlds you might be living in. Maybe you live in a world where you'd improve your cognitive function by taking nootropics, or maybe you live in a world where the nootropics would harm you.
  2. Make plans for what you'll do if you happen to live in World #1, what you'll do if you happen to live in World #2, etc. (For unpleasant possible worlds, this also gives you an opportunity to leave a line of retreat for yourself.)
  3. Notice that these plans are different. This should produce in you some curiosity about which world you actually live in, so that you can make plans appropriate for the world you do live in rather than for one of the worlds you don't live in.

Exercise 2.3: Recite the Litany of Tarski.

The Litany of Tarski can be adapted to any question. If you're considering whether the sky is blue, the Litany of Tarski is:

If the sky is blue,
I desire to believe the sky is blue.
If the sky is not blue,
I desire not to believe the sky is blue.

Exercise 2.4: Recite the Litany of Gendlin.

The Litany of Gendlin reminds us:

What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it
doesn't make it go away.
And because it's true,
it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.

Step 3: Sprint headlong into reality.

If you've made yourself uncertain and then curious, you're now in a position to use argument, empiricism, and scholarship to sprint headlong into reality. This part probably requires some domain-relevant knowledge and an understanding of probability theory and value of information calculations. What tests could answer your question quickly? How can you perform those tests? If the answer can be looked up in a book, which book?

These are important questions, but I think the first two steps of getting curious are more important. If someone can master steps 1 and 2, they'll be so driven by curiosity that they'll eventually figure out how to do step 3 for many scenarios. In contrast, most people who are equipped to do step 3 pretty well still get the wrong answers because they can't reliably execute steps 1 and 2.

Conclusion: Curiosity in Action

A burning itch to know is higher than a solemn vow to pursue truth. If you think it is your duty to doubt your own beliefs and criticize your own arguments, then you may do this for a while and conclude that you have done your duty and you're a Good Rationalist. Then you can feel satisfied and virtuous and move along without being genuinely curious.

In contrast,

if you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.

My recommendation? Practice the front-kick of epistemic rationality every day. For months. Train your ape-brain to get curious.

Rationality is not magic. For many people, it can be learned and trained.

100 comments

Comments sorted by top scores.

comment by ryjm · 2012-02-23T15:38:01.141Z · LW(p) · GW(p)

Also, learn to differentiate between genuine curiosity and what I like to call pseudo-curiosity - basically, being satisfied by conclusions rather than concepts. Don't let the two overlap. This is especially hard when conclusions are most of the time readily available and often the first item in a google search. In terms of genuine curiosity, google has been the bane of my existence - I will start off moderately curious, but instead of moving to that higher stage of curiosity, I will be sated by facts and conclusions without actually learning anything (similar to a guessing the teacher's password situation). After a couple hours of doing this, I feel very scholarly and proud of my ability to parse so much information, when in reality all I did was collect a bunch of meaningless symbols.

To combat this, I started keeping a "notebook of curiosities". The moment I get curious, I write whatever it is I'm curious about, and then write everything I know about it. At this point, I determine whether or not anything I know is a useful springboard; otherwise, I start from scratch. Then I circle my starting node and start the real work, with the following rules:

  • Every fact or concept I write must follow directly from a previous node (never more than two or three reasoning steps away). Most of the time, this results in a very large diagram referencing multiple pages. I use pen and paper only because I like to use it outside.
  • Wikipedia is a last resort - I don't want to be tempted by easy facts. I use textbooks -> arxiv -> jstor -> google scholar in order of preference. It's a lot of work.
  • If I skip some reasoning or concept because I think it is trivial, I write the reason why it is trivial. Most of the time, this results in something interesting.

Doing this has revealed many gaps in my knowledge. I've become increasingly aware of a lack of internalization of basic concepts and modes of thinking that are necessary for certain concepts. It also forces me to confront my actual interest in the subject, rather than my perceived interest.

The majority of what I use it for is math related, so it's more tailored to that use case.

Replies from: RomeoStevens
comment by RomeoStevens · 2012-02-24T08:56:39.145Z · LW(p) · GW(p)

This goes away when you start to realize what shit sources like wikipedia are. Go through the sources cited by wikipedia articles sometimes. Realize that everything presented to you as fact is generally a conclusion come to by people who are downright terrible at basic reasoning.

Practicing rephrasing everything as being written in E-Prime can be helpful, taking special note when normative and positive statements start becoming muddled.

Replies from: Dmytry
comment by Dmytry · 2012-02-24T11:22:12.075Z · LW(p) · GW(p)

I have to agree on the terribleness of Wikipedia. The approach in Wikipedia is this: if you can cite that 2 * 2 = 5, then you can write about it, but it is a mortal sin against Wikipedia to derive 2 * 2 = 4 from first principles. That's because Wikipedia is an encyclopedia, and Wikipedia's process only rearranges knowledge while introducing biases and errors; that's by design. The most common bias is to represent both sides equally when they shouldn't be; the second most common is that, when it comes to basic reasoning, the side with the most people editing Wikipedia wins while screaming "lalala, original research, can not hear you".

For 2 * 2 it does generally work and the rules get glossed over; for anything more complicated, well, Wikipedia equates any logic with any nonsense.

Then, a great many websites regurgitate stuff from Wikipedia, often making it very difficult or impossible to find any actual information.

That being said, Wikipedia is a pretty good online link directory. Just don't rely on the stuff written on Wikipedia, and don't rely on articles that repeated a 'citation needed' claim and were then added as the needed citation. And be aware that the selection of links can be very biased.

Replies from: army1987, bungula
comment by A1987dM (army1987) · 2012-02-24T17:46:02.694Z · LW(p) · GW(p)

The thing about citations and against derivations from first principles is deliberate and (so long as participation is open to everybody) I think removing it could do more harm than keeping it: it's hard to tell if a derivation from first principles in a field you're not familiar with is valid, so short of somehow magically increasing the number of (say) editors with a PhD in physics by a factor of 10, allowing OR would essentially give free rein to crackpots, since there wouldn't be that many people around who could find the flaws in their reasoning. Right now, they (at least in principle) would have to find peer-reviewed publications supporting their arguments, which is not as easy as posting some complicated derivation and hoping no-one finds the errors.

One big problem with Wikipedia (which I'm not sure could be fixed even in principle) is that sometimes you're not allowed to taboo words, because you're essentially doing lexicography. If the question is “Was Richard Feynman Jewish?”, “He had Jewish ancestry but he didn't practise Judaism” is not a good-enough answer if what you're deciding is whether or not the article about Feynman should be in the category for Jewish American physicists; if the question is “Was an infant who has since become a transsexual woman a boy?”, answering “it had masculine external genitalia but likely had feminine brain anatomy” is not good enough if what you're deciding is whether the article should say “She was born as a boy”; and so on and so forth. (There once was an argument about whether accelerometers measure inertial acceleration even though both parties agreed about what an accelerometer would read in all of the situations they could come up with, because they meant different things by inertial acceleration. What happened is that someone came up with other situations, such as magnetically levitating the accelerometer or placing it somewhere with non-negligible tidal forces, and the parties did disagree about what would happen. (My view is that then you're just misusing the accelerometer, and drawing any conclusions from such circumstances is as silly as saying that resistance is not what ohmmeters measure because if you put a battery across an ohmmeter, what it reads is not the internal resistance of the battery. But IIRC, rather than pointing that out I just walked away and left Wikipedia, even though I later came back with a different user name.))

Replies from: Dmytry
comment by Dmytry · 2012-02-24T18:11:15.038Z · LW(p) · GW(p)

Agreed that removing the condition against first principles would perhaps screw stuff up more.

But the attitude against original research is uncalled for. When there's someone who misunderstands the quoted articles, you can't just go ahead and refer to first principles, noooo, that's original research, and the attitude is: I'm not ashamed, I'm instead proud that I don't understand the topic we're talking about, proud that I don't (because I can't) do original research. Non-experts come up with all sorts of weird nonsense interpretations of what experts say, interpretations that experts would never even feel the need to publish anything to dispel. And then you can't argue with them rationally; they proudly reject any argumentation from first principles.

Replies from: army1987
comment by A1987dM (army1987) · 2012-02-24T19:03:04.399Z · LW(p) · GW(p)

Huh, yes. OR shouldn't be allowed into articles but it should be on talk pages. (Plus, some people use a ridiculously broad definition of OR. If I pointed out that the speed of light in m/s is exact and the number of metres in a yard is exact and proceeded to give the exact value of the speed of light in imperial units, and I called that original research of mine anywhere outside Wikipedia, I'd be (rightly) laughed away. Hell, even my pointing out that the word Jewish has several meanings was dismissed as OR, by someone who insisted that on Wikipedia the only possible meaning of Jewish is ‘someone who a reliable source refers to as Jewish’.)

Replies from: thomblake
comment by thomblake · 2012-02-24T19:20:13.584Z · LW(p) · GW(p)

If I pointed out that the speed of light in m/s is exact and the number of metres in a yard is exact and proceeded to give the exact value of the speed of light in imperial units

That's not reasonably called OR on Wikipedia either. See:
http://en.wikipedia.org/wiki/Wikipedia:No_original_research#Routine_calculations

someone who insisted that on Wikipedia the only possible meaning of Jewish is ‘someone who a reliable source refers to as Jewish’.

That actually sounds pretty reasonable to me. If you want to use a more nuanced concept to refer to someone, you could always find a reliable source who has used that nuanced concept to refer to the person. Or you could do the OR somewhere else and then someone else can use that to improve the article.

Replies from: army1987
comment by A1987dM (army1987) · 2012-02-24T20:06:16.124Z · LW(p) · GW(p)

If I pointed out that the speed of light in m/s is exact and the number of metres in a yard is exact and proceeded to give the exact value of the speed of light in imperial units

That's not reasonably called OR on Wikipedia either. See: http://en.wikipedia.org/wiki/Wikipedia:No_original_research#Routine_calculations

For some time, they claimed that converting exact values as rational numbers (as opposed to conversions with a finite number of sigfigs) is not a routine calculation. (To be honest, I'm not sure I remember what eventually happened. [goes to check] Oh, yeah. The footnote stayed because we did find a citation. Not that I'd normally consider the personal website of a cryptographer as a reliable source, but still.)

comment by bungula · 2012-02-24T11:54:51.634Z · LW(p) · GW(p)

Can you give specific examples of articles that are biased? Your comment and its parent made me curious about Wikipedia's quality :)

Replies from: IlyaShpitser, Dmytry
comment by IlyaShpitser · 2012-02-24T17:11:27.913Z · LW(p) · GW(p)

Well, here's a talk section of an article on a subject I know something about. This should give an idea of wikipedia's process and what kind of content results from it:

http://en.wikipedia.org/wiki/Talk:Bayesian_network

Here's another one:

http://en.wikipedia.org/wiki/Confounding

The very first sentence is wrong.

comment by Dmytry · 2012-02-24T13:10:52.757Z · LW(p) · GW(p)

Well, this article is pretty bad:

http://en.wikipedia.org/wiki/Radiation_hormesis

but it used to be even worse. First of all,

that low doses of ionizing radiation (within the region and just above natural background levels) are beneficial

is hardly a hypothesis. A proper hypothesis would be "[specific mechanism] activates in the presence of ionizing radiation and has such-and-such consequences". It would, incidentally, be easy to rule out if it was wrong, or show correct if it was correct, and it'd be interesting even if the effect was too weak to beat the direct damage from radiation. I barely managed to get their proposed cause (some untapped powers of self-repair mechanisms) into the definition of the hypothesis, because the group that's watching the article loved to just have a hypothesis that low doses of radiation are beneficial, whatever the mechanisms may be; they don't care, they just propose that the effect is there. They don't care to propose that there's some self-repair mechanism activated by low doses of radiation, either; they want to propose that the effect is so strong there's actual benefit.

Also, note the complete absence of references to the radiation-cure quacks of the early 20th century, which fall under the definition here. And good luck adding those, because there's some core group that's just removing 'em as "irrelevant". The link selection is honed to make it look like something new and advanced that could only have been thought of via some cool counter-intuitive reasoning, rather than the first thing we ever thought of when we discovered radiation: ooh, cool, some poison; we're not sure how it works, but it must be good in moderation. It then took about 60 years to finally discard this hypothesis and adopt LNT.

And of course, don't even dream of adding the usual evolutionary counter-argumentation to various allusions to some untapped powers of the human body.

Note: radioactive remedies such as radon springs, radon caves, healing stones, etc. are a big business.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-02-24T15:20:51.803Z · LW(p) · GW(p)

I doubt that selecting less than half a sentence from the lead paragraph of an article is a very careful approach to criticism.

This article actually looks pretty typical of Wikipedia articles on relatively obscure quackish biomedical ideas. It outlines what the "hypothesis" is, then makes clear that it is explicitly rejected by various people who have studied the matter. The subject doesn't have enough history or enough attention from skeptics to get the kind of treatment that, say, the article on homeopathy does.

There are two completely junk charts (no scale!) in the article. Yuck!

When read carefully, the article makes clear it's talking about an effect that even if it existed, would be very close to the noise threshold. It requires some statistical awareness — much more than the typical academic has, to say nothing of Wikipedians — to recognize that this is the same thing as saying "there's no reason to suspect an effect here."

The primary bias problem here isn't the article; it's that the subject matter is made of bias, at least as far as I can tell. There are only so many times an article can say "there are a few noisy experiments, but nobody who actually counts on radiation safety thinks this exists."

That said, there's one thing I was really surprised to find: the talk page doesn't seem to be full of supporters saying that their hypothesis is being persecuted by the mainstream and skeptics calling them a bunch of names. And that suggests to me that improvement shouldn't be too hard.

Replies from: Dustin, Dmytry
comment by Dustin · 2012-02-29T05:33:24.468Z · LW(p) · GW(p)

When read carefully, the article makes clear it's talking about an effect that even if it existed, would be very close to the noise threshold. It requires some statistical awareness — much more than the typical academic has, to say nothing of Wikipedians — to recognize that this is the same thing as saying "there's no reason to suspect an effect here."

Is this really true? I'm not a part of academia in any sort of way, nor do I have any sort of math or statistical training beyond what's referred to as College Algebra, and I recognized immediately what the effect being close to the noise threshold meant.

I'm just wondering if I just have a better intuitive grasp of statistics than your typical academic (and what exactly you mean by academic...all teachers? professors? english professors? stats majors?).

Of course, I read LessWrong and understand Bayes because of it, so maybe that's all it takes...

Replies from: Barry_Cotter
comment by Barry_Cotter · 2012-02-29T21:05:02.334Z · LW(p) · GW(p)

Is this really true?

Yes. Most of the academy doesn't use math or have any feel for it. Being forced to take algebra when you truly do not give a damn about it results in people learning enough to pass the test and then forgetting it forever.

I'm just wondering if I just have a better intuitive grasp of statistics than your typical academic (and what exactly you mean by academic...all teachers? professors? english professors? stats majors?).

Academics are people who have jobs teaching/lecturing in tertiary education. In a US context the lowest you can go and still be an academic is teaching at a community college. Alternatively an academic is part of the community of scholars, people who actually care about knowledge as such rather than as a means to an end. Most of these people would not know statistics if it bit them on the ass. Remember, the world is insane.

comment by Dmytry · 2012-02-24T17:39:38.470Z · LW(p) · GW(p)

Well yea, that's a very good way to describe it - made of bias. We always believed that if something is bad in excess it's good in moderation, and then proceeded to rationalize.

The topic is actually not very obscure. It pops up in any discussion of Chernobyl or Fukushima or cold war nuclear testing or radon testing of households or the like; there's that 'scepticism' towards choosing the linear no-threshold model as a prior.

The seriously bad bit is that it is entirely missing the historical reference. When I am looking up an article on some pseudoscience, I want to see the history of said branch of pseudoscience. It's easier to reject something like this when you know that it is the first hypothesis we made about the biological effects of radiation (and the first hypothesis we would have made about new poisons in general, up until the 20th century).

With regards to the sanity of the talk page, that's what's most creepy. They get rid of the historical background on this thing, calmly and purposefully (I don't know if that's still the case; I'm going to try adding a link to quack radiation cures again). There are honest pseudo-scientists who believe their stuff, and they put all the historical context up themselves. And there are the cases where you've got some sane, rational people with an agenda whose behaviour is fairly consistent with knowing full well that it is a fraud.

Note: LNT makes sense as a prior based on the knowledge that radiation near background level is a very minor contributor to the number of mutations, and if you look at the big picture - the number of mutations - for doses up to many times background, you're still varying it by a microscopic amount around some arbitrary point, and you absolutely should choose linear behaviour as a prior. Still, there are 'sceptics' who want to choose zero effect at low doses as a prior, because the effects were never shown and Occam's razor blah blah blah.

Edit: ah, by the way, I wrote some of that description outlining the hypothesis, making it clearer that they start from beneficial effects and then hypothesise some defence mechanisms that are strong enough to cancel the detrimental effect. That's such completely backwards reasoning.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-02-24T18:07:29.768Z · LW(p) · GW(p)

Overall, that sounds more like a bunch of folks who have heard of this cool, weird, contrarian idea and are excited by it, rather than people who are trying to perpetrate a fraud for personal benefit. Notably, there isn't any mention in the article of any of the quack treatments you mention above; there are no claims of persecution or conspiracy; there's not even much in the way of anti-epistemology.

Replies from: Dmytry
comment by Dmytry · 2012-02-24T18:21:44.947Z · LW(p) · GW(p)

It's a pseudoscience article from which they remove the clues by which one could recognize pseudoscience; that's what's bad.

Also, it should link to the past quack treatments of the 20th century. I'm going to try adding those again when I have time. It's way less cool and contrarian when you learn that it was popular nonsense when radiation was first discovered.

Replies from: thomblake
comment by thomblake · 2012-02-24T19:14:32.324Z · LW(p) · GW(p)

I'm going to try again adding those when I have time.

If you added those before and they were reverted, then you should be discussing it on Talk and going for consensus.

Replies from: Dmytry
comment by Dmytry · 2012-02-24T20:34:24.940Z · LW(p) · GW(p)

It was ages ago (>5 years, I think); I don't even quite remember how it all went.

What's irritating about Wikipedia is that the rule against original research in the articles spills over and becomes an attitude against any argumentation not based on appeal to authority. So you have the folks there: they are curious about this hormesis concept; maybe they are actually just curious, not proponents or an astroturf campaign. But they are not interested in trying to listen to an argument and think for themselves about whether it is correct. I don't know, maybe it's an attempt to preserve their own neutrality on the issue. In any case it is incredibly irritating. It's half-curiosity.

comment by Dmytry · 2012-02-24T00:57:18.592Z · LW(p) · GW(p)

"A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"

I had the following (in rapid succession): 10 cents; whoops, it adds up to 120 cents; aha, 5 cents; adds up to 110; done.

Doesn't really matter what stupid heuristic you try if you verify the result. I can of course do: let a + b = 1.1, a = b + 1, b + 1 + b = 1.1, 2b = 0.1, b = 0.05, but it takes a lot longer to write, and to think, and note the absence of a verification step here.

The "No! Algebra" is sure fire way to do things slower. Verification and double checking is the key imo. Algebra is for unwieldy problems where you can't test guesses quickly, failed to guess, have to use pencil and paper, etc. When you rely on short term memory you really could be best off trying to intuitively get the answer, then checking it, then rewarding yourself when correct (if verification is possible)

  • The "whoops" is more like some parallel pondering of just how stupid I must be.
Replies from: Zvi
comment by Zvi · 2012-02-25T15:11:42.247Z · LW(p) · GW(p)

Instinctively my thought process goes: The dollar is the extra, then the ten cents is split, $0.05, done (plus or minus a double check). I can sense the $0.10 answer trying to be suggested instantly in the background, but it has a fraction of a second before it gets cut off, presumably because this is a kick type I've done 10,000 times.

Formal algebra is the very slow (in relative terms) but reliable answer.

Replies from: Dmytry
comment by Dmytry · 2012-02-25T16:20:19.517Z · LW(p) · GW(p)

Well yea, the processes at that timescale are not even exactly serial. When the 10 cents appears, I just derail into pondering how stupid I must be to have 10 cents even pop up consciously, while the 5 cents pops up.

When we were taught math at school we often had to do a verification step. Then I did contests a fair bit, and you care to check yourself there: you solve each problem and check the answer; then, at the end, if you've solved everything, you go over them again and double-check, triple-check. We had a few hard problems on tests instead of many easy ones. You often had to think: how do I check this?

It seems not everyone is taught this way; some people have self-esteem-boosting cultural stuff in mind, and self-doubt can be seen as the worst thing ever, culturally. In US movies there's always someone who's like, "I can't do it, I can't do it," then the hero talks them into jumping over the gap anyway, and they do it, which is just silly.

For another example, say I face something like the Monty Hall problem. I think: how can I solve it so that I can be sure of the answer? Well, the foolproof way is to consider all the possibilities, which I can do rather rapidly by visualizing it. I don't need to think in terms of probabilities. There's another important thing here: reductionism. One needs to know which things are derived, and that derived things aren't 'better' or 'right'. The probabilities are a substitute for evaluating a potentially infinite number of possible worlds and counting them. If you ever have a conflict between some first-principles reasoning and some advanced high-level reasoning, the advanced reasoning is not the one that's working correctly; probably you're misapplying it.
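Concretely, a minimal brute-force sketch of that enumeration for the Monty Hall case (illustrative Python; the variable names are just for exposition):

    # List every equally likely (car location, first pick) pair and count
    # how often "stay" vs. "switch" wins. When the pick is the car, which
    # of the two remaining doors the host opens doesn't change the tallies.
    from itertools import product

    doors = [0, 1, 2]
    stay_wins = switch_wins = 0
    for car, pick in product(doors, doors):
        opened = next(d for d in doors if d != pick and d != car)       # host's door
        switched = next(d for d in doors if d != pick and d != opened)  # the other closed door
        stay_wins += (pick == car)
        switch_wins += (switched == car)

    print(stay_wins, switch_wins)  # 3 vs. 6 out of 9 cases: 1/3 vs. 2/3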

I recall many arguments over physics on some forum with some guy who just didn't understand reductionism. His barrels would float due to Archimedes' law, not due to pressure difference; then it gets confusing when you have a barrel falling down into water (a dynamical situation), and he would try to use the highest-level concepts he could think of. Or when you have a submarine stuck to the seafloor. Or plugging your sink with a piece of styrofoam that you think would float, by Archimedes' law, except it won't, because there's no pressure being applied to its bottom. The people who don't get reductionism have the pressure-difference first-principles reasoning saying the styrofoam won't float, and Archimedes' law that they misapply saying it will, and Archimedes' law sounds advanced so they think it's the one that's right.

comment by nickpelling · 2012-02-23T01:08:48.273Z · LW(p) · GW(p)

Having worked on the Voynich Manuscript (which you namecheck above) for over a decade now, I'd say that uncertainty isn't just a feeling: rather, it's the default (and indeed natural) state of knowledge, whereas certainty is normally a sign that we've somehow failed to grasp and appreciate the limits and nature of our knowledge.

Until you can eradicate the itch that drives you to want to make knowledge final, you can never be properly curious. Real knowledge doesn't do final or the last words on a subject: it's conditional, partial, constrained, and heuristic. I contend that you should train your ape-brain to stay permanently curious: almost all certain knowledge is either fake or tautologous.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-02-22T19:56:49.915Z · LW(p) · GW(p)

Exercise 2.2: Make plans for different worlds... Maybe you live in a world where you'd improve your cognitive function by taking nootropics, or maybe you live in a world where the nootropics would harm you.

On the bright side, this is pretty much the thought process I go through whenever I don't know the right answer to something. On the other hand ("on the dark side"?) I think my automatic instinct is "there's no scientific consensus on this that I've read about in my textbooks...therefore this is a Permanent Blank in my map and I just have to live with it." Even if I'm not up to going out and doing the original research to answer Question X, I suspect that I would often be wrong about there being no already-investigated answers. Looking a given topic up and reading about all the conflicting theories, rather than a scientific consensus, still provides more information than not reading up on it at all.

And again, thank you for the excellent article! I really like this one.

comment by CarlShulman · 2012-03-04T17:46:07.160Z · LW(p) · GW(p)

Once, I explained the Cognitive Reflection Test to Riley Crane by saying it was made of questions that tempt your intuitions to quickly give a wrong answer. For example:

This could use spoiler tags, or ideally some substitute: it's useful for people to have a chance to be administered the CRT unawares (lest they imagine by hindsight bias that they would not have been misled, or others lose the chance to test them).

comment by HungryTurtle · 2012-02-29T16:05:44.288Z · LW(p) · GW(p)

In feeling that you do not know the answer, Luke suggests to "Think of things you once believed but were wrong about." Why not take it a step further and say

1.3 When thinking about a time when you were wrong, think about how right being wrong feels, up until the moment you realize you are wrong.

In reflecting on times when I have been wrong, what I find most disturbing is not what I was wrong about, but the degree to which being wrong is cognitively similar to being right. In college, I went to an Elizabeth Loftus lecture where she shockingly announced that the degree of confidence you have in a memory has no bearing on its accuracy. The more I think on this idea the more I find it to be true. Being wrong feels like being right. If that is the case, how can I ever be certain of any ideas? Luke suggests tools like a cognitive reflection test to work towards uncovering when you are wrong. However, is this really a method for uncovering cognitive blind spots, or is it the rigorous application of an existing paradigm of problem solving? I would argue it is the latter. It is convenient that the example given is a math problem, but what happens when you need to cognitively reflect on false intuition in another realm (you mentioned sociology)? Thinking "No! Algebra" might help some problems, but not all. How do you justify your belief in the application of algebra to a situation? How do you discover new paradigms of problem solving? Eliezer states

When you're really curious, you'll gravitate to inquiries that seem most promising of producing shifts in belief, or inquiries that are least like the ones you've tried before.

I agree with this statement. Is it illogical to think the inquiries that are least like the ones I have tried before are the ones I have such low confidence in that I have actually dismissed them? Or, in other words, the ideas I actively disbelieve. I argue that a truly curious person would actively work to see the truth in things he or she knows to be wrong - an epistemological take on "Keep your friends close but your enemies closer." If you think theism is absurd, perhaps you should be more curious about it. I am not advocating complete relativism or anything close to that. I do think there is right and wrong. But I think looking for the right in what you think is wrong will better mark the path of moderation.
What is right is moderation.

comment by Will_Newsome · 2012-02-22T19:37:52.669Z · LW(p) · GW(p)

Curiosity is one possible motivation that forces you to actually look at evidence. Fear is more reliable and can be used when curiosity is hard to manufacture.

Replies from: wedrifid, Incorrect, lukeprog, awe-lotta
comment by wedrifid · 2012-02-23T13:47:34.437Z · LW(p) · GW(p)

Curiosity is one possible motivation that forces you to actually look at evidence. Fear is more reliable and can be used when curiosity is hard to manufacture.

Fear can be powerful, but it is far from reliable and usually not best used for ongoing motivation of any kind.

Replies from: army1987
comment by A1987dM (army1987) · 2012-02-23T14:47:18.679Z · LW(p) · GW(p)

It depends on the kind of fear. The fear of going off my beeminder roads is good enough to motivate me to stay on them. YMMV.

Replies from: wedrifid
comment by wedrifid · 2012-02-23T15:00:30.426Z · LW(p) · GW(p)

It depends on the kind of fear. The fear of going off my beeminder is good enough to motivate me to stay on them. YMMV.

It quite possibly would (vary). I have developed something of a "@#%@# you!" attitude to threats that are ongoing and try to reserve fear as an exception-oriented motivation device.

comment by Incorrect · 2012-02-22T21:17:42.255Z · LW(p) · GW(p)

I don't think I could really feel fear about something in far mode thinking.

comment by lukeprog · 2012-02-22T20:11:12.509Z · LW(p) · GW(p)

I worry that fear may paralyze. Curiosity seems more likely to spring someone into action. These effects probably vary between persons.

Replies from: steven0461, Will_Newsome
comment by steven0461 · 2012-02-23T00:55:36.557Z · LW(p) · GW(p)

If fear paralyzes, maybe it's best used in bursts at times when you don't immediately need anything done and can spend some time on reevaluating basic assumptions. I wonder if there should be a genre of fiction that's analogous to horror except aimed at promoting epistemic paranoia. I've heard the RPG Mage: the Ascension cited in that context. I guess there's also movies like the Matrix series, the Truman Show, Inception. One could have an epistemic counterpart to Halloween.

Replies from: Will_Newsome, Alicorn
comment by Will_Newsome · 2012-02-23T04:43:14.614Z · LW(p) · GW(p)

I just watched The Truman Show a few days ago. I interpreted it as a story about a schizophrenic who keeps getting crazier, eventually experiencing a full out break and dying of exposure. The scenes with the production crew and audience are actually from the perspective of the schizophrenic's imagination as he tries to rationalize why so many apparently weird things keep happening. The scenes with Truman in them are Truman's retrospective exaggerations and distortions of events that were in reality relatively innocuous. All this allows you to see how real some schizophrenics think their delusions are.

Replies from: army1987, spiceupthemind
comment by A1987dM (army1987) · 2012-02-23T10:46:52.018Z · LW(p) · GW(p)

I had never heard anybody interpreting it that way before.

comment by spiceupthemind · 2012-02-26T04:14:31.682Z · LW(p) · GW(p)

I've never heard that one before, but there is a psychiatric illness in which people believe themselves to be watched at all times and that the world around them was created specifically for them, et cetera. It's called Truman Syndrome.

All I know about schizophrenia I know from the copious number of psychiatric volumes and memoirs I've read. I have an older cousin with paranoid schizophrenia, but I don't even remember the last time I spoke to him.

comment by Alicorn · 2012-02-23T01:14:26.029Z · LW(p) · GW(p)

an epistemic counterpart to Halloween.

I'm now imagining children wearing signs with cognitive biases written on them running around door to door, and people answering the door, uttering brief arguments, and rewarding each kid with paperback science fiction if the kid can correctly identify the fallacy.

Replies from: steven0461, J_Taylor
comment by steven0461 · 2012-02-23T01:26:15.793Z · LW(p) · GW(p)

What I had in mind was replacing rituals involving the fear of being hurt with rituals involving the fear of being mistaken. So in a more direct analogy, kids would go around with signs saying "you have devoted your whole existence to a lie", and threaten (emptily) to go into details unless they were given candy.

Replies from: fiddlemath, Alicorn, pedanterrific
comment by fiddlemath · 2012-02-23T01:58:22.475Z · LW(p) · GW(p)

Upvoted for making me laugh until it hurt.

You could probably get sufficiently-twisted kids to do this on the usual Halloween. Dress them up as professors of philosophy or something; it'd be far scarier than zombie costumes. (This would actually be fantastic.)

Alternately, dress up as a "philosopher" (Large fake beard and pipe, maybe?), set up something like a fake retiring room on your front porch, tell small children that their daily lives are based on subtly but critically broken premises, and give them candy. (Don't actually do this, unless your neighbors love or hate you unconditionally. Or you're moving away soon.)

Replies from: pedanterrific, gwern, spiceupthemind
comment by pedanterrific · 2012-02-23T02:03:51.085Z · LW(p) · GW(p)

You could probably get sufficiently-twisted kids to do this on the usual Halloween. Dress them up as professors of philosophy or something; it'd be far scarier than zombie costumes. (This would actually be fantastic.)

Alternately, dress up as a zombie philosopher and shamble around moaning "quaaaalia" instead of "braaaains".

Replies from: radical_negative_one
comment by radical_negative_one · 2012-02-23T04:24:18.633Z · LW(p) · GW(p)

Last Halloween I dressed as a P-zombie. I explained to anybody who would listen that I had the same physical composition as a conscious human being, but was not in fact conscious. I'm not sure that any of them were convinced that I really was in costume.

Replies from: arundelo
comment by arundelo · 2012-02-23T04:48:36.416Z · LW(p) · GW(p)

For this to be really convincing and spoooky, you could stay in character:

Halloween party attendant: Hi radical_negative_one, what are you dressed as?
confederate: radical_negative_one is a p-zombie, who acts just like a real person but is not actually conscious!
radical_negative_one: That's not true, I am conscious! I have qualia and an inner life and everything!

Replies from: Richard_Kennaway, pedanterrific
comment by Richard_Kennaway · 2012-02-23T13:04:27.649Z · LW(p) · GW(p)

radical_negative_one: (To confederate:) No, you're the p-zombie, not me! (To Halloween party attendant:) They're getting everywhere, you know. They look and act just like you and me, physically you can't tell, but they have no soul! They're just dead things!! They sound like us, but nothing they say means anything, it's just noises coming out of a machine!!! Your best friend could be a p-zombie!!!! All your friends could be p-zombies!!!!!

confederate: It's all true! And he's one of them! Say, how do I know you're not a zombie?

comment by pedanterrific · 2012-02-23T05:16:15.082Z · LW(p) · GW(p)

confederate: No, radical_negative_one. You are the demons

And then radical_negative_one was a zombie.

comment by gwern · 2012-02-23T02:26:21.086Z · LW(p) · GW(p)

Large fake beard and pipe, maybe?

And tweed jacket with leather patches on the elbows, don't forget.

Replies from: fiddlemath
comment by fiddlemath · 2012-02-23T02:30:35.770Z · LW(p) · GW(p)

Ah, yes. That would satisfy nicely.

comment by spiceupthemind · 2012-02-26T04:03:17.250Z · LW(p) · GW(p)

Oh, great. Now I have half a mind to go out this Halloween, for the first time since junior high school, dressed as a philosophy professor to scare middle-aged housewives with rationalist arguments.

And I would carry out my threat of giving details as to how they have devoted their whole existences to a lie. I do that a lot, actually, just not in a costume and generally not by coming up to strangers' houses for candy.

comment by Alicorn · 2012-02-23T02:25:09.312Z · LW(p) · GW(p)

kids would go around with signs saying "you have devoted your whole existence to a lie", and threaten (emptily) to go into details unless they were given candy.

But that's the fear of learning that one is mistaken, not the fear of being mistaken...

Replies from: steven0461
comment by steven0461 · 2012-02-23T04:00:09.061Z · LW(p) · GW(p)

You're right, of course. I don't think a fully direct analogy is possible here. You can't really threaten to make someone have been wrong.

Replies from: Nisan, shokwave
comment by Nisan · 2012-02-24T03:28:10.909Z · LW(p) · GW(p)

"You always thought I wasn't the kind of person who would TP your house on Halloween, but if you don't give me candy I'll make you have been wrong all along!"

Replies from: Alicorn
comment by Alicorn · 2012-02-24T04:20:45.647Z · LW(p) · GW(p)

"Hah, got you - I actually thought all along that you were the kind of person who would TP my house if and only if denied candy on Errorwe'en!"

"Okay, and given your beliefs, are you gonna give me candy?"

"...Have a Snickers."

comment by shokwave · 2012-02-24T03:14:33.541Z · LW(p) · GW(p)

I can easily imagine a sci-fi horror story in which someone is powerful enough to do that. You'd have to demonstrate it first, of course, and the story would have to take some time to carefully explore what changes when someone is made to have been wrong, but it seems plausibly doable.

comment by pedanterrific · 2012-02-23T01:58:23.702Z · LW(p) · GW(p)

Emptily? Just how sure of that are you?

(I like skittles.)

Replies from: spiceupthemind
comment by spiceupthemind · 2012-02-26T04:09:53.376Z · LW(p) · GW(p)

Yes! Give me a Three Musketeers bar or I shall prove that you have devoted your entire existence to a lie using only logic and rhetoric.

comment by J_Taylor · 2012-02-23T01:36:32.475Z · LW(p) · GW(p)

What we need is a rationalist hell-house.

http://en.wikipedia.org/wiki/Hell_house

Replies from: None
comment by [deleted] · 2012-02-23T03:25:19.628Z · LW(p) · GW(p)

.

comment by Will_Newsome · 2012-02-22T20:40:14.156Z · LW(p) · GW(p)

Looking back it seems I use curiosity more for hours or days-long knowledge-gaining quests, e.g. immersing myself in a new academic field, whereas I use fear more when philosophizing on my own, especially about AI/FAI. Introspectively it seems that fear is more suited to examining my own thoughts or thoughts I identify with whereas curiosity is more suited to examining ideas that I don't already identify with or things in my environment. I suspect this is because people generally overestimate the worth of their own ideas while underestimating the worth of others' -- negative motivations reliably act as critical inductive biases to counterbalance systematic overconfidence in oneself, whereas positive motivations reliably act as charitable inductive biases to counterbalance systematic underconfidence in others. As you say, it's probable that others would have different cognitive quirks to balance and counterbalance.

comment by awe lotta (awe-lotta) · 2020-12-19T16:58:02.504Z · LW(p) · GW(p)

Fear of bad consequences seems to be part of (how this post defines) curiosity. i.e. Exercise 2.1: Visualize the consequences of being wrong.

comment by Armok_GoB · 2012-02-22T22:15:08.917Z · LW(p) · GW(p)

I consistently fail several times over at this. I always feel I DO know everything worth knowing, and while that's obviously wrong, I can't come up with any salient counterexamples. Probably related to memory problems I have: I don't seem able to come up with examples or counterexamples of anything, ever.

And when I do consider multiple possibilities, they never seem to matter for what actions I should take, which drains any motivation to find out the answer if it takes more than 30 seconds of googling or I happen to not be at my computer when the question occurs.

All the information I take in seems to be about new ideas, not evidence for or against old ones.

All this is obviously absurd and I'm a bad rationalist and deserve extremely low status for this heinous lack of virtue, being but a burden to the tribe! Woe is me!

Help?

Replies from: lukeprog
comment by lukeprog · 2012-02-23T01:12:19.584Z · LW(p) · GW(p)

Good. Let's see if we can make progress.

  1. New habit: Every time you're wrong, write down what you were wrong about.
  2. Play 'the calibration game': Use Wits & Wagers cards and give your confidence intervals. You'll probably find that 40% of the time, the correct answer was outside your 90% confidence interval. Write down all those failures (a simple tally sketch follows this list).
  3. If the different hypotheses don't matter for which actions you take, you're either bad at realizing the decision-theoretic implications of various hypotheses, or you're bad at spending your time thinking about things that matter. Which do you think it is?
  4. Rarely is new information not evidence for or against old ideas. Maybe you need more practice in model-building? This is a separate post I'd like to write at some time; I'm not sure what useful thing I can say about it now.
  5. Re: your "heinous lack of virtue." Reward yourself for effort, not for results. You have more control over the former.
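One simple way to keep score (an illustrative Python sketch with hypothetical names, not a prescribed tool) is to record, for every question, whether the correct answer fell inside your stated 90% interval, then look at the hit rate over all questions, hits and misses alike:

    # Tally calibration-game results: each entry is True if the correct
    # answer fell inside the 90% confidence interval you stated.
    def calibration_report(inside_interval):
        hits = sum(inside_interval)
        total = len(inside_interval)
        rate = hits / total
        print(f"{hits}/{total} answers inside your 90% intervals ({rate:.0%}).")
        if rate < 0.9:
            print("Overconfident: widen your intervals.")

    # Hypothetical session: 12 of 20 answers landed inside the stated intervals.
    calibration_report([True] * 12 + [False] * 8)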
Replies from: army1987, Dmytry, Armok_GoB
comment by A1987dM (army1987) · 2012-02-23T14:52:21.267Z · LW(p) · GW(p)

Awesome. I'm going to keep that in mind. I only have a quibble about

Reward yourself for effort, not for results.

That could lead me to try, but nowhere near as hard as I can, and to make excuses when I fail.

Replies from: Mass_Driver, Zvi
comment by Mass_Driver · 2012-02-23T15:15:45.750Z · LW(p) · GW(p)

To clarify: reward yourself for taking new and improved actions, or for taking more of the right kind of actions, even if these actions don't immediately cause the desired results. Once your new level becomes a habit, stop rewarding yourself and reward the next level up. Rinse and repeat until you're close enough to a goal that it makes sense to reward yourself directly for the results you actually want.

Replies from: Zvi
comment by Zvi · 2012-02-25T15:25:41.799Z · LW(p) · GW(p)

I continue to celebrate a job well done even if it's force of habit, if only to give myself better incentives to form more good habits.

comment by Zvi · 2012-02-25T15:24:10.706Z · LW(p) · GW(p)

There's signaling effort (especially to yourself), and then there's effort. You want to reward effort but not signaling effort.

Often one will make a cursory attempt at something, but with the goal of signaling to themselves or others that they put in effort or tried rather than doing what was most likely to accomplish the goal. This leads to statements like "I tried to get there on time" or "I did everything I was supposed to do." That's excuse making. Don't reward that.

Instead, reward yourself to the extent that you did that which you had reason to believe was most likely to work, including doing your best to figure that out, even if it didn't succeed. Do the opposite if you didn't make the best decisions and put forth your best efforts, even if you do succeed.

The danger is that effort is much easier to self-deceive about than results - and the people who need this the most will often have the most trouble with that. Not enough attention is paid to this problem, and it may well deserve a top level post.

comment by Dmytry · 2012-02-25T16:36:28.377Z · LW(p) · GW(p)

You need both the instances where you are right and the instances where you are wrong to do correct stats... otherwise I can have 90% confidence, be wrong one time out of 10, and, 100% of the times that I am wrong, have the answer outside the 90% confidence interval.

comment by Armok_GoB · 2012-02-23T13:04:35.384Z · LW(p) · GW(p)
  1. I so far have a 100% failure rate in establishing habits that involve writing things down or in other ways externalize memory.
  2. I don't have any such cards. I also doubt playing a game once for 5 minutes will help much, and akrasia and stress will prevent any more than that.
  3. Of those, absolutely the latter, but neither seems plausible.
  4. I have zero control over both, because akrasia.

... my "not true rejection!" alarm is going of but I can't seem to find anything to do with that information either.

Replies from: lukeprog, hamnox, NancyLebovitz
comment by lukeprog · 2012-02-23T20:31:19.197Z · LW(p) · GW(p)

Yeah, sounds like you have a general motivation problem that needs fixing before you can get better at a lot of other things.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-02-23T22:02:45.320Z · LW(p) · GW(p)

Not quite, but it seems unlikely this conversation will get further without getting into mental problems I really don't want to discuss with someone whose opinion I care about, like you.

Replies from: Zvi, lukeprog
comment by Zvi · 2012-02-25T15:28:01.059Z · LW(p) · GW(p)

I find your honesty in these posts inspiring. I wish more people had such courage.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-02-25T20:21:38.503Z · LW(p) · GW(p)

Ah, yea. Backing out of a conversation and retracting all my posts as soon as it gets uncomfortable sure is courageous!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-02-25T20:59:14.609Z · LW(p) · GW(p)

It still took a good bit of nerve to make those posts.

comment by lukeprog · 2012-02-23T22:26:45.545Z · LW(p) · GW(p)

Sure.

comment by hamnox · 2012-02-23T18:04:38.534Z · LW(p) · GW(p)
  1. I so far have a 100% failure rate in establishing habits that involve writing things down or in other ways externalize memory.

This is true for me as well. Which is why I try to rely on programs that prompt me to reply at random intervals through computer popups or sms, rather than habit.

I highly doubt you have zero control over effort. Akrasia limits your ability to act on willpower; it doesn't negate willpower entirely. Reward yourself for those 30-second googling bursts if nothing else.

I'm serious, have a jar of mini chocolate chips by your desk and pop one in your mouth every time you google an interesting question on scholar or wikipedia.

Replies from: None, rhollerith_dot_com, Dmytry
comment by [deleted] · 2012-02-23T20:20:53.805Z · LW(p) · GW(p)

have a jar of mini chocolate chips by your desk and pop one in your mouth every time you google an interesting question on scholar or wikipedia.

Is there any evidence this works? 1) Does the brain treat these discretionary pleasures as reinforcement? 2) If it does, do attribution effects undermine the efficacy? Research on attribution effects shows that extrinsic rewards sometimes undermine intrinsic interest, i.e., curiosity. "Negative effects are found on high-interest tasks when the rewards are tangible, expected (offered beforehand), and loosely tied to level of performance."

comment by RHollerith (rhollerith_dot_com) · 2012-02-23T19:39:24.363Z · LW(p) · GW(p)

have a jar of mini chocolate chips by your desk and pop one in your mouth every time you google an interesting question on scholar or wikipedia.

Disagree. The target of your advice has reported serious health problems (and his akrasia would probably be a lot easier to overcome if it weren't for the health problems, according to my models (which are based only on what he has posted to LW and on information not specific to him)) so I would advise him not to choose what to eat for its reward value.

To help him decide what weight to give my advice, I will add that I have had serious health problems for the last 40 years.

Moreover, I have serious doubts about the usefulness of setting up blatantly artificial (i.e., self-imposed for the purpose of conditioning oneself) cause-and-effect relationships between desired changes in behavior and rewards, even when the rewards have no expected negative effect on health.

Replies from: hamnox
comment by hamnox · 2012-02-24T21:37:46.334Z · LW(p) · GW(p)

You're right. This was very poorly considered advice. I'm ashamed to admit I kind of recognized that as I was writing it, but posted it anyways for reasonable-sounding justifications that now suspiciously elude memory.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2012-02-26T03:34:13.763Z · LW(p) · GW(p)

I kind of recognized that as I was writing it, but posted it anyways

I know the feeling (from times I have given advice).

comment by Dmytry · 2012-02-25T16:37:51.438Z · LW(p) · GW(p)

I'm serious, have a jar of mini chocolate chips by your desk and pop one in your mouth every time you google an interesting question on scholar or wikipedia.

Maaaan, I have to condition myself NOT to google interesting questions, or else I can't get any work done for my job. But I see what you mean; that may work for conditioning oneself to work.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2012-02-25T19:33:35.756Z · LW(p) · GW(p)

(A caution: I've found that naive implementations of the "reward oneself with candy" method for overcoming akrasia don't work because it becomes too tempting to just eat the candy for no reason. It has been suggested to me that it might help to explicitly write down beforehand exactly what actions justify a reward, but I haven't gotten around to testing this yet. Individual results may vary; further research is needed.)

comment by NancyLebovitz · 2012-02-23T15:45:41.431Z · LW(p) · GW(p)

Post some hypotheses and/or predictions at Less Wrong. There's at least a reasonable chance that people will tell you if you're mistaken.

comment by fiddlemath · 2012-02-23T01:59:43.914Z · LW(p) · GW(p)

I approve strongly! Publicly posted exercises may yield practice, practice yields habit, and habit yields changed behavior. Developing deeper, more-focused curiosity would be a grand step towards becoming more awesome. But!

(Summary: It is important to practice this skill at appropriate times, like when it is useful and feasible to work on answering the given question, and not just at random, or whenever it's convenient to schedule the practice. I plan to attach a reminder to my research to-do list.)

Alright, says I, this exercise seems plausible enough. So I'll start practicing this exercise and see how well it works. But how ought I do this for regular practice?

At first, I thought about walking through this list as part of my morning routine. But how would I actually do that? The exercise needs an unanswered question, and I don't generally have a fresh, new, important question every morning. So:

  • If I pick an arbitrary question I don't know the answer to, I should be able to import the feeling of uncertainty, but clear evaluation of the consequences of being wrong will be demotivating, and the Litany of Gendlin will be silly.
  • If I pick an important question I don't know the answer to, then I suspect that I can use this exercise to get myself quite motivated to better answer it. In the context of my morning routine, though, this would be terrible. My morning routine is optimized for satisfying basic needs, waking up quickly, and getting to my office at a reasonable hour. This form of the exercise, if effective, would actually conflict sharply with my long-term goals.

In fact, I suspect that intense curiosity about any random topic, at any random time, is actually a bad idea. For instance, suppose I'm on a several-hour drive, by myself, and by random musing I become intensely curious about how to resolve (say) anthropic probabilities. I don't know much about the real arguments in anthropics, so I'm just going to be frustrated at my situation. Eventually, I'll think about something else, and the curiosity will pass before I can use it. (And now I've associated that particular curiosity with frustration, and my inability to satisfy it, and perhaps next time I won't become curious so readily.)

What we want, then, is the ability to get curious about a question because we recognize that we'd like to answer it. When we have a verbal justification to resolve some question correctly, we want to invoke the appropriate emotions as motivation to do so. We want to practice this invocation. I don't see how to do this as part of my morning routine in a way that admits convenient, regular practice.

So, I now plan to attach the reminder near where I manage the relevant to-do list. Any time I start a block of time aimed (even indirectly) at answering some question, I'll run through this exercise for that question. I hope to develop this habit so that when I'm reading for pleasure, or satisfying my own interest, or even doing research for writing blog posts, or just discussing some question, I'll first run this exercise -- or, eventually, just be curious.

Does anyone else plan to actually carry out this exercise? How will you hold yourself to regularity?

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-19T06:46:00.291Z · LW(p) · GW(p)

Another idea from Anna Salamon is just to brainstorm a ton of questions on the topic you want to get curious about for a predetermined period of N minutes. Very limited data suggests this method works significantly better for me.

comment by faul_sname · 2012-02-28T00:53:21.648Z · LW(p) · GW(p)

Am I the only one who searched the phrase "I see you start to answer a question, and then you stop, and I see you get curious." to see who it referred to?

comment by Jonathan_Graehl · 2012-02-23T00:33:03.246Z · LW(p) · GW(p)

Closing my eyes gives me only the feeling of having defensively headed a long ball in soccer a few hours ago. Sometimes I try to think and nothing seems to happen :)

VoI (value of information) shouldn't be abbreviated (even with a hyperlink).

Thinking about how I've been mistaken in the past feels pretty bad for me - akin to true embarrassment. But I suppose it's almost the only reason I'm ever cautiously uncertain, and that seems sad.

I really value your suggestion to purposefully cultivate delight-based exploration, instead of merely looking to minimize regret (even fairly assigned regret at coming up short of boundedly-optimal-rational, without confusing outcome for expected outcome in hindsight).

Replies from: lukeprog
comment by lukeprog · 2012-02-23T01:14:40.890Z · LW(p) · GW(p)

I really value your suggestion to purposefully cultivate delight-based exploration, instead of merely looking to minimize regret

Maybe I should have emphasized this more.

comment by lukeprog · 2012-11-15T05:36:07.521Z · LW(p) · GW(p)

Setting step one as "Feel that you don't already know the answer" fits with Loewenstein (1994)'s "gap theory of curiosity", summarized by Cooney (2010):

[Loewenstein's] theory is that curiosity happens when people feel a gap in their knowledge about something... Laying out a question and inviting others to ponder it will help keep the individual's attention, because it gets them mentally involved and because there's an element of unexpectedness. This is why cliffhangers are often used at the end of television soap operas, to get viewers to tune in to the next episode, or at the end of chapters in a thriller to keep readers glued to the page.

Taking the gap theory a step further, Harvard physics professor Eric Mazur has developed a teaching tool he calls concept testing. Mazur has found that posing a question to stimulate curiosity and then asking students to vote publicly on the answer makes them more engaged and curious about the outcome. Mazur has also found that fostering disagreement among students is particularly effective at stimulating interest. Not only has their curiosity been stimulated, but learning the answer now has personal relevance — it will show whether or not they're smarter than their classmates.

See also: Guthrie, I'm Curious: Can We Teach Curiosity?

comment by folkTheory · 2012-02-24T22:33:18.798Z · LW(p) · GW(p)

So, should I start consuming butter half-sticks?

Replies from: vali, Alicorn
comment by vali · 2012-02-25T21:52:55.881Z · LW(p) · GW(p)

The study had just 27 participants, and wasn't double-blind. While it was an interesting experiment, I certainly wouldn't act on it, except perhaps to read another, similar experiment.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-02-26T01:30:40.866Z · LW(p) · GW(p)

It doesn't seem like the cost of a self-experiment here would be very high, and you are the only research subject that really matters to yourself...

comment by Alicorn · 2012-02-25T00:13:37.946Z · LW(p) · GW(p)

At least eat them with something, ew. Melt it in a pan and fry something in it.

comment by [deleted] · 2012-03-03T20:29:39.814Z · LW(p) · GW(p)

Curious about what though? It seems like a very important piece of the above lesson is missing if we have no guidance as to what we should be curious about. It does me no good, perhaps no small amount of harm, to be intensely curious about the details of a fictional world. I ought not be curious about the personal life of my neighbor. And while curiosity about insects may serve some, it's unlikely to do most people any good at all. I think we have no good reason to believe that we're generally curious about the right sorts of things.

And there seems to be a deeper problem here too. Some things about which we're curious might just not be very knowable. I can study ancient history all I like, but there's a limit to what we can know about what caused the Peloponnesian War, not just because of the temporal distance or lack of records, but because there's a lot of fundamental incoherence to things like that. History, to take one example, just isn't that knowable. Curiosity about history can be rewarded, but only a very restrained curiosity.

I think this is where the idea that 'a burning itch to know is better than a vow to pursue the truth' breaks down: I've felt that burning itch to know, and I know from experience that it doesn't by itself distinguish between worthy topics of curiosity and unworthy ones. A vow, at least, already has the idea of seriousness and purposefulness built into it.

comment by jwhendy · 2012-02-26T20:54:42.405Z · LW(p) · GW(p)

...it will make you light and eager, and give purpose to your questioning and direction to your skills.

And this article rekindled that for me. I have a motivation to explore I have not felt in quite some time. Thanks for writing this, Luke!

comment by skepsci · 2012-02-26T08:19:57.100Z · LW(p) · GW(p)

If you have beliefs about the matter already, push the "reset" button and erase that part of your map. You must feel that you don't already know the answer.

It seems like a bad idea to intentionally blank part of your map. If you already know things, you shouldn't forget what you already know. On the other hand, if you have reason to doubt what you think you know, you should blank the suspect parts of your map because you have reason to doubt them, and not artificially as part of a procedure for generating curiosity.

I think what you may be trying to say is that it is good practice to periodically rethink what you think you know, and make sure that A) you remember how you came to believe what you believe, and B) your conclusions still make sense in light of current evidence. However, when you do this, it is important not to get into the habit of quickly returning to the same conclusions for the same reasons. If you never change your conclusions while rethinking them, that's probably a sign that you are too resistant to changing your mind.

comment by Giles · 2012-02-26T00:19:07.659Z · LW(p) · GW(p)

This is all good stuff, but it makes curiosity sound complicated. I thought that the point of using curiosity as a hook into epistemic rationality is that once you feel the emotion of curiosity, your brain often just knows what to do next.

Also curiosity feels good.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-02-26T01:31:20.447Z · LW(p) · GW(p)

Curiosity in itself isn't necessarily complicated, and yes it feels good, but a lot of times, for a lot of people, it doesn't happen by itself. And it sounds like the process of producing curiosity in oneself is more complicated than simply feeling it naturally.

comment by MixedNuts · 2012-02-25T11:27:46.749Z · LW(p) · GW(p)

Bug report: step 2, exercise 2.1. If the consequences of my current best guess being wrong are much less dire than the consequences of being wrong on recomputing, my social circle thinks that the plan based on this current best guess is very important, and I hate the people who disagree, then I'm terrified of trying to recompute.

comment by Bruno_Coelho · 2012-02-24T06:30:26.484Z · LW(p) · GW(p)

People try very hard to ignore the consequences of being wrong. Fear in this case is dangerous, because it causes stagnation and breaks curiosity.

Replies from: BillyOblivion
comment by BillyOblivion · 2012-03-07T08:10:34.377Z · LW(p) · GW(p)

My father was in the Korean war, on the peninsula.

He did not have access to butter or milk for something like 9 months.

When he got R & R to Tokyo he ate a pound of butter with a knife and fork.

I should note that while I don't know how fast he could do math in his head, he could count/remember cards like nobody's business. Also, he died of a massive coronary at 64, weighing close to 290 pounds.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-03-07T09:01:32.798Z · LW(p) · GW(p)

Are you implying that there is a causal link between his consumption of butter and his weight gain?

Replies from: BillyOblivion
comment by BillyOblivion · 2012-03-10T12:36:45.864Z · LW(p) · GW(p)

Bah. It looks like an earlier, much more detailed and funnier reply got eaten by something.

But to answer: no, I don't think specifically and narrowly that his butter eating led to his rather large size, but rather his eating of almost everything that would taste good, and in quantities that were sometimes moderately impressive.

Given how much he ate and smoked, and how little he moved, it's a wonder he wasn't twice as big and that he lived as long as he did.