comment by Chris_Leong · 2017-10-03T08:25:27.925Z
I really like the term Unambiguous Local First Mover, although I'd personally simplify it to Unambiguous First Mover. It really captures how easy it is to fail to look for alternative explanations when an idea pops into your head immediately and also appears obviously right.
(I see Just-So stories as a separate term. They also describe cases where you are given an explanation that seems to perfectly explain everything, until you realise that there are other possible explanations as well. However, sometimes those alternatives can be very non-obvious, as with many of the Just-So stories that emerge from evolutionary psychology).
comment by hamnox · 2017-09-26T21:55:18.674Z
Note to readers: naively breaking your ability to be satisfied with incomplete answers may come with unwanted side effects.
If each truth is connected to many other truths, you can expect people to frequently underestimate the value of knowing many or varied truths... But this does not guarantee that spending your marginal effort on acquiring more knowledge will get you, personally, anywhere near a catalytic threshold of understanding. Especially if you're not even asking the right questions.
↑ comment by Conor Moreton · 2017-09-26T23:09:19.097Z
Strong agreement with this point; it seems to be expressing itself in multiple ways as people make comments here. If I ever flesh this out, I'll be sure to throw more caveats up front.
comment by StefanDeYoung · 2017-09-26T14:38:12.859Z
At what point do we judge that our map of this particular part of the territory is sufficiently accurate, and accept the level of explanation that we've reached?
If we're going to keep pulling on the thread of "why are the dominoes on the floor" past "Zacchary did it" then we need to know why we're asking. Are we trying to prevent future messes? Are we concerned about exactly how our antique set of dominoes was treated? Are we trying to figure out who should clean this mess?
If we're only trying to figure out who should clean the mess, then "Zacchary did it" is sufficient, and we can stop looking deeper.
↑ comment by the gears to ascension (lahwran) · 2017-09-26T19:24:13.971Z
Satvik Beri was recently telling me about a practice in management culture for this: "5 whys" - keep asking why several times, so that you get a causal chain as an answer. (Satvik then amended this to say "and when using it to debug systems, ask 5 whys at a whole bunch of different problems in parallel, and then look for common causes.")
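A minimal sketch of that amended version, for concreteness; `ask_why` and all the names here are hypothetical stand-ins for whatever actually answers the "why" question (a person, a log query, etc.):

```python
from collections import Counter

def five_whys(problem, ask_why, depth=5):
    """Follow one causal chain by repeatedly asking 'why?'.
    `ask_why` returns the cause of an event, or None if unknown."""
    chain = [problem]
    for _ in range(depth):
        cause = ask_why(chain[-1])
        if cause is None:
            break
        chain.append(cause)
    return chain

def common_causes(problems, ask_why, depth=5):
    """Satvik's amendment: run 5 whys on several problems in
    parallel, then flag causes that appear in more than one chain."""
    counts = Counter()
    for problem in problems:
        counts.update(five_whys(problem, ask_why, depth)[1:])  # causes only
    return [cause for cause, n in counts.items() if n > 1]

# Toy usage: a lookup table standing in for a human answering "why?".
causes = {
    "server crashed": "out of memory",
    "deploy failed": "out of memory",
    "out of memory": "cache never evicts",
}
print(common_causes(["server crashed", "deploy failed"], causes.get))
# ['out of memory', 'cache never evicts']
```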
↑ comment by Conor Moreton · 2017-09-26T17:23:42.941Z
Strong endorsement of this point. My concern is that I've noticed my brain stopping too soon when the actual reason that I'm investigating is to understand deeply.
There's also an interesting clash between my point up above and the concept of value-of-information. Not sure how to synthesize those, except maybe as twin double-checks on the process? With VOI, you ask yourself "should I stop digging? Will more info actually help?" while you're still intrinsically motivated to keep going. And with this, you ask yourself "should I keep going? Have I actually learned what I set out to learn?" when you feel satisfied and ready to stop.
↑ comment by magfrump · 2017-09-26T17:49:35.589Z
VoI isn't necessarily about telling yourself to stop earlier; it's about when to stop. It can also give you the answer that more info is very valuable and you should be more curious! Honestly I would say this post is about how our intuitive sense often underestimates the true value of information--there's no clash there at all.
↑ comment by Conor Moreton · 2017-09-26T19:41:44.134Z
Good point. I agree with that also.
↑ comment by StefanDeYoung · 2017-09-26T19:53:21.892Z
Agreed. In the article, Conor says
I claim that my brain frequently produces narrative satisfaction long before the story's really over, causing me to think I've understood when I really haven't.
When you use the phrase Value of Information, are you drawing from any particular definition or framework? Are you using the straightforward concept of placing value on having the information?
↑ comment by Unnamed · 2017-09-28T04:49:10.035Z
Value of information is a named concept from decision analysis. See: Wikipedia and previous LW discussion.
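For concreteness, here's a toy calculation of the simplest version of that concept, expected value of perfect information (EVPI); the scenario and all numbers below are invented for illustration:

```python
# Toy expected-value-of-perfect-information (EVPI) calculation, the
# standard decision-analysis quantity behind "value of information".
# The scenario and all numbers are made up for illustration.

priors = {"cause_A": 0.7, "cause_B": 0.3}  # beliefs about the true state

# Payoff of taking each action in each state of the world.
payoff = {
    ("fix_A", "cause_A"): 10, ("fix_A", "cause_B"): -5,
    ("fix_B", "cause_A"): -5, ("fix_B", "cause_B"): 10,
}
actions = ["fix_A", "fix_B"]

# Best expected payoff acting on the prior alone.
ev_without_info = max(
    sum(priors[s] * payoff[(a, s)] for s in priors) for a in actions
)

# Best expected payoff if an oracle first reveals the true state.
ev_with_info = sum(
    priors[s] * max(payoff[(a, s)] for a in actions) for s in priors
)

print(ev_with_info - ev_without_info)  # ~4.5: what the info is worth
```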
↑ comment by Conor Moreton · 2017-09-27T08:00:57.274Z
Not sure if you were asking me, but ... I mean VOI in terms of "do some form of actual math or pseudomath to determine whether you genuinely expect further investigation to be worth the time." Sort of the ... layman's technical sense of the term? Like, I'm not an economist or a mathematician, but my understanding and use of the term came from economics and math.
↑ comment by StefanDeYoung · 2017-09-27T16:38:10.400Z
Thanks. That answers my question; seeing VOI capitalised and immediately acronymed made me think that it might be a Named Concept.
When you're thinking about whether to keep pulling on the thread of inquiry, do you actually write down any pseudomath, or do you decide by feeling? Sometimes I think through some pseudomath, but I wonder whether it might be worth recording that information, or whether thinking on paper would produce better results than thinking "out loud."
↑ comment by Conor Moreton · 2017-09-27T17:23:36.481Z
I usually do something akin to "rubber ducking." I don't necessarily write it down, but I say it, out loud, in complete and coherent sentences as if talking to a real person. Disciplining myself to do that causes me to put things in explicit, sensible order, and if I try to put things in order and find that I can't, that's a good flag that I need to clear up some sort of confusion.
Often I do this with some kind of currency exchange—like, I'll try to quantify both the additional information and the time and effort I'm spending on it in a common unit so that it's not hard to do comparisons.
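A minimal sketch of that common-unit exchange; the hourly rate and dollar figures are entirely hypothetical placeholders, not anything Conor endorses:

```python
# Toy version of the "common currency" comparison described above.
# All numbers are hypothetical placeholders, not recommendations.

HOURLY_VALUE = 30.0  # dollars assigned to an hour of my time

def worth_digging(insight_value_dollars, hours_needed):
    """Keep investigating only if the expected value of the extra
    information beats its time cost, both expressed in dollars."""
    time_cost = hours_needed * HOURLY_VALUE
    return insight_value_dollars > time_cost

# e.g. an insight I'd trade ~$100 for, needing ~2 hours to dig up:
print(worth_digging(100.0, 2.0))  # True: 100 > 60, keep pulling the thread
```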