Reality has a surprising amount of detail
post by jsalvatier
This is a link post for http://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail
Comments sorted by top scores.
comment by RomeoStevens ·
2017-05-14T08:08:03.427Z · LW(p) · GW(p)
This is why I like Naruto as a rationalist fanfic substrate: perceptual skills are explicitly upstream of action skills in the Naruto universe. I think this mirrors the real universe and explains much of the valley of bad self-help. Action skills are pointless if you don't have the cues on when, where, and why to deploy them.
Another frame on the same concept: don't keep teaching people spells when their mana pool size sucks.
Replies from: Raemon
↑ comment by Raemon ·
2017-05-14T10:33:04.908Z · LW(p) · GW(p)
I currently have almost zero knowledge of Naruto and I'm interested in hearing more things about the perception/action skills thing as it applies to Naruto Classic (and/or rationalist!naruto)
Replies from: RomeoStevens, jsalvatier
↑ comment by RomeoStevens ·
2017-05-15T06:52:02.352Z · LW(p) · GW(p)
Time Braid and The Waves Arisen. Super fun reads, and they also seem to put me in agenty mode better even than other rationalist fics. I haven't seen the Naruto anime and I got on just fine with both.
As for why my model works this way: it's heavily influenced by the research on deliberate practice. Essentially, that research caused me to see expert performance as the combination of several core traits, all of which are predicated on perceptual skills.

The first is generating the correct chunkings that mirror the causal structure of the domain, which are composed of distinctions that you must learn to make. If you've ever done something like music, where you went from hearing complicated sounds to hearing specific 'phrases', this is what I'm pointing to with perception of chunks. To build these up, one also has to isolate the feedback/reward loop that allows you to zero in on your performance of that chunk: cleanly delineating the hits from the misses, and having that information arrive with the smallest time delay possible.

The other skill is navigating the chunked tree, which is predicated on perception of the cues/proxies that indicate which decision paths to take in your knowledge tree. This structure can then be activated by experiences in the real world, where you notice something that looks like a chunk you've already seen. Normal self-help techniques generally don't have these hooks that fire at specific times and places, meaning you likely just don't remember to use them.
comment by tristanm ·
2017-05-15T17:29:05.456Z · LW(p) · GW(p)
I think the first time this hit me I was looking at some software which allowed you to generate the Mandelbrot set and zoom into any level of detail you wanted, and thinking "how can this be possible? Does it really go on forever?" But it wasn't just seeing the level of detail go on infinitely that drove the significance home, but rather when I finally looked up the algorithm that generates the Mandelbrot set, and saw how simple it was. That was what made me first think, "Yeah, we probably can't know everything there is to know."
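The algorithm really is remarkably simple. A minimal sketch of the standard escape-time formulation in Python (the function name is illustrative; the escape radius of 2 is the standard bound): a point c in the complex plane belongs to the Mandelbrot set if iterating z → z² + c from z = 0 never diverges.

```python
def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    """Return the number of iterations of z -> z**2 + c (from z = 0)
    before |z| exceeds 2, or max_iter if it never escapes
    (i.e. c appears to be in the Mandelbrot set)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# Points inside the set never escape; points far outside escape immediately.
print(mandelbrot_iterations(0j))      # 100 (in the set)
print(mandelbrot_iterations(2 + 2j))  # 1 (escapes on the first check)
```

Coloring each pixel by its iteration count is all it takes to produce the familiar infinitely detailed images — a handful of lines generating unbounded structure.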
comment by Raemon ·
2017-05-13T21:01:39.132Z · LW(p) · GW(p)
This felt important but I'm not quite sure what my next action is supposed to be.
Replies from: Uffetg, jsalvatier, jb55
↑ comment by Uffetg ·
2020-01-27T19:46:00.291Z · LW(p) · GW(p)
I feel that the first step is to be open to other ways of seeing things, or to be open to the notion that you might be wrong about your assumptions or certainties. Very first step, that one... to doubt yourself in small, healthy doses. Then you naturally begin to ask questions, hopefully well-constructed, which (if you're curious) will lead you to explore and gather data.
↑ comment by jsalvatier ·
2017-05-14T06:41:59.627Z · LW(p) · GW(p)
Yeah, I wasn't too specific on that. I do endorse the piece that jb55 quotes below, but I'm still figuring out what to tell people to do. I'll hopefully have more to say in the coming months.
↑ comment by jb55 ·
2017-05-14T00:03:30.447Z · LW(p) · GW(p)
The end had some good pointers:
seek detail you would not normally notice about the world. When you go for a walk, notice the unexpected detail in a flower or what the seams in the road imply about how the road was built. When you talk to someone who is smart but just seems so wrong, figure out what details seem important to them and why.
comment by jsalvatier ·
2017-05-13T20:31:15.281Z · LW(p) · GW(p)
John Maxwell posted this quote:
The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it.
-- Daniel Kahneman
Replies from: RomeoStevens
↑ comment by RomeoStevens ·
2017-05-14T08:12:24.114Z · LW(p) · GW(p)
Ontology lock-in: if you have nice stuff built on top of something, you'll demand proof commensurate with the value of those things when someone questions the base layer, even if the things built on top could be supported by alternative base layers. S1 is cautious about this, which is reasonable. Our environment is much safer for experimentation than it used to be.
Replies from: jsalvatier
comment by Daniel Bruce (daniel-bruce) ·
2020-01-11T21:43:50.336Z · LW(p) · GW(p)
“I would start doubting if I noticed numerous important mistakes in the details of my side’s data and my colleagues didn’t want to talk about it.” — I couldn't quite follow this. Could you give an example, or explain it a different way?
Replies from: daniel-adeyemi
↑ comment by Daniel Adeyemi (daniel-adeyemi) ·
2020-01-12T20:37:18.480Z · LW(p) · GW(p)
I think what he is saying is that they'd want to hear something overwhelmingly obvious; however, the beginning of doubt starts with noticing an error in one of the details of the assumption, and most of the time those details are subtle.
"The important details you haven’t noticed are invisible to you, and the details you have noticed seem completely obvious and you see right through them. This all makes it difficult to imagine how you could be missing something important."
Replies from: daniel-bruce
↑ comment by Daniel Bruce (daniel-bruce) ·
2020-01-12T18:10:38.368Z · LW(p) · GW(p)
"noticing an error in one of the details of the assumption" - I don't quite get this. Like, it would be an error in the mental model, or there would be a detail / piece of data which didn't quite fit the model or something else? I'm not arguing anything here, I just can't quite understand what you are saying.
comment by [deleted] ·
2017-05-14T05:53:24.781Z · LW(p) · GW(p)
Excellent article... and brilliantly explained. Reminds me of an old saying:
“Self-assertion may deceive the ignorant for a time; but when the noise dies away, we cut open the drum, and find it was emptiness that made the music.”
― Mary Elizabeth Braddon, Aurora Floyd