The Pervasive Illusion of Seeing the Complete World

post by Shmi (shminux) · 2023-02-09T06:47:36.628Z · LW · GW · 1 comment


It is a tautology that we do not notice our blind spots. 

It is not a tautology that we forget they exist, shortly after learning that they do. 

Michael Crichton's Gell-Mann Amnesia effect, as quoted by gwern [LW · GW], is one of many examples: we know that we cannot assess the veracity of the news with any accuracy, yet we forget this the moment the observation stops hitting us in the face.

Scott Alexander's classic What Human Experiences Are You Missing Without Realizing It is an even more egregious example: the data about our blind spots keeps coming, and we intuitively rationalize it away.

The ironic part is that everyone's favorite LLM keeps forgetting about its own blind spot, the way a human would.

I guess there is something antimemetic about blind spots, which is not very surprising.

Speculation: These meta-blind spots tend to develop around actual blind spots naturally, because of the way the brain works. We notice stuff that changes, because the brain is akin to a multi-level prediction error minimization machine. If you wear cracked or dirty glasses, you stop noticing them after a short time, unless the cracks or dirt actively interfere with something you have to see, reminding you of them. Worse than that, you forget that the cracks exist at all, unless reminded. This meta-blind spot, or tower of blind spots, can probably go several levels up, as long as no prediction error is detected at each level.
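To make the habituation story concrete, here is a minimal toy sketch of a single-level prediction error minimizer (my own illustration with made-up numbers, not a model of the brain): a predictor that keeps updating toward its input stops producing error for a constant stimulus, so the "cracks" vanish from the error signal until something changes.

```python
# Toy habituation: a one-level prediction error minimizer.
# The learning rate and inputs are arbitrary illustrative values.

def habituate(inputs, learning_rate=0.5):
    """Return the prediction error felt at each time step."""
    prediction = 0.0
    errors = []
    for x in inputs:
        error = x - prediction               # surprise: input vs. prediction
        prediction += learning_rate * error  # update prediction toward input
        errors.append(round(error, 3))
    return errors

constant_crack = [1.0] * 8            # a crack that just sits there
interfering = [1.0] * 4 + [3.0] * 4   # a crack that starts blocking the view

print(habituate(constant_crack))  # [1.0, 0.5, 0.25, ...] -> error fades, crack "disappears"
print(habituate(interfering))     # error spikes when the crack starts interfering
```

Stack the same loop several levels high, each level predicting the error of the level below, and you get the tower: once a level's error goes quiet, nothing above it ever fires.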

Another speculation: the tower of blind spots creates an illusion of seeing the complete world, with nothing else existing. After all, to notice the existence of something, the brain needs to be able to compare predictions with inputs, and if there are no inputs at any level, there is nothing to activate the prediction error minimization machine.

This was the descriptive part. The prescriptive part is, as usual, much more speculative. 

An aside: It is worth explicitly paying attention when a write-up switches from descriptive to prescriptive, from analysis to offering solutions. For example, Marx gave a fantastically good analysis of the problems with 19th century capitalism, but then offered a fantastically bad prescription for fixing them, with disastrous consequences. My wild guess is that the difference arises because our prediction abilities are a blind spot in themselves, and self-calibration is a comparatively new, rare, and hard rationality skill.

So, the prescriptive part is to identify (hard) and topple (easier) the blind spot towers. For example, once you can conceive of God not being the ultimate source of everything, you can start questioning the unstated assumptions, jump-starting the prediction error minimization machine, with the data coming from outside and from "inside". The source of data can, of course, be corrupted by emotions, and is a tower of blind spots in itself. Thus one can reason oneself into atheism or Pascal's wager equally easily.

Oh, and just to undermine everything I said so far, here is a completely personal view that I reasoned myself into some years ago, one that clashes severely with this site's consensus. The consensus is that there is an external reality that we, as embedded agents, build maps of: the map/territory dichotomy. I believe it is one of those blind spots, and that a more accurate model is that it is maps all the way down [LW(p) · GW(p)]. (And that terms like "exist", "reality", "truth" and "fact" have a limited domain of applicability that is constantly and subconsciously exceeded by nearly everyone.)

1 comment


comment by tailcalled · 2023-02-09T12:54:18.807Z · LW(p) · GW(p)

Strongly agree with it being a very concerning illusion. I wrote about related things under Random facts can come back to bite you [LW · GW] and Apparently winning by the bias of your opponents [LW · GW]. Other writings I consider relevant include Wittgenstein's revenge and Ignorance, a skilled practice.

Especially with the rise of language models, where there are certain kinds of knowledge they are excellent at but lots of knowledge they are terrible at, I've focused on the pieces they don't understand and have come to feel that there is lots of stuff I don't understand or know very well.

> Oh, and just to undermine everything I said so far, here is a completely personal view that I reasoned myself into some years ago, one that clashes severely with this site's consensus. The consensus is that there is an external reality that we, as embedded agents, build maps of: the map/territory dichotomy. I believe it is one of those blind spots, and that a more accurate model is that it is maps all the way down. (And that terms like "exist", "reality", "truth" and "fact" have a limited domain of applicability that is constantly and subconsciously exceeded by nearly everyone.)

I'm not sure what the value in going mysticist about it is. To me, it seems like the appropriate solution is mainly to fill the map up with scribblings like "there's probably tons of interesting things going on here that I have no idea about", and to mark big parts of the map with warnings like "I only believe this because John Doe said so, TODO figure out if John Doe is trustworthy and consider taking a direct peek yourself".