Against Amazement

post by SquirrelInHell · 2016-09-20T19:25:25.238Z · LW · GW · Legacy · 6 comments

This is a link post for http://squirrelinhell.blogspot.com/2016/09/against-amazement.html

I have moved this post to my blog: http://squirrelinhell.blogspot.com/2016/09/against-amazement.html


comment by moridinamael · 2016-09-20T20:07:28.375Z · LW(p) · GW(p)

There are other emotional reactions which should register as confusion but don't.

Imagine a smart person who sees asphalt being deposited to pave a road. "How disgusting," they think. "Surely our civilization can think of something better than this." They spend a few minutes ruminating on various road-construction and maintenance solutions that would obviously be better than asphalt, then get distracted and never think about it again.

They thus manage to never realize that asphalt is a fantastic solution to this problem, that stacks of PhDs have been written on asphalt chemistry and thermal processes, that it's a highly optimized, cheap, self-healing material, that it's the most economical solution by leaps and bounds. All they noticed was disgust based purely on error and ignorance.

Any thought of the form "That's stupid, I can easily see a better way" should qualify as confusion.

Replies from: MrMind
comment by MrMind · 2016-09-26T13:53:23.904Z · LW(p) · GW(p)

Confusion is a sign that a mental model is incoherent, and as a general principle we cannot have incoherent models of facts. But a model can be perfectly coherent without being sound or complete.
"I can easily see a better way" is a sign of a model being incomplete, and should not be categorized as confusion.

comment by Houshalter · 2016-09-23T11:23:22.917Z · LW(p) · GW(p)

Juergen Schmidhuber has a theory of artificial curiosity. His theory proposes that seeking confusion is actually a good thing: agents that seek out situations where surprising things happen put their internal models to the test and learn the most. And that's all curiosity is.

Amazement is just a form of curiosity. People who are interested in AlphaGo have had their internal models of AI progress challenged, and are updating them.
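The core idea can be sketched in a few lines: an agent's intrinsic reward is its learning progress, i.e. how much its prediction error shrinks after an observation. (This is a toy illustration, not Schmidhuber's actual formulation; the class and parameter names are my own invention.)

```python
class CuriousAgent:
    """Toy agent whose intrinsic reward is its learning progress:
    how much its prediction error shrinks after each observation."""

    def __init__(self):
        self.estimate = 0.5  # running estimate of an unknown quantity

    def intrinsic_reward(self, observation, lr=0.2):
        error_before = abs(observation - self.estimate)
        self.estimate += lr * (observation - self.estimate)  # update the model
        error_after = abs(observation - self.estimate)
        return error_before - error_after  # learning progress

agent = CuriousAgent()
# A "boring" stream (already perfectly predicted) yields zero reward,
# so a curious agent would seek out the "surprising" stream instead.
boring = sum(agent.intrinsic_reward(0.5) for _ in range(10))
surprising = sum(agent.intrinsic_reward(0.9) for _ in range(10))
print(boring, surprising)  # 0.0, then a positive total
```

Under this framing, amazement at AlphaGo is just a large burst of learning progress: the observation that forces the biggest model update is the most rewarding one.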

comment by ChristianKl · 2016-09-20T20:22:12.081Z · LW(p) · GW(p)

The problem isn't amazement but cheap amazement. It's like how the problem with eating fast food at McDonald's isn't eating food per se, but eating food that is too easily digestible.

The amazement that Feynman talks about, which comes from understanding a flower on a deep level, is much better.

Noticing amazement can be as wonderful as noticing confusion.

comment by Lumifer · 2016-09-21T14:59:01.177Z · LW(p) · GW(p)

As the old joke goes, Alzheimer's is the best illness: there is no pain, and each morning you get lots of interesting news.

But note that improving the model would result in fewer pleasant experiences of wonder, but also fewer unpleasant experiences of disappointment. Basically, you reduce your variance, but it's not obvious to me that your imperfect model necessarily has a pessimistic bias.
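The variance-reduction point can be made concrete with a toy simulation (purely illustrative; the Gaussian setup and function names are my assumptions, not anything from the thread): surprise is the gap between outcome and prediction, and a better model's predictions hug the truth more tightly, so surprises shrink in both directions.

```python
import random

def surprise_samples(model_sd, true_mean=0.0, true_sd=1.0, n=10_000, seed=1):
    """Surprise = outcome minus prediction. A better model predicts with
    noise closer to the truth, so its surprises have lower variance."""
    rng = random.Random(seed)
    return [rng.gauss(true_mean, true_sd) - rng.gauss(true_mean, model_sd)
            for _ in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rough = surprise_samples(model_sd=2.0)   # poor model: noisy predictions
sharp = surprise_samples(model_sd=0.1)   # better model: tight predictions
print(variance(rough), variance(sharp))  # the second is much smaller
```

Note that both kinds of surprise shrink symmetrically here: nothing in the setup makes the imperfect model systematically pessimistic, which is the point about bias above.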

Replies from: MrMind
comment by MrMind · 2016-09-26T13:54:05.490Z · LW(p) · GW(p)

Indeed. Every pleasant surprise is an update, but not every update is a pleasant surprise.