Chapter 9: Why can it Select?
Our final problem. Let's take a look at our task:
- We know that memories are coupled subnetworks in the Knowledge Graph.
- We know that we can recall them using our Working Memory.
- We understood the regulation mechanism that motivates us to do it, and we saw how it binds to Late Long-Term Potentiation, which allows us to store data for long periods.
- We know that we can receive signals about activations of memory parts, and about their strengths.
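Here is a minimal sketch of that recap in Python. Everything in it (the `Node` and `KnowledgeGraph` names, the link weights, the example references) is an illustrative assumption of mine, not something the model above pins down:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A memory part: one node of a coupled subnetwork."""
    name: str
    # Weighted links to the rest of the subnetwork.
    links: dict[str, float] = field(default_factory=dict)

@dataclass
class KnowledgeGraph:
    nodes: dict[str, Node] = field(default_factory=dict)

    def activate(self, name: str) -> dict[str, float]:
        """Activate one node and return the signals we receive back:
        its neighbors and the strengths of their activations."""
        node = self.nodes.get(name)
        return dict(node.links) if node else {}

# Working Memory: a small set of object-references into the graph.
working_memory: list[str] = ["whale shark", "elephant"]
```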
How do we select which signal or Reference will pass the spam filter of attention? What filtering criteria do we use? And how should it work in terms of activations in neurons?
Let's take a simple example. Please fill in the blanks, and try to observe how you do it:
- The whale shark is ... than an elephant.
- The ... is smaller than Jupiter.
- Do you ...?
I am quite sure it went something like this:
- Bigger or smaller? I don't know. The average whale is bigger, that's obvious. But a whale shark?
- Guy, are you serious? There are a lot of things smaller than Jupiter. All the planets in the solar system are smaller than it!
- Do I what? Oh, I get it: you've gone completely mad.
What you were doing was creating hypotheses. Bigger or smaller? Which planet did he mean? Has he gone mad? You were trying to explain what was happening with the information you had. You were trying to guess.
And you do it all the time: while watching TV series, while talking with someone, while choosing between chocolate and strawberry ice cream, while reading this sequence of articles.
But how do we describe the predictions you are making, and how do we choose between them?
Prediction is the easy part: you have information, you activate an object in the Knowledge Graph, and you receive the results of your mind-search. Each activation has some strength; we pick the strongest of them and... And here we have a problem, because we want different meanings for the same things in different contexts.
Context is what we are currently thinking about, so let's simplify it to the object-references in our Working Memory.
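As a toy illustration (building on the hypothetical `KnowledgeGraph` above), the naive version of prediction would be: activate the reference, look at the returned strengths, take the maximum. This is exactly the version that breaks, because it ignores context:

```python
def predict_naive(graph: KnowledgeGraph, reference: str) -> str | None:
    """Activate one object and return the single strongest activation.
    Context-blind: the same winner comes back no matter what else
    is currently in Working Memory."""
    activations = graph.activate(reference)
    if not activations:
        return None
    return max(activations, key=activations.get)
```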
How should a filter work that has one reference serving as context, and a bunch of other references that could be related to that object? How does it pass only the meaningful, only the bound references?
Bound. BOUND!
What if the filter co-activates all the references in pairs, looks at the results, and chooses the strongest signals? That would let it filter out complete junk, and it would explain the context-dependency of meanings.
Checking all the pairs doesn't require any complex mechanism.
So, your attention is responsible for recalling objects by references: not only the objects you are working on right now, but all the possible pairwise combinations of neighboring activations! And the references to the strongest activation patterns replace your current references in WM.
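A sketch of that pairwise filter, again on the toy `KnowledgeGraph`. The scoring rule (multiplying the two signals a node receives from both references) is my assumption; the text above only says to co-activate the pairs and choose the strongest:

```python
from itertools import combinations

def attend(graph: KnowledgeGraph, wm: list[str]) -> list[str]:
    """Co-activate every pair of references in Working Memory and
    score the nodes that respond to both. Junk shares no neighbors
    with the context, so its combined signal stays near zero."""
    combined: dict[str, float] = {}
    for a, b in combinations(wm, 2):
        act_a, act_b = graph.activate(a), graph.activate(b)
        for node in act_a.keys() & act_b.keys():
            # Bound references reinforce each other.
            combined[node] = combined.get(node, 0.0) + act_a[node] * act_b[node]
    # References to the strongest patterns replace the current ones.
    strongest = sorted(combined, key=combined.get, reverse=True)
    return strongest[: len(wm)] if strongest else wm
```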
But here comes another problem. Why don't we keep thinking about the same thing forever? Why do we stop?
I think attention can signal to the regulation mechanism that it is worn out by a task. What do we feel when we talk to someone who doesn't even try to understand us? Why do we stop listening to the monotonous voice of our chemistry teacher? We lose patience. It seems to be a kind of notification that we are doing a useless job. We lose interest. It's the process opposite to the one that regulates curiosity.
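One toy way to picture this (entirely my speculation, layered on the sketches above): a patience value that decays whenever a pass of the filter recalls nothing new, and stops the loop when it runs out:

```python
def think(graph: KnowledgeGraph, wm: list[str], max_steps: int = 100) -> list[str]:
    """Run the attention filter until interest runs out."""
    patience = 1.0
    for _ in range(max_steps):
        if patience <= 0.1:
            break             # interest is gone: stop thinking about it
        new_wm = attend(graph, wm)
        if new_wm == wm:
            patience *= 0.5   # same result again: we're frazzled out
        else:
            patience = 1.0    # novelty restores interest
            wm = new_wm
    return wm
```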
And that was the last part of our puzzle.
Let's assemble it!