post by [deleted] · GW

This is a link post for



comment by Steven Byrnes (steve2152) · 2020-04-08T20:50:42.485Z · LW(p) · GW(p)

Interesting! But I think I largely disagree.

I don't think that latency is the main reason we need emotions. For example, guilt is not particularly time-sensitive. I also don't think that subconscious processing is exactly the same as emotions. In fact, the term "subconscious" lumps together "some of the things happening in the neocortex" with "everything happening elsewhere in the brain" (amygdala, tectum, etc.) which I think are profoundly different and well worth distinguishing.

I am currently writing a giant post on emotions in the brain (3000 words and 3 diagrams so far!) which I hope to finish in the next couple weeks, so I'll just defer a proper reply until then. Sorry, I know that's annoying. In the meantime, my earlier post Human Instincts, Symbol Grounding and the Blank-Slate Neocortex [LW · GW] is a hint of my perspective, including why I think a neocortex by itself cannot do anything biologically useful.

Replies from: abe-dillon
comment by Abe Dillon (abe-dillon) · 2020-04-11T22:51:04.942Z · LW(p) · GW(p)

Thanks for the insight!

This is actually an incomplete draft that I didn't mean to publish, so I do intend to cover some of your points. It probably won't go into the depth you're hoping for, since it's pretty much just a synthesis of information from a segment of a Radiolab episode and three theorems about neural networks.

My goal was simply to use those facts to provide an informal proof that a trade-off exists between latency and optimality* in neural networks, and that said trade-off explains why some agents (including biological creatures) might use multiple models at different points on that trade-off instead of devoting all their computational resources to one very deep model or one low-latency model. I don't think it's a particularly earth-shattering revelation, but sometimes even pretty straightforward ideas can have an impact**.
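To make that concrete, here's a minimal toy sketch in Python (the layer sizes, urgency threshold, and names are all made up for illustration): an agent keeps a shallow, low-latency model alongside a deep, slower one and dispatches on how much time the situation allows.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(layer_sizes):
    """Random untrained MLP weights; stands in for any trained network."""
    return [rng.standard_normal((m, n)) * 0.1
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(weights, x):
    for w in weights:
        x = np.tanh(x @ w)  # every extra layer buys quality but costs latency
    return x

fast_path = make_mlp([8, 4])          # shallow: quick, coarse response
slow_path = make_mlp([8, 64, 64, 4])  # deep: slow, closer-to-optimal response

def act(stimulus, time_budget_ms):
    """Dispatch on urgency instead of always using the deepest model."""
    if time_budget_ms < 50:  # arbitrary threshold, purely illustrative
        return forward(fast_path, stimulus)
    return forward(slow_path, stimulus)

stimulus = rng.standard_normal(8)
print(act(stimulus, time_budget_ms=10))   # reflex-like answer
print(act(stimulus, time_budget_ms=500))  # deliberate answer
```

The point is only that when deadlines vary, splitting resources across two points on the trade-off curve can serve an agent better than committing everything to either extreme.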

I also don't think that subconscious processing is exactly the same as emotions.

The position I present here is a little more subtle than that. It doesn't directly equate subconscious processing with emotions. I state that emotions are a conscious recognition of physiological processes triggered by faster stimulus-response paths in your nervous system.

The examples given in the podcast focus mostly on fight-or-flight until they get later into the discussion of research on paraplegic subjects. I think that might hint at a hierarchy of emotional complexity. It's easy to explain the most basic 'emotions' that even the most primitive brains should express. As you point out, emotions like guilt are more difficult to explain. I don't know if I can give a satisfactory response to that point because it's beyond my lay understanding, but my best guess is that this feedback loop from stimulus to response back to stimulus and so on can be initiated by something other than direct sensory input, and the information fed back might include more than physiological state.

Each path has some input which propagates through it and results in some output. The output might include more than direct physiological control signals, such as commands to various muscles; it might also include more abstract information, such as a compact representation of the internal state of the path. Likewise, the input might include more than sensory input, and the feedback might be more direct.

For instance, I believe I've read that some parts of the brain receive a copy of recent motor commands, which may or may not correspond to physiological change. Along with the indirect feedback from sensors that measure your sweaty palms, the output of a path may directly feed back the command signals to release hormones or to blink eyes or whatever as input to other paths. A path might even output signals that don't correspond to any physiological control; they may be specifically meant to be feedback signals that communicate more abstract information.

Another example: you don't cry at the end of Schindler's List because of any direct sensory input. The emotion arises from a more complex, higher-order cognition of the situation. Perhaps there are abstract outputs from the slower paths that feed back into the faster paths, which makes the whole feedback system more complex and allows higher-order cognition paths to indirectly produce physiological responses that they don't directly control.
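Here's a rough sketch of that wiring (every name, dimension, and connection here is my hypothetical illustration, not a claim about actual neuroanatomy): each path consumes sensory input plus feedback from the other path, and emits both control signals and an abstract summary that becomes the other path's feedback on the next tick.

```python
import numpy as np

rng = np.random.default_rng(1)

SENSE, FEEDBACK, CONTROL, ABSTRACT = 8, 4, 3, 4  # arbitrary sizes

def make_path():
    """One stimulus-response path, modeled as a single random linear map."""
    return rng.standard_normal((SENSE + FEEDBACK, CONTROL + ABSTRACT)) * 0.1

fast, slow = make_path(), make_path()

def step(path, sense, feedback):
    out = np.tanh(np.concatenate([sense, feedback]) @ path)
    return out[:CONTROL], out[CONTROL:]  # (control signals, abstract summary)

fast_fb = np.zeros(FEEDBACK)  # abstract signal flowing slow -> fast
slow_fb = np.zeros(FEEDBACK)  # abstract signal flowing fast -> slow

for t in range(3):
    sense = rng.standard_normal(SENSE)
    # Fast path: its control output stands in for hormones, blinking, etc.;
    # its abstract output tells the slow path what it just did.
    fast_ctl, slow_fb = step(fast, sense, fast_fb)
    # Slow path: its abstract output feeds back into the fast path, so
    # higher-order cognition can indirectly trigger a physiological response.
    slow_ctl, fast_fb = step(slow, sense, slow_fb)
    print(t, fast_ctl.round(2), slow_ctl.round(2))
```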

Another piece of the puzzle may be that the slowest path, which I (perhaps erroneously) refer to as consciousness, is supposedly where the physiological state triggered by faster paths gets labeled. That slower path almost certainly uses other context to arrive at such a label, since a physiological state can have multiple causes. If you've just run a marathon on a cold day, it's unlikely you'll conclude you're frightened when you register an elevated heart rate, sweaty palms, goosebumps, etc.
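As a toy illustration of that labeling step (the contexts and labels are invented purely to show the shape of the idea):

```python
def label_state(aroused, context):
    """The slow path labels a physiological state only after consulting context."""
    if not aroused:
        return "calm"
    if context == "just ran a marathon on a cold day":
        return "exertion"  # same body state, non-emotional explanation
    if context == "being chased":
        return "fear"
    return "unexplained arousal"

aroused = True  # elevated heart rate, sweaty palms, goosebumps
print(label_state(aroused, "just ran a marathon on a cold day"))  # exertion
print(label_state(aroused, "being chased"))                       # fear
```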

I lump all those 'faster stimulus-response paths', including reflexes, under the umbrella term 'subconscious', which might not be correct. I'm not sure if any of the related fields (neurology, psychology, etc.) have a more precise definition of 'subconscious'. The term used in the podcast is the 'autonomic nervous system', which, according to Google, means: the part of the nervous system responsible for control of the bodily functions not consciously directed, such as breathing, the heartbeat, and digestive processes.

There's a bit of a blurred line there, since reflexes are often included as part of the autonomic nervous system even though they govern responses that can also be consciously directed, such as blinking. Also, I believe the jury is still out on what, exactly, 'consciously directed' means since, AFAIK, there's no generally agreed-upon formal definition of the word 'consciousness'.

In fact, the term "subconscious" lumps together "some of the things happening in the neocortex" with "everything happening elsewhere in the brain" (amygdala, tectum, etc.) which I think are profoundly different and well worth distinguishing. ... I think a neocortex by itself cannot do anything biologically useful.

I think there are a lot of words related to the phenomena of intelligence and consciousness that have nebulous, informal meanings which vaguely reference concrete implementations (like the human mind and brain), but which could and should be formalized mathematically. In that pursuit, I'd like to extract the essence of those words from implementation details like the neocortex.

There are many other creatures, such as octopuses and crows, which are on a similar evolutionary path of increasing intelligence but have completely different anatomy from humans and from each other. I agree that focusing research on the neocortex itself is a terrible way to understand intelligence. It's like trying to understand how a computer works by looking only at the media files on the hard drive, ignoring the BIOS, operating system, file system, CPU, and other underlying systems that render that data useful.

I believe, for instance, that 'Artificial Intelligence' is a misnomer. We should be studying the phenomenon of intelligence as an abstract property that a system can exhibit regardless of whether it's man-made. There is no scientific field of artificial aerodynamics or artificial chemistry: there's no fundamental difference in the way air behaves when it interacts with a wing that depends upon whether the wing is natural or man-made.

Without a formal definition of 'intelligence' we have no way of making basic claims like "system X is more intelligent than system Y". It's similar to how fields like physics were stuck until previously vague words like 'force' and 'energy' were given formal mathematical definitions. The engineering of heat engines benefited greatly when thermodynamics formalized ideas like 'heat' and 'entropy'. Computer science wasn't really possible until Church and Turing formalized the vague ideas of computation and computability. Later, Shannon formalized the concept of information and enabled even greater progress.

We can look to specific implementations of a phenomenon to draw inspiration and help us understand the more universal truths about the phenomenon in question (as I do in this post), but if an alien robot came from outer space and behaved in every way like a human, I see no reason to treat its intelligence as a fundamentally distinct phenomenon. When it exhibits emotion, I see no reason to call it anything else.

Anyway, I haven't read your post yet, but I look forward to it! Thanks, again!

*Here, 'optimality' refers to producing the absolute best outputs for a given input. It's independent of the amount of resources required to arrive at those outputs.

**I mean: Special Relativity (SR) came from the fact that the velocity of light (measured in space/time) appeared constant across all reference frames according to Maxwell's equations (and was backed up by observation). Einstein drew the genius but obvious (in hindsight) conclusion that the only way a value of space/time can remain constant between reference frames is if the measures of space and time themselves are variable. There are only three terms in c = space/time: if c is constant and different reference frames demand variability, then space and time must not be constant. The Lorentz transform is the only transform consistent with such dimensional variability between reference frames.
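For reference, the standard textbook form of the Lorentz transformation for relative motion at speed v along the x-axis (stated, not derived, here):

```latex
% Lorentz transformation; c is the invariant speed of light.
\begin{aligned}
t' &= \gamma\left(t - \frac{v\,x}{c^{2}}\right), \\
x' &= \gamma\,(x - v\,t), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\end{aligned}
```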

Not that I think I'm presenting anything as amazing as Special Relativity or that I think I'm anywhere near Einstein. It's just a convenient example.

comment by Gordon Seidoh Worley (gworley) · 2020-04-09T17:00:37.419Z · LW(p) · GW(p)

Does the existence of emotions in humans need to be justified? Emotions aren't a design choice; they're an evolved feature, and there doesn't seem to be much point in arguing for why they should be there (making a normative claim about the existence of emotions), since they simply are there and there isn't much to do about it.

Now if the question were something like "I'm building an AGI; should I design something analogous to what we experience as emotions into it?", that seems like a more interesting reason to possibly need a justification for their usefulness.

Replies from: abe-dillon
comment by Abe Dillon (abe-dillon) · 2020-04-10T15:33:40.916Z · LW(p) · GW(p)

In short, your second paragraph is what I'm after.

Philosophically, I don't think the distinction you make between a design choice and an evolved feature carries much relevance. It's true that some things evolve that have no purpose, and it's easy to imagine that emotions are one of those things, especially since people often conceptualize emotion as the "opposite" of rationality. However, some things evolve that clearly do serve a purpose (in other words, there is a justification for their existence), like the eye. Of course nobody sat down with the intent to design an eye. It evolved, was useful, and stuck around because of that utility. The utility of the eye (its justification for sticking around) exists independently of whether the eye exists. A designer recognizes the utility beforehand and purposefully implements it. Evolution "recognizes" the utility after stumbling into it.