Migraine hallucinations, phenomenology, and cognition
post by Richard_Kennaway · 2021-05-08T15:56:37.985Z · LW · GW · 6 comments
I have several times in my life experienced migraine hallucinations. I call them that because they look exactly like what other people report under that name.
I'll come back to those.
If I look at someone, and hold up my hand so as to block my view of their head, I do not experience looking at a headless person. I experience looking at a normal person, whose head I cannot see, because there is something else in the way.
Why is this? One can instantly talk about Bayesian estimation, prior experience, training of neural nets, constant conjunction, and so on. However, a real explanation must also account for situations in which this filling-in does not occur. One ordinary example is the pictures here. I see these as headless men, not ordinary men whose heads I cannot see.
Migraine hallucinations provide a more interesting example. If you've ever had one, you might already know what I'm going to say, but I do not know if this experience is the same for everyone.
If I superimpose the hallucination on someone's head, they seem to have no head. I don't mean that I cannot see their head (although indeed I can't), but that I seem to be looking at a headless person. If I superimpose it on a part of their head, it is as if that part does not exist. Whatever the blind spot covers, my brain does not fill it in. Whatever my hand covers, my brain does fill in, not at the level of the image (I don't confabulate an image of their face), but at some higher level. I know in both cases that they have a head. But at some level below knowing, the experience in one case is that they have no head, and in the other, that they do. My knowledge that they have a head does nothing to alter the sensation that they do not.
It is quite disconcerting to look at myself in a mirror and see half my head missing.
Those who have never had such hallucinations might try experimenting with their ordinary blind spots. I am not sure it will be the same. The brain has had more practice filling those in, and does not have to contend with the jaggies.
From this I cannot draw out much in the way of conclusions about vision and the brain, but it provides an interesting experience of the separation between two levels of abstraction [LW · GW]. When we look at the world and see comprehensible objects in it, our brain did that before it ever came into our subjective experience. When the mechanism develops a fault, it presents conclusions that we know to be false, yet still experience.
This presumably applies to all our senses, including that of introspection.
6 comments
comment by Steven Byrnes (steve2152) · 2021-05-08T18:58:37.295Z · LW(p) · GW(p)
There's a nice brain-like vision model here, and it even parses optical illusions in the same way people do. As far as I understand it, if there's a sudden change of, um, color, or whatever it is for migraine aura, it has to be (1) an edge of a thing, (2) an edge of an occluding thing, (3) a change of texture within a single surface (e.g. wallpaper). When you block a head with your hand, your visual system obviously and correctly parses it as (2). But here there's no occluder model that fits all the visual input data—maybe because some of the neurons that would offer evidence of an occluding shape are messed up and not sending those signals. So (2) doesn't fit the data. And there's no single-surface theory that fits all the visual input data either, so (3) gets thrown out too. So eventually the visual system settles on (1) as the best (least bad) parsing of the scene.
I dunno, something like that, I guess.
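To make the "least bad parse wins" idea concrete, here's a toy sketch (not the model linked above, just an illustration; the hypothesis names and scores are invented):
```python
# Toy illustration of "least-bad parse wins": score three candidate
# interpretations of a sudden local change in the visual input and
# keep whichever one the evidence supports best. All numbers are invented.

def parse_sudden_change(evidence):
    """evidence maps feature names to how strongly they are present (0..1)."""
    hypotheses = {
        "object_edge":    evidence.get("sharp_boundary", 0.0),
        "occluder_edge":  evidence.get("consistent_occluder_shape", 0.0),
        "texture_change": evidence.get("single_surface_continuity", 0.0),
    }
    # The visual system settles on the hypothesis with the best (least bad) fit.
    return max(hypotheses, key=hypotheses.get)

# Hand over the head: the occluder hypothesis fits the data well.
print(parse_sudden_change({"sharp_boundary": 0.4,
                           "consistent_occluder_shape": 0.9}))  # occluder_edge

# Migraine aura: no coherent occluder or single surface fits, so "object edge"
# wins by default and the head is parsed as simply ending there.
print(parse_sudden_change({"sharp_boundary": 0.3,
                           "consistent_occluder_shape": 0.1,
                           "single_surface_continuity": 0.1}))  # object_edge
```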
↑ comment by abramdemski · 2021-08-16T15:15:43.668Z · LW(p) · GW(p)
I would conjecture that if we directly stimulated the retina to reproduce the shapes and colors of migraine auras, the brain would correctly see it as an occlusion, and thus correctly infer the existence of occluded heads etc.
My hypothesis is that the migraine aura is actually injected at an intermediate abstraction level. (After all, it's not something happening on the physical retina, right?) It therefore interferes with the object representations themselves, rather than providing new low-level data for the brain to interpret normally.
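A toy way to picture the conjecture (purely schematic; the two "stages" below are invented stand-ins, not real visual areas, and the labels are made up):
```python
# Schematic two-stage pipeline: a disturbance added at the input can still be
# explained away as an occluder, while the same disturbance injected into the
# intermediate representation corrupts the object description itself.

def low_level(image, aura_at_retina=False):
    features = {"head_present": image["head"], "occluder_present": False}
    if aura_at_retina:
        # New low-level data: something appears to sit in front of the scene.
        features["occluder_present"] = True
    return features

def high_level(features, aura_at_intermediate=False):
    if aura_at_intermediate:
        # The object representation itself is damaged: the head is neither
        # seen nor inferred -- the scene is parsed as if it simply ends there.
        return "headless person"
    if features["occluder_present"]:
        return "person with head, partly hidden"
    return "person with head" if features["head_present"] else "headless person"

scene = {"head": True}
print(high_level(low_level(scene, aura_at_retina=True)))        # partly hidden
print(high_level(low_level(scene), aura_at_intermediate=True))  # headless person
```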
↑ comment by Steven Byrnes (steve2152) · 2021-08-16T15:54:33.414Z · LW(p) · GW(p)
I agree with that, as long as "intermediate abstraction level" is sufficiently broad so as to also include V1. When I wrote "some of the neurons...are messed up and not sending those signals" I was mostly imagining neurons in V1. Admittedly it could also be neurons in V2 or something. I dunno. I agree with you that it's unlikely to originate before V1, i.e. retina or LGN (=the thalamus waystation between retina and V1). (Not having thought too hard about it.)
(My vague impression is that the lateral connections within V1 are doing a lot of the work in finding object boundaries.)
comment by Measure · 2021-05-08T17:35:44.496Z · LW(p) · GW(p)
Would you say the experience is similar to looking at an optical illusion and "experiencing" the illusory effect while "knowing" it doesn't match reality?
↑ comment by Richard_Kennaway · 2021-05-08T17:54:07.722Z · LW(p) · GW(p)
Something like that. The twisted cord illusion is an especially strong example.