Connectome-specific harmonic waves and meditation
post by Gordon Seidoh Worley (gworley) · 2019-09-30T18:08:45.403Z · LW · GW · 1 comment
TL;DR: meditation is a process for altering the harmonics of brainwaves to produce good brain states.
Pulling together a few threads:
- The theory of connectome-specific harmonic waves (CSHW) suggests that oscillations in neural activity (brainwaves) interact to form harmonics (correlated brainwaves that form stable patterns across the connectome), which may explain certain aspects of brain activity and thereby mental activity (see the first sketch after this list).
- We can of course look at what happens with these harmonics in people on LSD.
- Probably related: people "hallucinating" all the time
- Meditation and psychedelics do similar things to the brain.
- Psychedelics can cause people to perform a process we might call "annealing" by analogy to the metallurgical process ("normalizing" would also work as a metaphor, as would "quenching + tempering" in the case of bad trips/trauma with aftercare). I've also been using the annealing metaphor to talk about what happens during meditation (no links, all in person), as have others.
- Related metaphor: meditation as reinforcement learning [LW · GW] (cf. simulated annealing; a toy annealing loop is sketched after this list)
- "Good" is grounded as "minimization of prediction error". Explanation:
- Neurons form hierarchical control systems.
- cf. a grounding of phenomenological idealism using control systems, and its implications
- cf. the hierarchy is recursive and reasserts itself at higher levels of organization
- Those control systems aim to minimize prediction error via negative feedback (homeostatic) loops.
- The positive signal ("good") of the control system occurs when prediction error is minimized.
- By extension "bad" is when prediction error is maximized and "neutral" is when the threshold for signaling good or bad is not crossed (having a neutral signal reduces jitter).
- One way to reduce prediction error is with symmetry, since symmetric patterns are easier to predict. This is why we usually think symmetry is good.
- Meditation, and anything that sets up harmonic neuronal oscillation, makes brain activity more symmetric, hence better or good.
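To make the CSHW item concrete: connectome harmonics are, roughly, the eigenmodes of the graph Laplacian of the structural connectome, and a snapshot of activity can be decomposed into an energy per mode. Below is a minimal sketch using a toy random "connectome"; the matrix, region count, and variable names are illustrative only, not the actual pipeline from the CSHW papers.

```python
import numpy as np

# Toy "connectome": a symmetric weighted adjacency matrix over n regions.
rng = np.random.default_rng(0)
n = 64
A = rng.random((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

# Connectome harmonics are the eigenvectors of the graph Laplacian L = D - A,
# ordered from smooth, large-scale modes (small eigenvalues) to rough ones.
D = np.diag(A.sum(axis=1))
L = D - A
eigvals, harmonics = np.linalg.eigh(L)  # columns are the harmonic modes

# A snapshot of activity (one value per region) decomposes into a weighted sum
# of harmonics; squared weights give the "energy" carried by each mode.
activity = rng.standard_normal(n)
weights = harmonics.T @ activity
energy = weights ** 2
print(energy[:5])  # energy in the five smoothest modes
```

The claims above about meditation and psychedelics are then claims about how this distribution of energy across modes shifts.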
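For the annealing metaphor, it may help to see the optimization version: simulated annealing accepts some uphill moves while the "temperature" is high and fewer as the system cools, which is the sense in which a psychedelic or a retreat might shake loose and then re-settle mental configurations. A minimal sketch with a made-up 1-D energy landscape (all names and parameters here are illustrative):

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t_start=1.0, t_end=0.01, steps=10_000):
    """Minimize `energy` by accepting worse states with a probability that
    shrinks as the temperature is lowered (geometric cooling schedule)."""
    x, e = x0, energy(x0)
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)
        candidate = neighbor(x)
        e_new = energy(candidate)
        # Always accept improvements; sometimes accept regressions early on,
        # which lets the search escape local minima before it "cools" and settles.
        if e_new < e or random.random() < math.exp((e - e_new) / t):
            x, e = candidate, e_new
    return x, e

# Toy usage: find the global minimum of a bumpy landscape.
bumpy = lambda x: x * x + 3 * math.sin(5 * x)
step = lambda x: x + random.gauss(0, 0.5)
print(simulated_annealing(bumpy, step, x0=4.0))
```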
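The good/bad/neutral grounding can also be made concrete with a single negative-feedback loop. This is a toy sketch only: the thresholds, gain, and function names are arbitrary choices to illustrate the thresholded valence signal described above, not a claim about neural implementation.

```python
def valence_signal(prediction, observation, good_threshold=0.1, bad_threshold=1.0):
    """Report 'good' when prediction error is small, 'bad' when it is large,
    and 'neutral' in the dead band between the thresholds (reducing jitter)."""
    error = abs(prediction - observation)
    if error <= good_threshold:
        return "good", error
    if error >= bad_threshold:
        return "bad", error
    return "neutral", error

# Minimal control loop: each step the controller nudges its prediction toward
# the observation (negative feedback), so the error, and the signal, improve.
prediction, gain = 0.0, 0.7
for observation in [2.0] * 5:
    signal, error = valence_signal(prediction, observation)
    print(f"error={error:.3f} signal={signal}")
    prediction += gain * (observation - prediction)
```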
I think we can infer a lot from these observations, but I'll leave those for another post.
1 comment
comment by Vaniver · 2019-09-30T22:35:46.455Z · LW(p) · GW(p)
I talked with Mike Johnson a bunch about this at a recent SSC meetup, and think that CSHW are a cool way to look at brain activity but that associating them directly with valence of experience (the simple claim "harmonic CSHW ≡ good") has a bunch of empirical consequences that seem probably false to me. (This is a good thing, in many respects, because it points at a series of experiments that might convince one of us!)
An observation is that I think this is a 'high level' or 'medium level' description of what's going on in the brain, in a way that makes it sort of difficult to buy as a target. If I think about meditation as something like having one thing on the stack [LW · GW], or as examining your code to refactor it, or directing your attention at itself, then I can see what's going on in a somewhat clear way. And it's easy to see how having one thing on the stack might increase the harmony (defined as a statistical property of a distribution of energies in the CSHW), but the idea that the goal was to increase the harmony and having one thing on the stack just happens to do so seems unsupported.
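For concreteness, one hypothetical way to cash out "harmony as a statistical property of a distribution of energies in the CSHW" is as a concentration measure over mode energies, e.g. negative entropy. This particular statistic is assumed here for illustration; it is not necessarily the definition Vaniver or the CSHW literature has in mind.

```python
import numpy as np

def harmony(energies):
    """Hypothetical harmony score: negative Shannon entropy of the normalized
    energy distribution, so energy concentrated in a few connectome harmonics
    scores higher than energy spread evenly across all of them."""
    p = np.asarray(energies, dtype=float)
    p = p / p.sum()
    return (p * np.log(p + 1e-12)).sum()  # sum of p*log(p) = -entropy

concentrated = [10.0, 0.1, 0.1, 0.1]  # most energy in one mode
diffuse = [2.6, 2.6, 2.6, 2.6]        # energy spread evenly
print(harmony(concentrated) > harmony(diffuse))  # True
```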
I do like that this has an answer for 'dark room' objections that seems superior to the normal 'priors' story for Friston-style approaches, in that you're trying to maximize a property (tho you still smuggle in the goals through the arrangement of the connectome, but that's fine because they had to come from somewhere).
> Meditation, and anything that sets up harmonic neuronal oscillation, makes brain activity more symmetric, hence better or good.
I think this leap is bigger than it might seem, because it's not clear that you have control loops on the statistical properties of your brain as a whole. It reads something like a type error that's equivocating between individual loops and the properties of many loops.
Now, it may turn out that 'simplicity' is the right story here, where harmony / error-minimization / etc. are just very simple things to build and so basically every level of the brain operates on that sort of principle. In a draft of the previous paragraph I had a line that said "well, but it's not obvious that there's a control loop operating on the control loops that has this sort of harmony as an observation" but then I thought "well, you could imagine this is basically what consciousness / the attentional system is doing, or that this is true for boring physical reasons where the loops are all swimming in the same soup and prefer synchronization."
But this is where we need to flesh out some implementation details and see if it makes the right sorts of predictions. In particular, I think a 'multiple drives' model makes sense, and lines up easily with the hierarchical control story, but I didn't see a simple way that it also lines up with the harmony story. (In particular, I think lots of internal conflicts make sense as two drives fighting over the same steering wheel, but a 'maximize harmony' story needs to have really strong boundary conditions to create the same sorts of conflicts. Now, really strong boundary conditions is pretty sensible, but still makes it sort of weird as a theory of long-term arrangement, because you should expect the influence of the boundary conditions to be something the long-term arrangement can adjust.)