A Primer on the Symmetry Theory of Valence
post by Michael Edward Johnson (michael-edward-johnson) · 2021-09-06T11:39:06.223Z · LW · GW · 14 comments
Crossposted from opentheory.net
STV is Qualia Research Institute's candidate for a universal theory of valence, first proposed in Principia Qualia (2016). The following is a brief discussion of why existing theories are unsatisfying, what STV says, and key milestones so far.
I. Suffering is a puzzle
We know suffering when we feel it — but what is it? What would a satisfying answer for this even look like?
The psychological default model of suffering is “suffering is caused by not getting what you want.” This is the model that evolution has primed us toward. Empirically, it appears false (1)(2).
The Buddhist critique suggests that most suffering actually comes from holding this as our model of suffering. My co-founder Romeo Stevens suggests that we create a huge amount of unpleasantness by identifying with the sensations we want and making a commitment to ‘dukkha’ ourselves until we get them. When this fails to produce happiness, we take our failure as evidence we simply need to be more skillful in controlling our sensations, to work harder to get what we want, to suffer more until we reach our goal — whereas in reality there is no reasonable way we can force our sensations to be “stable, controllable, and satisfying” all the time. As Romeo puts it, “The mind is like a child that thinks that if it just finds the right flavor of cake it can live off of it with no stomach aches or other negative results.”
Buddhism itself is a brilliant internal psychology of suffering (1)(2), but has strict limits: it’s dogmatically silent on the influence of external factors on suffering, such as health, relationships, or anything having to do with the brain.
The Aristotelian model of suffering & well-being identifies a set of baseline conditions and virtues for human happiness, with suffering being due to deviations from these conditions. Modern psychology and psychiatry are tacitly built on this model, with one popular version being Seligman’s PERMA Model: P – Positive Emotion; E – Engagement; R – Relationships; M – Meaning; A – Accomplishments. Chris Kresser and other ‘holistic medicine’ practitioners are synthesizing what I would call ‘Paleo Psychology’, which suggests that we should look at our evolutionary history to understand the conditions for human happiness, with a special focus on nutrition, connection, sleep, and stress.
I have a deep affection for these ways of thinking and find them uncannily effective at debugging hedonic problems. But they’re not proper theories of mind, and say little about the underlying metaphysics or variation of internal experience.
Neurophysiological models of suffering try to dig into the computational utility and underlying biology of suffering. Bright spots include Friston & Seth, Panksepp, Joffily, and Eldar on emotional states as normative markers of momentum (i.e. whether you should keep doing what you're doing, or switch things up), and Wager, Tracey, Kucyi, Osteen, and others on the neural correlates of pain. These approaches are clearly important parts of the story, but tend to be descriptive rather than predictive, either focusing on 'correlation collecting' or telling a story without grounding that story in mechanism.
QRI thinks not having a good answer to the question of suffering is a core bottleneck for neuroscience, drug development, and next-generation mental health treatments, as well as for philosophical questions about the future direction of civilization. We think this question is also much more tractable than people realize: there are trillion-dollar bills on the sidewalk, waiting to be picked up if we just actually try.
II. QRI’s model of suffering – history & roadmap
What does “actually trying” to solve suffering look like? I can share what we’ve done, what we’re doing, and our future directions.
QRI.2016: We released the world’s first crisp formalism for pain and pleasure: the Symmetry Theory of Valence (STV)
QRI had a long gestation period as we explored various existing answers and identified their inadequacies. Things started to ‘gel’ as we identified and collected the core research lineages that any fundamentally satisfying answer must engage with.
A key piece of the puzzle for me was Integrated Information Theory (IIT), the first attempt at a formal bridge between phenomenology and causal emergence (Tononi et al. 2004, 2008, 2012). The goal of IIT is to create a mathematical object ‘isomorphic to’ a system’s phenomenology — that is to say, to create a perfect mathematical representation of what it feels like to be something. If it’s possible to create such a mathematical representation of an experience, then how pleasant or unpleasant the experience is should be ‘baked into’ this representation somehow.
In 2016 I introduced the Symmetry Theory of Valence (STV), built on the expectation that, although the details of IIT may not yet be correct, it has the correct goal — to create a mathematical formalism for consciousness. STV proposes that, given such a mathematical representation of an experience, the symmetry of this representation will encode how pleasant the experience is (Johnson 2016). STV is a formal, causal expression of the sentiment that “suffering is lack of harmony in the mind”, and it allowed us to make philosophically clear assertions such as:
- X causes suffering because it creates dissonance, resistance, turbulence in the brain/mind.
- If there is dissonance in the brain, there is suffering; if there is suffering, there is dissonance in the brain. Always.
This also let us begin to pose first-principles, conceptual-level models for affective mechanics: e.g., ‘pleasure centers’ function as pleasure centers insofar as they act as tuning knobs for harmony in the brain.
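As a toy illustration of what ‘symmetry as invariance under transforms’ can mean for a mathematical object (purely illustrative — this is not QRI's proposed metric, and the small graphs below merely stand in for mathematical representations of experiences), one can count the transformations that leave a small weighted graph unchanged:

```python
import numpy as np
from itertools import permutations

def symmetry_score(adjacency):
    """Count automorphisms of a small graph: node permutations that
    leave the (weighted) adjacency matrix unchanged. A crude stand-in
    for 'size of the symmetry group' of a representation."""
    n = len(adjacency)
    return sum(
        np.array_equal(adjacency[np.ix_(p, p)], adjacency)
        for p in map(list, permutations(range(n)))
    )

# A 6-node ring is highly symmetric (12 automorphisms: 6 rotations plus
# 6 reflections); perturbing a single connection collapses most of them.
ring = np.zeros((6, 6))
for i in range(6):
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1.0
perturbed = ring.copy()
perturbed[0, 1] = perturbed[1, 0] = 0.5  # one 'dissonant' connection

print(symmetry_score(ring))       # 12
print(symmetry_score(perturbed))  # 2 (identity + one reflection)
```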
QRI.2017: We figured out how to apply our formalism to brains in an elegant way: CDNS
We had a formal hypothesis that harmony in the brain feels good, and dissonance feels bad. But how do we measure harmony and dissonance, given how noisy most forms of neuroimaging are?
An external researcher, Selen Atasoy, had the insight to use resonance as a proxy for characteristic activity. Neural activity may often look random — a confusing cacophony — but if we look at activity as the sum of all natural resonances of a system, we can say a great deal about how the system works, and which configuration the system is currently in, with a few simple equations. Atasoy’s contribution here was connectome-specific harmonic waves (CSHW), an experimental method for doing this with fMRI (Atasoy et al. 2016; 2017a; 2017b). This is similar to how mashing keys on a piano might produce a confusing mix of sounds, but by applying harmonic decomposition to this sound we can calculate which notes must have been played to produce it. There are many ways to decompose brain activity into various parameters or dimensions; CSHW’s strength is that it grounds these dimensions in physical mechanism: resonance within the connectome. (See also work by Helmholtz, Tesla, and Lehar.)
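To make the decomposition step concrete: Atasoy's connectome harmonics are eigenmodes of the graph Laplacian of the structural connectome, and an activity snapshot is projected onto them to obtain a power per harmonic. Below is a minimal sketch of that idea; the toy ring 'connectome' and the function names are illustrative, not QRI's actual pipeline:

```python
import numpy as np

def connectome_harmonics(adjacency):
    """Eigenmodes of the graph Laplacian L = D - A. Atasoy et al.
    compute these from the structural connectome; `adjacency` here is
    any symmetric connectivity matrix."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigenvalues, eigenmodes = np.linalg.eigh(laplacian)
    return eigenvalues, eigenmodes  # columns = harmonics, low to high

def harmonic_power(activity, eigenmodes):
    """Project an activity snapshot onto the harmonics, yielding the
    power-weighted list of harmonics that the CDNS step (below) consumes."""
    return (eigenmodes.T @ activity) ** 2

# Toy example: an 8-region ring 'connectome', activity dominated by one mode.
n = 8
adjacency = np.zeros((n, n))
for i in range(n):
    adjacency[i, (i + 1) % n] = adjacency[(i + 1) % n, i] = 1.0
_, modes = connectome_harmonics(adjacency)
rng = np.random.default_rng(0)
activity = modes[:, 1] + 0.1 * rng.standard_normal(n)
print(harmonic_power(activity, modes))  # power concentrates in mode 1
```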
QRI built our ‘Consonance Dissonance Noise Signature’ (CDNS) method around combining STV with Atasoy’s work: my co-founder Andrés Gomez Emilsson had the key insight that if Atasoy’s method can give us a power-weighted list of harmonics in the brain, we can take this list and do a pairwise ‘CDNS’ analysis between harmonics and sum the result to figure out how much total consonance, dissonance, and noise a brain has (Gomez Emilsson 2017). Consonance is roughly equivalent to symmetry (invariance under transforms) in the time domain, and so the consonance between these harmonics should be a reasonable measure for the ‘symmetry’ of STV. This process offers a clean, empirical measure for how much harmony (and lack thereof) there is in a mind, structured in a way that lets us be largely agnostic about the precise physical substrate of consciousness.
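A sketch of this pairwise computation: the published CDNS work builds on Sethares' sensory dissonance curve from psychoacoustics, so the constants below are Sethares' (1993), and the pair-and-sum structure shows the general shape of the computation rather than QRI's exact implementation (which also separates out a noise term):

```python
import numpy as np

def pairwise_dissonance(f1, f2, a1, a2):
    """Sethares-style sensory dissonance of two partials with
    frequencies f1, f2 (Hz) and amplitudes a1, a2. Constants from
    Sethares (1993); QRI's exact parameters may differ."""
    b1, b2 = 3.5, 5.75
    d_star, s1, s2 = 0.24, 0.021, 19.0
    s = d_star / (s1 * min(f1, f2) + s2)  # critical-bandwidth scaling
    diff = abs(f2 - f1)
    return a1 * a2 * (np.exp(-b1 * s * diff) - np.exp(-b2 * s * diff))

def total_dissonance(freqs, powers):
    """Sum dissonance over all pairs of power-weighted harmonics:
    the 'D' of a Consonance-Dissonance-Noise Signature."""
    total = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            total += pairwise_dissonance(freqs[i], freqs[j],
                                         powers[i], powers[j])
    return total

# An octave (consonant) scores far lower than a minor second (dissonant):
print(total_dissonance([220.0, 440.0], [1.0, 1.0]))  # ~0.0004
print(total_dissonance([220.0, 233.1], [1.0, 1.0]))  # ~0.16
```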
With this, we had a full empirical theory of suffering.
QRI.2018: We invested in the CSHW paradigm and built ‘trading material’ for collaborations
We had our theory, and tried to get the data to test it. We decided that if STV is right, it should let us build better theory, and this should open doors for collaboration. This led us through a detailed exploration of the implications of CSHW (Johnson 2018a), and original work on the neuroscience of meditation (Johnson 2018b) and the phenomenology of time (Gomez Emilsson 2018).
QRI.2019: We synthesized a new neuroscience paradigm (Neural Annealing)
2019 marked a watershed for us in a number of ways. On the theory side, we realized there are many approaches to doing systems neuroscience, but only a few really good ones. We decided the best neuroscience research lineages were using various flavors of self-organizing systems theory to explain complex phenomena with very simple assumptions. Moreover, there were particularly elegant theories from Atasoy, Carhart-Harris, and Friston, all doing very similar things, just on different levels (physical, computational, energetic). So we combined these theories together into Neural Annealing (Johnson 2019), a unified theory of music, meditation, psychedelics, trauma, and emotional updating:
Annealing involves heating a metal above its recrystallization temperature, keeping it there for long enough for the microstructure of the metal to reach equilibrium, then slowly cooling it down, letting new patterns crystallize. This releases the internal stresses of the material, and is often used to restore ductility (plasticity and toughness) in metals that have been ‘cold-worked’ and have become very hard and brittle — in a sense, annealing is a ‘reset switch’ which allows metals to go back to a more pristine, natural state after being bent or stressed. I suspect this is a useful metaphor for brains, in that they can become hard and brittle over time with a build-up of internal stresses, and these stresses can be released by periodically entering high-energy states where a more natural neural microstructure can reemerge.
This synthesis allowed us to start discussing not only which brain states are pleasant, but what processes are healing.
QRI.2020: We raised money, built out a full neuroimaging stack, and expanded the organization
In 2020 the QRI technical analysis pipeline became real, and we became one of the few neuroscience groups in the world able to carry out a full CSHW analysis in-house, thanks in particular to hard work by Quintin Frerichs and Patrick Taylor. This has led to partnerships with King’s College London, Imperial College London, National Institute of Mental Health of the Czech Republic, Emergent Phenomenology Research Consortium, as well as many things in the pipeline. 2020 and early 2021 also saw us onboard some fantastic talent and advisors.
III. What’s next?
We’re actively working on improving STV in three areas:
- Finding a precise physical formalism for consciousness. Asserting that symmetry in the mathematical representation of an experience corresponds with the valence of the experience is a huge leap in clarity over other theories. But we also need to be able to formally generate this mathematical representation. I’ve argued previously against functionalism and for a physicalist approach to consciousness (partially echoing Aaronson), and Barrett, Tegmark, and McFadden offer notable arguments suggesting the electromagnetic field may be the physical seat of consciousness because it’s the only field that can support sufficient complexity. We believe determining a physical formalism for consciousness is intimately tied to the binding problem, and we have conjectures I’m excited to test.
- Building better neuroscience proxies for STV. We’ve built our empirical predictions around the expectation that consonance within a brain’s connectome-specific harmonic waves (CSHW) will be a good proxy for the symmetry of that mind’s formal mathematical representation. We think this is a best-in-the-world compression for valence. But CSHW rests on a chain of inferences about neuroimaging and brain structure, and using it to discuss consciousness rests on further inferences still. We think there’s room for improvement.
- Building neurotech that can help people. The team may be getting tired of hearing me say this, but: better philosophy should lead to better neuroscience, and better neuroscience should lead to better neurotech. STV gives us a rich set of threads to follow for clear neurofeedback targets, which should allow for much more effective closed-loop systems, and I am personally extraordinarily excited about the creation of technologies that allow people to “update toward wholesome”, with the neuroscience of meditation as a model.
14 comments
Comments sorted by top scores.
comment by Steven Byrnes (steve2152) · 2021-09-06T18:07:15.223Z · LW(p) · GW(p)
I'll preface this by saying that I haven't spent much time engaging with your material (it's been on my to-do list for a very long time), and could well be misunderstanding things, and that I have great respect for what you're trying to do. So you and everyone can feel free to ignore this, but here I go anyway.
OK, maybe the most basic reason that I'm skeptical of your STV stuff is that I'm going in expecting a, um, computational theory of valence, suffering, etc. As in, the brain has all those trillions of synapses and intricate circuitry in order to do evolutionary-fitness-improving calculations, and suffering is part of those calculations (e.g. other things equal, I'd rather not suffer, and I make decisions accordingly, and this presumably has helped my ancestors to survive and have more viable children).
So let's say we're sitting together at a computer, and we're running a Super Mario executable on an emulator, and we're watching the bits in the processor's SRAM. You tell me: "Take the bits in the SRAM register, and take the Fourier transform, and look at the spectrum (≈ absolute value of the Fourier components). If most of the spectral weight is in long-wavelength components, e.g. the bits are "11111000111100000000...", then Mario is doing really well in the game. If most of the spectral weight is in the short-wavelength components, e.g. the bits are "101010101101010", then Mario is doing poorly in the game. That's my theory!"
I would say "Ummm, I mean, I guess that's possible. But if that's true at all, it's not an explanation, it's a random coincidence."
(This isn't a perfect analogy, just trying to gesture at where I'm coming from right now.)
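(For concreteness, here's a runnable toy of the spectral-weight measure from the thought experiment above; the `cutoff` splitting 'long-wavelength' from 'short-wavelength' components is an arbitrary choice:)

```python
import numpy as np

def low_freq_weight(bits, cutoff=4):
    """Fraction of spectral weight in the lowest-frequency Fourier
    components of a bit string (mean/DC removed first)."""
    x = np.array([int(b) for b in bits], dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return spectrum[:cutoff].sum() / spectrum.sum()

print(low_freq_weight("11111000111100000000"))  # long runs: weight mostly low-frequency
print(low_freq_weight("101010101101010"))       # alternating bits: mostly high-frequency
```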
So that's the real reason I don't believe in STV—it just looks wrong to me, in the same way that Mario's progress should not look like certain types of large-scale structure in SRAM bits.
I want a better argument than that though. So here are a few more specific things:
(1) waves and symmetries don't carry many bits of information. If you think valence and suffering are fundamentally few-dimensional, maybe that doesn't bother you; but I think it's at least possible for people to know whether they're suffering from arm pain or finger pain or air-hunger or guilt or whatever. I guess I raised this issue in an offhand comment a couple years ago [LW(p) · GW(p)], and lsusr responded [LW(p) · GW(p)], and then I apparently dropped out of the conversation, I guess I must have gotten busy or something, hmm I guess I should read that. :-/
(2) From the outside, it's easy to look at an fMRI or whatever and talk about its harmonic decomposition and symmetries. But from the perspective of any one neuron, that information is awfully hard to access. It's not impossible, but I think you'd need the neuron to have a bunch of inputs from across the brain hooked into complicated timing circuits etc. My starting point, as I mentioned, is that suffering causes behavioral changes (including self-reports, trying not to suffer, etc.), so there has to be a way for the "am I suffering" information to impact specific brain computations, and I don't know what that mechanism is in STV. (In the Mario analogy, if you just look at one SRAM bit, or even a few bits, you get almost no information about the spectrum of the whole SRAM register.) If "suffering" was a particular signal carried by a particular neurotransmitter, for example, we wouldn't have that problem, we just take that signal and wire it to whatever circuits need to be modulated by the presence/absence of suffering. So theories like that strike me as more plausible.
(3) Conversely, I'm confused at how you would tell a story where getting tortured (for example) leads to suffering. This is just the opposite of the previous one: Just as a brain-wide harmonic decomposition can't have a straightforward and systematic impact on a specific neural signal, likewise a specific neural signal can't have a straightforward and systematic impact on a brain-wide harmonic decomposition, as far as I can tell.
(4) I don't have a particularly well-formed alternative theory to STV, but all the most intriguing ideas that I've played around with so far that seem to have something to do with the nature of valence and suffering (e.g. here [LW · GW] , here [LW · GW] , various other things I haven't written up) look wildly different from STV. Instead they tend to involve certain signals in the insular cortex and reticular activating system and those signals have certain effects on decisionmaking circuits, blah blah blah.
↑ comment by Michael Edward Johnson (michael-edward-johnson) · 2021-09-06T21:08:51.576Z · LW(p) · GW(p)
Hi Steven, amazing comment, thank you. I’ll try to address your points in order.
0. I get your Mario example, and totally agree within that context; however, this conclusion may or may not transfer to brains, depending e.g. on how they implement utility functions. If the brain is a ‘harmonic computer’ then it may be doing e.g. gradient descent in such a way that the state of its utility function can be inferred from its large-scale structure.
1. On this question I’ll gracefully punt to lsusr’s comment :) I endorse both his comment and framing. I’d also offer that dissonance is in an important sense ‘directional’: if you have a symmetrical network and something breaks its symmetry, the new network pattern is not symmetrical, and this break in symmetry allows you to infer where the ‘damage’ is. An analogy: a spider’s web starts out highly symmetrical, but its vibrations become asymmetrical when a fly bumbles along and gets stuck. The spider can infer where the fly is on the web based on the particular ‘flavor’ of the new vibrations.
2. Complex question. First I’d say that STV as technically stated is a metaphysical claim, not a claim about brain dynamics. But I don’t want to hide behind this; I think your question deserves an answer. This perhaps touches on lsusr’s comment, but I’d add that if the brain does tend to follow a symmetry gradient (following e.g. Smolensky’s work on computational harmony), it likely does so in a fractal way. It will have tiny regions which follow a local symmetry gradient, it will have bigger regions which span many circuits where a larger symmetry gradient will form, and it will have brain-wide dynamics which follow a global symmetry gradient. How exactly these different scales of gradients interact is a very non-trivial thing, but I think it gives at least a hint as to how information might travel from large scales to small, and from small to large.
3. I think my answer to (2) also addresses this.
4. I think, essentially, that we can both be correct here. STV is intended to be an implementational account of valence; as we abstract away details of implementation, other frames may become relatively more useful. However, I do think that e.g. talk of “pleasure centers” involves potential infinite regress: what ‘makes’ something a pleasure center? A strength of STV is that it fundamentally defines an identity relationship.
I hope that helps! Definitely would recommend lsusr’s comments, and just want to thank you again for your careful comment.
↑ comment by Steven Byrnes (steve2152) · 2021-09-08T17:01:56.830Z · LW(p) · GW(p)
I'm confused about your use of the term "symmetry" (and even more confused what a "symmetry gradient" is). For example, if I put a front-to-back mirror into my brain, it would reflect the frontal lobe into the occipital lobe—that's not going to be symmetric. The brain isn't an undifferentiated blob. Different neurons are connected to different things, in an information-bearing way.
You don't define "symmetry" here but mention three ideas:

(1) "The size of the mathematical object’s symmetry group". Well, I am aware of zero nontrivial symmetry transformations of the brain, and zero nontrivial symmetry transformations of qualia. Can you name any? "If my mind is currently all-consumed by the thought of cucumber sandwiches, then my current qualia space is symmetric under the transformation that swaps the concepts of rain and snow"??? :-P

(2) "Compressibility". In the brain context, I would call that "redundancy" not "symmetry". I absolutely believe that the brain stores information in ways that involve heavy redundancy; if one neuron dies you don't suddenly forget your name. I think brains, just like hard drives, can make tradeoffs between capacity and redundancy in their information encoding mechanisms. I don't see any connection between that and valence. Or maybe you're not thinking about neurons but instead imagining the compressibility of qualia? I dunno, if I can't think about anything besides how much my toe hurts right now, that's negative valence, but it's also low information content / high compressibility, right?

(3) "practical approximations for finding symmetry in graphs…adapted for the precise structure of Qualia space (a metric space?)". If Qualia space isn't a graph, I'm not sure why you're bringing up graphs. Can you walk through an example, even an intuitive one? I really don't understand where you're coming from here.
I skimmed this thing by Smolensky and it struck me as quite unrelated to anything you're talking about. I read it as saying that cortical inference involves certain types of low-level algorithms that have stable attractor states (as do energy-based models, PGMs, Hopfield networks, etc.). So if you try to imagine a "stationary falling rock" you can't, because the different pieces are contradicting each other, but if you try to imagine a "purple tree" you can pretty quickly come up with a self-consistent mental image. Smolensky (poetically) uses the term "harmonious" for what I would call a "stable attractor" or "self-consistent configuration" in the model space. (Steve Grossberg would call them "resonant".) Again I don't see any relation between that and CSHW or STV. Like, when I try to imagine a "stationary falling rock", I can't, but that doesn't lead to me suffering—on the contrary, it's kinda fun. The opposite of Smolensky's "harmony" would be closer to confusion than suffering, in my book, and comes with no straightforward association to valence. Moreover, I believe that the attractor dynamics in question, the stuff Smolensky is (I think) talking about, are happening in the cortex and thalamus but not other parts of the brain—but those other parts of the brain are I believe clearly involved in suffering (e.g. lateral habenula, parabrachial nucleus, etc.).
(Also, not to gripe, but if you don't yet have a precise definition of "symmetry", then I might suggest that you not describe STV as a "crisp formalism". I normally think "formalism" ≈ "formal" ≈ "the things you're talking about have precise unambiguous definitions". Just my opinion.)
> potential infinite regress: what ‘makes’ something a pleasure center?
I would start by just listing a bunch of properties of "pleasure". For example, other things equal, if something is more pleasurable, then I'm more likely to make a decision that results in my doing that thing in the future, or my continuing to do that thing if I'm already doing it, or my doing it again if it was in the past. Then if I found a "center" that causes all those properties to happen (via comprehensible, causal mechanisms), I would feel pretty good calling it a "pleasure center". (I'm not sure there is such a "center".)
(FWIW, I think that "pleasure", like "suffering" etc., is a learned concept with contextual and social associations, and therefore won't necessarily exactly correspond to a natural category of processes in the brain.)
Unrelated, but your documents bring up IIT sometimes; I found this blog post helpful in coming to the conclusion that IIT is just a bunch of baloney. :)
↑ comment by Michael Edward Johnson (michael-edward-johnson) · 2021-09-11T00:22:34.928Z · LW(p) · GW(p)
Hi Steven,
This is a great comment and I hope I can do it justice (took an overnight bus and am somewhat sleep-deprived).
First I’d say that neither we nor anyone has a full theory of consciousness. I.e. we’re not at the point where we can look at a brain, and derive an exact mathematical representation of what it’s feeling. I would suggest thinking of STV as a piece of this future full theory of consciousness, which I’ve tried to optimize for compatibility by remaining agnostic about certain details.
One such detail is the state space: if we knew the mathematical space consciousness ‘lives in’, we could zero in on symmetry metrics optimized for this space. Tononi’s IIT for instance suggests it’s a vector space — but I think it would be a mistake to assume IIT is right about this. Graphs assume less structure than vector spaces, so it’s a little safer to speak about symmetry metrics in graphs.
Another ‘move’ motivated by compatibility is STV’s focus on the mathematical representation of phenomenology, rather than on patterns in the brain. STV is not a neuro theory, but a metaphysical one. I.e. assuming that in the future we can construct a full formalism for consciousness, and thus represent a given experience mathematically, the symmetry in this representation will hold an identity relationship with pleasure.
Appreciate the remarks about Smolensky! I think what you said is reasonable and I’ll have to think about how that fits with e.g. CSHW. His emphasis is of course language and neural representation, very different domains.
>(Also, not to gripe, but if you don't yet have a precise definition of "symmetry", then I might suggest that you not describe STV as a "crisp formalism". I normally think "formalism" ≈ "formal" ≈ "the things you're talking about have precise unambiguous definitions". Just my opinion.)
I definitely understand this. On the other hand, STV should basically have zero degrees of freedom once we do have a full formal theory of consciousness. I.e., once we know the state space, have example mathematical representations of phenomenology, have defined the parallels between qualia space and physics, etc., it should be obvious what symmetry metric to use. (My intuition is, we’ll import it directly from physics.) In this sense it is a crisp formalism. However, I get your objection; more precisely, it’s a dependent formalism, dependent upon something that doesn’t yet exist.
>(FWIW, I think that "pleasure", like "suffering" etc., is a learned concept with contextual and social associations, and therefore won't necessarily exactly correspond to a natural category of processes in the brain.)
I think one of the most interesting questions in the universe is whether you’re right, or whether I’m right! :) Definitely hope to figure out good ways of ‘making beliefs pay rent’ here. In general I find the question of “what are the universe’s natural kinds?” to be fascinating.
↑ comment by ZeroFries · 2023-03-21T20:22:15.038Z · LW(p) · GW(p)
> I'm going in expecting a, um, computational theory of valence
Let's contrast that with a physicalist theory of valence, such as the STV.
> So that's the real reason I don't believe in STV—it just looks wrong to me, in the same way that Mario's progress should not look like certain types of large-scale structure in SRAM bits.
Well, since the STV is a physicalist theory, a better analogy might be: a property like the viscosity of a fluid is determined by the overall structure of the fluid.
I'm going to start with your last point, because I think it's the most important.
> Instead they tend to involve certain signals in the insular cortex and reticular activating system and those signals have certain effects on decisionmaking circuits, blah blah blah.
We're not necessarily interested in asking the question "*which* part is causally associated with valence?" but rather the question "*how* does that part actually do it, how is it implemented?". That is, how does the qualia of suffering/pleasure arise at all? How can the qualia itself be causally relevant? If it's mere computation, how does the qualia take on a particular texture, and what role does it play in the algorithm as a texture? At what point in the computation does it arise, how long does it arise for, etc. It leads to the so-called "Hard Problem of Consciousness". If physicalism is true, combined with the fact that we're not philosophical zombies, there must be some physical signature of consciousness.
> (1) waves and symmetries don't carry many bits of information. If you think valence and suffering are fundamentally few-dimensional, maybe that doesn't bother you; but I think it's at least possible for people to know whether they're suffering from arm pain or finger pain or air-hunger or guilt or whatever.
Due to binding, a low-dimensional property can become mixed and imbued with other forms of qualia to form gestalts. Simple building blocks can create complex macro objects. You can still have a lot of information about the location, frequency, and phase of a textural pattern (1), even if it's consonant and thus carries (relatively) less information. That said, at the peak, where you get to fully consonant experiences (2), you do in fact see a loss of information content.
> But from the perspective of any one neuron, that information is awfully hard to access. It's not impossible, but I think you'd need the neuron to have a bunch of inputs from across the brain hooked into complicated timing circuits etc.
If we go back to the viscosity analogy, this would be like asking how a single atom can access information about the structure of the liquid as a whole. Furthermore, top-down causality can emerge under the right conditions in a physical system.
> If "suffering" was a particular signal carried by a particular neurotransmitter, for example, we wouldn't have that problem.
You have the worse problem, though, of showing how qualia can arise from a neurotransmitter, with what textures and with what causal influence beyond the signal itself: if the causality is from the signal alone, why would qualia arise at all? What purpose would it serve in addition to the purpose of the signal?
> Conversely, I'm confused at how you would tell a story where getting tortured (for example) leads to suffering. This is just the opposite of the previous one: Just as a brain-wide harmonic decomposition can't have a straightforward and systematic impact on a specific neural signal, likewise a specific neural signal can't have a straightforward and systematic impact on a brain-wide harmonic decomposition, as far as I can tell.
Brain-wide harmonics exert a top-down influence on individual neurons (e.g. through something like EM field dynamics [3]) and individual neurons collectively create the overall brain-wide harmonics.
1: https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/
2: https://qualiacomputing.com/2021/11/23/the-supreme-state-unconsciousness-classical-enlightenment-from-the-point-of-view-of-valence-structuralism/
3: https://psyarxiv.com/jtng9/
↑ comment by Steven Byrnes (steve2152) · 2023-03-21T21:20:52.760Z · LW(p) · GW(p)
Thanks for your reply.
If you ask me a question about, umm, I’m not sure the exact term, let’s say “3rd-person-observable properties of the physical world that have something to do with the human brain”—questions like “When humans emit self-reports about their own conscious experience, why do they often describe it as having properties A,B,C?” or “When humans move their mouths and say words on the topic of ‘qualia’, why do they often describe it as having properties X,Y,Z?”—then I feel like I’m on pretty firm ground, and that I’m in my comfort zone, and that I’m able to answer such questions, at least in broad outline and to some extent at a pretty gory level of detail. (Some broad-outline ingredients are in my old post here [LW · GW], and I’m open to further discussion as time permits.)
BUT, I feel like that’s probably not the game you want to play here. My guess is that, even if I perfectly nail every one of those “3rd-person” questions above, you would still say that I haven’t even begun to engage with the nature of qualia, that I’m missing the forest for the trees, whatever. (I notice that I’m putting words in your mouth; feel free to disagree.)
If I’m correct so far, then this is a more basic disagreement about the nature of consciousness and how to think about it and learn about it etc. You can see my “wristwatch” discussion here [LW · GW] for basically where I’m coming from. But I’m not too interested in hashing out that disagreement, sorry. For me, it’s vaguely in the same category as arguing with a theology professor about whether God exists (I’m an atheist): My position is “Y’know, I really truly think I’m right about this, but there’s a gazillion pages of technical literature on this topic, and I’ve read practically none of it, and my experience strongly suggests that we’re not going to make any meaningful progress on this disagreement in the amount of time that I’m willing to spend talking about it.” :-P Sorry!
↑ comment by Measure · 2021-09-07T14:02:48.765Z · LW(p) · GW(p)
> Waves and symmetries don't carry many bits of information. If you think valence and suffering are fundamentally few-dimensional, maybe that doesn't bother you; but I think it's at least possible for people to know whether they're suffering from arm pain or finger pain or air-hunger or guilt or whatever.
"What is the problem?" should have a pretty high information content, but there might be a separate "how bad is it?" question that constitutes the actual unpleasant part of the experience, which wouldn't have to be much more than a 1d scalar.
↑ comment by Steven Byrnes (steve2152) · 2021-09-08T17:11:14.122Z · LW(p) · GW(p)
For the record, I do actually believe that. I was trying to state what seemed to be a problem in the STV framework as I was understanding it.
In my picture, the brainstem communicates valence to the neocortex via a midbrain dopamine signal (one particular signal of the many [LW · GW]), and sometimes communicates the suggested cause / remediation via executing orienting reactions (saccading, moving your head, etc.—the brainstem can do this by itself), and sending acetylcholine to the corresponding parts of your cortex, which then override the normal top-down attention mechanism and force attention onto whatever your brainstem demands. For example, when your finger hurts a lot, it's really hard to think about anything else, and my tentative theory is that the mechanism here involves the brainstem sending acetylcholine to the finger-pain-area of the insular cortex. (To be clear, this is casual speculation that I haven't thought too hard about or looked into much.)
comment by adamShimi · 2021-09-06T12:12:48.948Z · LW(p) · GW(p)
Thanks for writing that! I had read some part of Principia Qualia, but this roadmap helps a lot to understand what you're doing. You made me want to read the neural annealing paper. :D
↑ comment by Michael Edward Johnson (michael-edward-johnson) · 2021-09-06T13:14:08.925Z · LW(p) · GW(p)
Thank you!
comment by YimbyGeorge (mardukofbabylon) · 2021-09-06T20:12:25.266Z · LW(p) · GW(p)
How can this be used by me right now in my life?
↑ comment by Michael Edward Johnson (michael-edward-johnson) · 2021-09-06T20:55:45.940Z · LW(p) · GW(p)
Neural Annealing is probably the most current actionable output of this line of research. The actionable point is that the brain sometimes enters high-energy states which are characterized by extreme malleability; basically old patterns ‘melt’ and new ones reform, and the majority of emotional updating happens during these states. Music, meditation, and psychedelics are fairly reliable artificial triggers for entering these states. When in such a malleable state, I suggest the following:
> Off the top of my head, I’d suggest that one of the worst things you could do after entering a high-energy brain state would be to fill your environment with distractions (e.g., watching TV, inane smalltalk, or other ‘low-quality patterns’). Likewise, it seems crucial to avoid socially toxic or otherwise highly stressful conditions. Most likely, going to sleep as soon as possible without breaking flow would be a good strategy to get the most out of a high-energy state; the more slowly you can ‘cool off’ the better, and there’s some evidence annealing can continue during sleep. Avoiding strong negative emotions during such states seems important, as does managing your associations (psychedelics are another way to reach these high-energy states, and people have noticed there’s an ‘imprinting’ process where the things you think about and feel while high can leave durable imprints on how you feel after the trip). It seems plausible that taking certain nootropics could help strengthen (or weaken) the magnitude of this annealing process.
(from The Neuroscience of Meditation)
↑ comment by YimbyGeorge (mardukofbabylon) · 2021-09-07T13:35:10.629Z · LW(p) · GW(p)
Thanks!
comment by Charlie Steiner · 2021-09-09T15:09:19.895Z · LW(p) · GW(p)
I was gonna be more critical but, hey, whatever. Still, I figured I should put up my definition of pain rather than deleting it.
Pain is not people with hemispherectomies having asymmetrical brains. Pain is aversion, is learning not to do that again, and yelling and contorting my face, and fight-or-flight response, tensing my muscles, and the bodily sensations as my circulatory system responds to injury, and not being able to focus well on anything but short term strategies for removing the aversive stimulus, and priming my memory to recall danger and injury, and being able to easily compare the sensation with other signals that fit the learned word "pain," and knowing I'll feel like crap for a while even after the pain passes.