A review of Principia Qualia
post by jessicata (jessica.liu.taylor) · 2023-07-12T18:38:52.283Z · LW · GW · 6 comments
This is a link post for https://unstablerontology.substack.com/p/a-review-of-principia-qualia
Contents
- Motivating bottom-up physicalist consciousness theories
- Reviewing Integrated Information Theory and variants
- Specifying the problem of consciousness and valence
- Symmetry
Principia Qualia, by Michael Johnson, is a long document describing an approach to theorizing about and studying consciousness. I have heard enough positive things about the document to consider it worth reviewing.
I will split the paper (and the review) into 4 parts:
- Motivating bottom-up physicalist consciousness theories
- Reviewing Integrated Information Theory and variants
- Specifying the problem of consciousness and valence
- Symmetry
I already disagree with part 1, but bottom-up is still a plausible approach. Part 2 makes sense to pay attention to conditional on the conclusions of part 1, and is basically a good explanation of IIT and its variants. Part 3 describes desiderata for a bottom-up theory of qualia and valence, which are overall correct. Part 4 motivates and proposes symmetry as a theory of valence, which has me typing things like "NO PLEASE STOP" in my notes document.
Motivating bottom-up physicalist consciousness theories
The document is primarily concerned with theories of consciousness for the purpose of defining valence: the goodness or badness of an experience, in terms of pleasure and pain. The document motivates this in terms of morality: we would like to determine things about the consciousness of different agents whether they are human, non-human animal, or AIs. The study of valence is complicated by, among other things, findings of different brain circuitry for "wanting", "liking", and "learning". Neural correlates of pleasure tend to be concentrated in a small set of brain regions and hard to activate, while neural correlates of pain tend to be more distributed and easier to activate.
The methods of affective neuroscience are limited in studying valence and consciousness more generally. Johnson writes, "In studying consciousness we've had to rely on either crude behavioral proxies, or subjective reports of what we're experiencing." The findings of affective neuroscience do not necessarily tell us much about metaphysically interesting forms of valence and consciousness (related to Chalmers' distinction between easy problems of consciousness and the hard problem of consciousness).
Johnson defines top-down and bottom-up theories, and motivates bottom-up theories:
There are two basic classes of consciousness theories: top-down (aka 'higher-order' or 'cognitive' theories) and bottom-up. Top-down theories are constructed around the phenomenology of consciousness (i.e., how consciousness feels) as well as the high-level dynamics of how the brain implements what we experience...However, if we're looking for a solid foundation for any sort of crisp quantification of qualia, top-down theories will almost certainly not get us there, since we have no reason to expect that our high-level internal phenomenology has any crisp, intuitive correspondence with the underlying physics and organizational principles which give rise to it...Instead, if we're after a theory of valence/qualia/phenomenology as rigorous as a physical theory, it seems necessary to take the same bottom-up style of approach as physics does when trying to explain something like charge, spin, or electromagnetism.
If our high-level phenomenology has no "crisp, intuitive correspondence with the underlying physics and organizational principles which give rise to it", it's unclear why a bottom-up theory would succeed either.
To explain my take on this, I will first explain how my motivation for studying consciousness differs from Johnson's. Johnson is primarily concerned with identifying conscious agents experiencing pleasure and pain, for the purpose of optimizing the universe for more pleasure. I am primarily concerned with consciousness as a philosophical epistemic and ontological problem, which is foundational for other study, including physics itself.
Which brings me to my main disagreement with bottom-up approaches: they assume we already have a physics theory in hand, and are trying to locate consciousness within that theory. Yet, we needed conscious observations, and at least some preliminary theory of consciousness, to even get to a low-level physics theory in the first place. Scientific observations are a subset of conscious experience, and the core task of science is to predict scientific observations; this requires pumping a type of conscious experience out of a physical theory, which requires at least some preliminary theory of consciousness. Anthropics makes this clear, as theories such as SSA and SIA require identifying observers who are in our reference class.
I believe, therefore, that whatever theory of consciousness we end up with alongside a physical theory should self-ratify: it should label the scientific observations used to form the theory as conscious experiences, and also label other analogous experiences as conscious. My own critical agential theory of free will and physics, itself similar to (though invented independently of) constructor theory, is the start of an approach that studies physics from an egocentric perspective (therefore, top-down) and that can self-ratify by locating itself within its discovered physics, considering its actions (and therefore its observations) not to be epiphenomenal.
If our high-level phenomenology has no "crisp, intuitive correspondence with the underlying physics and organizational principles which give rise to it", it's unclear why a bottom-up theory would succeed either for the purpose of predicting our high-level phenomenology, which is a core objective in my approach. It would seem that we need to move through many layers of abstraction in the stack, from psychology to neuroscience to chemistry to physics. I broadly agree with functionalism (even though I have some objections), so it seems to me that explaining these different abstraction layers, in a way that admits of alternative foundations for a given layer (therefore allowing the study of consciousness under alternative physics), is critical to understanding the relation of consciousness to physics. For a general argument as to why consciousness can be multiply realized, see Zombies! Zombies? [LW · GW] and The Generalized Anti-Zombie Principle [LW · GW]. (Johnson would consider functionalism a top-down approach.)
So, given that my ontology of science and consciousness is different from the author's, it's not surprising that I find their overall approach less compelling; however, I still find it compelling enough to look at the implications of taking a bottom-up approach to studying consciousness.
Reviewing Integrated Information Theory and variants
Johnson writes: "For now, the set of bottom-up, fully-mathematical models of consciousness has one subset: Giulio Tononi's Integrated Information Theory (IIT) and IIT-inspired approaches."
IIT defines a metric Φ on information states of a system, which is designed to measure the consciousness of that state. It is designed to rule out various trivial information processing systems that we would not attribute consciousness to, including states whose information is overly correlated or overly uncorrelated.
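To make concrete what kind of object Φ is (a single scalar obtained by minimizing an information measure over partitions of a system), here is a toy stand-in in Python. This is my own simplification for illustration; it is not the formula used in any published version of IIT, and it does not reproduce IIT's handling of the degenerate cases mentioned above:

```python
# Toy stand-in for Phi (NOT any published IIT formula): the minimum, over all
# bipartitions of a set of discrete variables, of the mutual information
# between the two halves of the partition, computed from a joint distribution.
from itertools import combinations
import numpy as np

def mutual_information(joint, axes_a, axes_b):
    """Mutual information (in bits) between two groups of axes of a joint distribution."""
    p_a = joint.sum(axis=tuple(axes_b))  # marginal over the axes in axes_a
    p_b = joint.sum(axis=tuple(axes_a))  # marginal over the axes in axes_b
    mi = 0.0
    for idx in np.ndindex(joint.shape):
        p = joint[idx]
        if p == 0:
            continue
        pa = p_a[tuple(idx[i] for i in range(joint.ndim) if i not in axes_b)]
        pb = p_b[tuple(idx[i] for i in range(joint.ndim) if i not in axes_a)]
        mi += p * np.log2(p / (pa * pb))
    return mi

def toy_phi(joint):
    """Minimum mutual information across any bipartition of the variables (needs >= 2 axes)."""
    n = joint.ndim
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for part_a in combinations(range(n), k):
            part_b = tuple(i for i in range(n) if i not in part_a)
            best = min(best, mutual_information(joint, part_a, part_b))
    return best

# Two independent fair coins are "overly uncorrelated": no bipartition shares
# information, so the toy score is 0.
print(toy_phi(np.full((2, 2), 0.25)))  # 0.0
```

Even this toy version makes the criticisms below easier to state: the result depends entirely on which variables and which joint distribution you decide to feed in, and nothing in the construction itself says why this particular minimization should track consciousness.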
Johnson continues: "Tononi starts with five axiomatic properties that all conscious experiences seem to have. IIT then takes these axioms ('essential properties shared by all experiences') and translates them into postulates, or physical requirements for conscious physical systems -- i.e., 'how does the physical world have to be arranged to account for these properties?' (Tononi and Albantakis 2014)."
IIT implies there may exist multiple "minimum information partitions" within a single brain, indicating multiple conscious entities; Tononi accepts this conclusion. This seems reasonable to me given that a great deal of mental processing is not conscious to the same conscious entity that writes words such as these, and this processing could indicate consciousness of a different sort.
The paper goes on to discuss criticisms of IIT. Among the most important are:
- There is no formal derivation of Φ from the postulates.
- "Griffith is clear that the mathematical formula for Φ, and the postulates it's nominally based on, have been changing in each revision."
- "Griffith echoes a common refrain when he notes that, 'As-is, there has been no argument for why the existing axioms of differentiation, integration, and exclusion fully exhaust the phenomological properties requiring explanation.'"
- "Indeed, Aaronson identifies a simple-to-define mathematical structure called an 'expander graph' which, according to the math used to calculate Φ, would produce much more consciousness than a human brain."
- It's unclear what information connection structure to feed into Φ, e.g. should it be a map of which neurons are activated, or take other information into account?
- "As noted above, Tononi has argued that "in sharp contrast to widespread functionalist beliefs, IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing" (Tononi and Koch 2015). However, he hasn't actually published much on why he thinks this."
Johnson goes on to summarize and review two alternative IIT-like theories: Perceptronium and the "field integrated information hypothesis" (FIIH).
Tegmark's Perceptronium theory attempts to define consciousness in terms of quantum interactions rather than information states. He is interested in this as part of an anthropic theory to solve the Quantum Factorization Problem: the problem of why we observe some factorizations of a quantum system but not others.
Adam Barrett's FIIH defines consciousness in terms of a quantum field. He argues that electromagnetic fields are the only ones capable of supporting consciousness, as gravity and the strong and weak nuclear forces don't support enough complexity. However, he doesn't present a formal theory.
Specifying the problem of consciousness and valence
Johnson considers IIT inadequate, but worth building on. (I don't consider it worth building on; see previous points about bottom-up approaches). He lists eight problems to solve, grouped into three categories:
Step 1 (metaphysics) breaks down into two subproblems:
1. The Reality Mapping Problem: how do we choose a formal ontology for consciousness which can map unambiguously to reality?
2. The Substrate Problem: which subset of objects & processes in our chosen ontology 'count' toward consciousness?
Step 2 (math) breaks down into four subproblems:
1. The Boundary (Binding) Problem: how do we determine the correct boundaries of a conscious system in a principled way?
2. The Scale Problem: how do we bridge the scale gap from our initial ontology (e.g., the Standard Model, string theory, individual computations, etc) to the spatial and temporal scales relevant to consciousness?
3. The Topology of Information Problem: how do we restructure the information inside the boundary of the conscious system into a mathematical object isomorphic to the system's phenomenology?
4. The State Space Problem: what is 'Qualia space'? -- I.e., which precise mathematical object does the mathematical object isomorphic to a system's qualia live in? What are its structures/properties?
Step 3 (interpretation) breaks down into two subproblems:
1. The Vocabulary Problem: what are some guiding principles for how to improve our language about phenomenology so as to "carve reality at the joints"?
2. The Translation Problem: given a mathematical object isomorphic to a system's phenomenology, how do we populate a translation list between its mathematical properties and the part of phenomenology each property or pattern corresponds to?
Johnson continues: "My claim is that these eight sub-problems are necessary to solve consciousness, and in aggregate, may be sufficient."
Overall, I would agree that these problems are likely necessary, and possibly sufficient. The scale problem is especially important for mapping high-level states of consciousness to low-level physical states, perhaps making the top-down/bottom-up distinction meaningless if it is solved. However, the rest of the paper does not solve these problems; instead it discusses valence and simple metrics of it.
In studying valence, Johnson proposes three postulates:
1. Qualia Formalism: for any given conscious experience, there exists -- in principle -- a mathematical object isomorphic to its phenomenology.
2. Qualia Structuralism: this mathematical object has a rich set of formal structures.
3. Valence Realism: valence is a crisp phenomenon of conscious states upon which we can apply a measure.
Qualia Formalism seems broadly plausible, although I note that it seems at odds with Chalmers' notion of qualia: the "redness of red" is, in his view, only assessable subjectively, not mathematically. Qualia Structuralism seems very likely. Valence Realism seems likely enough to be a worthy hypothesis.
Qualia vary on a number of axes. Qualia can be local (pertaining to only some sensations, e.g. seeing a small patch of green) or global (permeating all sensations, e.g. pleasure). Qualia can be simple or complex, as measured by something like Kolmogorov complexity. Qualia can be atomic or composite (made of multiple parts). And qualia can be intuitively important or intuitively trivial.
Johnson proposes that, as qualia objects are mapped to mathematical objects, their properties along these four axes should match the corresponding properties of those mathematical objects; e.g. intuitively important qualia properties should correspond to intuitively important mathematical properties of the corresponding objects. This seems broadly reasonable. The relevant properties depend on the structure of the mathematical objects, not just their data content; the notion of a homomorphism in universal algebra and category theory may be relevant here.
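For reference, the standard universal-algebra notion (not something specific to Principia Qualia): a homomorphism between two algebras $A$ and $B$ of the same signature is a map $h: A \to B$ that commutes with every operation,

$$h\big(f^{A}(a_1, \ldots, a_n)\big) = f^{B}\big(h(a_1), \ldots, h(a_n)\big) \quad \text{for each } n\text{-ary operation } f.$$

The analogous requirement here would be that the qualia-to-mathematics mapping preserve structure in roughly this sense, rather than merely pairing up raw data.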
Johnson posits that valence is global, simple, and very interesting; this seems to me likely enough to be a worthy hypothesis.
Symmetry
This is where I start actually being annoyed with the paper; not just because the theories presented are overly simple, failing to bridge the ontological stack from physics to phenomenal experience, but also because the metrics of valence presented would straightforwardly be a bad idea to optimize.
Johnson reviews some existing neurological theories for clues as to determining valence:
"Paul Smolensky's 'Computational Harmony' is a multi-level neurocomputational syntax which applies especially in the natural-language processing (NLP) domain... in short, Smolensky's model suggests that the organizing principle of successful neurolinguistic computation is simplicity (-> pleasure?), and our NLP systems avoid needless complexity (-> pain?)."
"Karl Friston's Free Energy model of brain dynamics comes to a roughly similar conclusion: '[A]ny self-organizing system that is at equilibrium with its environment must minimize its free energy. The principle is essentially a mathematical formulation of how adaptive systems (that is, biological agents, like animals or brains) resist a natural tendency to disorder. ... In short, the long-term (distal) imperative --- of maintaining states within physiological bounds --- translates into a short-term (proximal) avoidance of surprise. (Friston 2010)'"
Both of these theories suggest that simplicity is a proxy for pleasure, and complexity for pain. I am already starting to be concerned: I value interestingness, and interestingness is more like complexity than simplicity. Say Not "Complexity" [LW · GW] argues against overly broad identification of complexity with morality, but complexity seems to me to be a better proxy for value than its inverse, simplicity.
Fristonian free energy minimization is famously susceptible to the "dark room problem": that a free energy minimizer may stay in the same dark room so as to minimize the degree to which they are surprised. Scott Alexander discusses this problem and analogizes it to the behavior of depressed people.
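For reference, the quantity being minimized in Friston's framework is the variational free energy; this is the standard form from that literature, not a formula quoted in Principia Qualia:

$$F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big(q(s)\,\|\,p(s \mid o)\big) - \ln p(o) \;\ge\; -\ln p(o).$$

Minimizing $F$ therefore pushes down an upper bound on the "surprise" $-\ln p(o)$; an agent that cares only about this can succeed by arranging to receive maximally predictable observations, which is exactly the dark room.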
Johnson discusses more theories based on simplicity and elegance:
"Cosmides argues that emotions like grief -- or rather, processes like 'working through grief' -- are actually computationally intensive, and the kind of computation involved with this sort of recalibration of internal regulatory variables seems to intrinsically hurt."
"Pleasurable music tends to involve elegant structure when represented geometrically (Tymoczko 2006)."
"To some degree enjoyment of mathematical elegance must be a spandrel, pleasurable because it's been adaptive to value the ability to compress complexity. . . but pleasure does have to resolve to something, and this is an important data point."
"In fact, 'dissonance' and 'pain' are often used as rough synonyms. Usually there's a messy range of valences associated with any given phenomenon -- some people like it, some people don't. But I've never heard anyone say, 'I felt cognitive dissonance and it felt great.'"
He goes on to propose a simple theory of valence:
Given a mathematical object isomorphic to the qualia of a system, the mathematical property which corresponds to how pleasant it is to be that system is that object's symmetry.
...
Symmetry plays an absolutely central role in physics, on many levels. And so if consciousness is a physics problem, then we can say -- a priori -- that symmetry will play an absolutely central role in it, too, and seems likely to be matched with some qualia as important as valence. I'll defer to Max Tegmark and Frank Wilczek for a full-throated defense of this notion:
This seems to me to be skipping too many levels: mapping symmetry of qualia to symmetry of physics requires mapping the ontologies through multiple levels, not simply using physical symmetry as a proxy for symmetry of qualia, which would tend to result in highly symmetric physical systems, such as uniform rocks, being considered "happy". Perhaps this is the product of an affective death spiral [LW · GW]: symmetry as the highest good, as Schmidhuber considered data compression to be the highest good.
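To make the worry concrete, here is a deliberately naive sketch (my own illustration; nothing like this specific metric is proposed in the paper) of what treating raw physical symmetry as a valence proxy looks like: a uniform "rock" trivially maximizes the score, while any structured state scores near zero.

```python
# A deliberately naive "symmetry score" (illustrative only, not Johnson's proposal):
# the fraction of a fixed set of transformations (reflections, rotations) that
# leave a 2D state unchanged.
import numpy as np

def naive_symmetry_score(state: np.ndarray) -> float:
    transforms = [
        lambda a: np.flip(a, axis=0),  # vertical reflection
        lambda a: np.flip(a, axis=1),  # horizontal reflection
        lambda a: np.rot90(a, 1),      # 90-degree rotation
        lambda a: np.rot90(a, 2),      # 180-degree rotation
    ]
    return float(np.mean([np.array_equal(state, t(state)) for t in transforms]))

uniform_rock = np.zeros((8, 8))                                # featureless and maximally symmetric
structured = np.random.default_rng(0).integers(0, 2, (8, 8))   # a "complex" state
print(naive_symmetry_score(uniform_rock))  # 1.0
print(naive_symmetry_score(structured))    # almost certainly 0.0
```

The point is not that a symmetry theory of valence has to be this naive, but that getting from physical symmetry to phenomenological symmetry requires exactly the intermediate layers of ontology that the paper skips.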
Formalizing valence includes formalizing pain: "So -- if symmetry/pleasure is accurately approximated by one of the above metrics, could we also formalize and measure antisymmetry/pain and asymmetry/neutral, and combine these to fully specify a mind's location in valence space? In a word, yes. However, I worry that research into formalizing negative valence could be an information hazard, so I will leave it at this for now."
There is a general problem with formalizing one's terminal values: the formalization reveals how to minimize those values, which can be a problem in a conflict situation, increasing vulnerability to constructs similar to Roko's basilisk. Conflict is in general a motive for preference falsification.
Johnson may notice some of the problems with maximizing symmetry: "An interesting implication we get if we take our hypothesis and apply it to IIT is that if we attempt to maximize pleasure/symmetry, consciousness/Φ drops very rapidly." While Φ is likely insufficient for consciousness (as shown by Aaronson's expander graph), it may be necessary for consciousness, and if so, low Φ is a sign of low consciousness. It is unsurprising if simple, symmetrical states of information are less conscious, as consciousness requires some degree of complexity for representing a complex situation and associated mental content.
Johnson discusses a few ways to test the hypothesis, such as by creating symmetrical or asymmetrical brain states and testing their phenomenological correlates. To me this, again, seems to be skipping ahead too fast: without a more expanded theory of consciousness in general (not just valence) that explains multiple levels of ontological analysis, testing correlations in isolation tells us little, as it does with IIT's correlation with crude measures of consciousness such as wakefulness.
Johnson concludes by discussing applications to treating sentient creatures better and the value problem in artificial intelligence. While I agree that a workable theory of valence would be useful for these applications, I don't believe we are currently close to such a theory, although the broad research program expressed in the paper, especially the eight problems, is probably a good start for developing such a theory.
6 comments
comment by gallabytes · 2023-07-13T23:58:40.338Z · LW(p) · GW(p)
Which brings me to my main disagreement with bottom-up approaches: they assume we already have a physics theory in hand, and are trying to locate consciousness within that theory. Yet, we needed conscious observations, and at least some preliminary theory of consciousness, to even get to a low-level physics theory in the first place. Scientific observations are a subset of conscious experience, and the core task of science is to predict scientific observations; this requires pumping a type of conscious experience out of a physical theory, which requires at least some preliminary theory of consciousness. Anthropics makes this clear, as theories such as SSA and SIA require identifying observers who are in our reference class.
There's something a bit off about this that's hard to quite put my finger on. To gesture vaguely at it, it's not obvious to me that this problem ought to have a solution. At the end of the day, we're thinking meat, and we think because thinking makes the meat better at becoming more meat. We have experiences correlated with our environments because agents whose experiences aren't correlated with their environments don't arise from chemical soup without cause.
My guess is that if we want to understand "consciousness", the best approach would be functionalist. What work is the inner listener doing? It has to be doing something, or it wouldn't be there.
Do you feel you have an angle on that question? Would be very curious to hear more if so.
comment by jessicata (jessica.liu.taylor) · 2023-07-14T03:34:37.806Z · LW(p) · GW(p)
Not sure how satisfying this is, but here's a rough sketch:
Anthropically, the meat we're paying attention to is meat that implements an algorithm that has general cognition including the capacity of building physics theories from observations. Such meat may become more common either due to physics theories being generally useful or general cognition that does physics among other things being generally useful. The algorithm on the meat selects theories of physics that explain their observations. To explain the observations, the physics theory has to bridge between the subject matter of physics and the observational inputs to the algorithm that are used to build and apply the theory. The thing that is bridged to isn't, according to the bridging law, identical to the subject matter of low level physics (atoms or whatever), and there also isn't a very simple translation, although there is a complex translation. The presence of a complex but not simple load-bearing translation motivates further investigation to find a more parsimonious theory. Additionally, there are things the algorithm implemented on the meat does other than building physics theories that use similar algorithmic infrastructure to the infrastructure that builds physics theories from observations. It is therefore parsimonious to posit that there is a broader class of entity than "physical observation" that includes observations not directly used to build physical theories, due to natural category considerations. "Experience" seems a fitting name for such a class.
comment by mishka · 2023-07-13T01:33:29.009Z · LW(p) · GW(p)
Thanks! I have been looking at Principia Qualia a few times, but now I have a much crisper picture because of this review.
Johnson discusses a few ways to test the hypothesis, such as by creating symmetrical or asymmetrical brain states and testing their phenomenological correlates. To me this, again, seems to be skipping ahead too fast: without a more expanded theory of consciousness in general (not just valence) that explains multiple levels of ontological analysis, testing correlations in isolation tells us little, as it does with IIT's correlation with crude measures of consciousness such as wakefulness.
Yes, I feel that we need to make genuine progress in the general problem of consciousness and, in particular, in "solving the qualia" (in this sense, Johnson's "eight problems to solve" look very useful as a starting point).
Focusing mostly on valence is trying to take a shortcut and get a valuable application without understanding the overall theory of qualia. But I think that we do need to understand the overall theory of qualia (for many reasons, including AI existential safety and more).
And moreover, while "rating the quality of subjective experience" is very important, I am not sure it is optimal to use the word valence, because I feel that speaking in terms of valence pushes one a bit too hard to associate the value of subjective experience with a scalar parameter. Diversity of experience, novelty, and curiosity do matter a lot here, so it seems that the overall quality of subjective experience would be a multicriterial thing instead.
Symmetry is surely important, but it is an attempt to have a scalar rating that pushes one to try to find a single aspect and to try to reduce the overall "rating of the quality of subjective experience" to that single aspect.
comment by Charlie Steiner · 2023-07-12T22:37:29.199Z · LW(p) · GW(p)
which has me typing things like "NO PLEASE STOP" in my notes
And now my housemate has looked at me strangely for cackling.
comment by moses_obrien (mgmobrien) · 2023-07-30T22:23:32.774Z · LW(p) · GW(p)
would love to get @Michael Edward Johnson [LW · GW]'s responses here
comment by jessicata (jessica.liu.taylor) · 2023-07-31T01:41:47.287Z · LW(p) · GW(p)
He responded on Substack:
Hey Jessica, I really appreciate the thoughtful review. I agree with parts and disagree with parts:
One core objection, which I completely understand, is that “Johnson is primarily concerned with identifying conscious agents experiencing pleasure and pain, for the purpose of optimizing the universe for more pleasure.” PQ does not make this claim, although I understand it could be read as such. Since writing PQ I’ve more explicitly stepped away from hedonic utilitarianism as a practical ethics and consider myself a solid believer in virtue ethics. My North Star in this research is knowledge.
Another objection which I think is useful to discuss is the following:
If our high-level phenomenology has no “crisp, intuitive correspondence with the underlying physics and organizational principles which give rise to it”, it’s unclear why a bottom-up theory would succeed either for the purpose of predicting our high-level phenomenology, which is a core objective in my approach. It would seem that we need to move through many layers of abstraction in the stack, from psychology to neuroscience to chemistry to physics.
This is actually something I see as promising for the Symmetry Theory of Valence: that it may offer a bottom-up frame that can reach up to high-level phenomenology, due to symmetry often being expressed at multiple scales.
I’ll note that I recently released a considerably shorter and somewhat updated paper on the Symmetry Theory of Valence: https://opentheory.net/2023/06/new-whitepaper-qualia-formalism-and-a-symmetry-theory-of-valence/
I think the new paper is much more clear about the motivations around borrowing the symmetry aesthetic from physics, and also proposes better empirical tests, though it has narrower ambitions.
Michael