Any taxonomies of conscious experience?
post by JohnDavidBustard · 2021-07-18T18:28:50.444Z · LW · GW · 6 comments
This is a question post.
I have some expertise in machine learning and AI. I broadly believe that human minds are similar to modern AI algorithms such as deep learning and reinforcement learning, and I think it likely that consciousness is present wherever algorithms are executing (a form of panpsychism). I am trying to develop theories about how AI algorithms could generate conscious experiences. For example, when an AI believes that many of the actions it could take will improve its situation, it might feel happiness; when it believes that most choices will make its situation worse and it is searching for the least bad option, it might feel fear and sadness.

I am trying to find existing research that might give me a taxonomy of conscious experiences (ideally with associated experimental data, e.g. surveys) that I could use to define a scope of experiences to map onto the execution of machine learning algorithms. Ideally I am looking for taxonomies that are quite comprehensive; I have found such taxonomies very useful in the past for similar goals, e.g. WordNet, ConceptNet, time-use surveys, and the DSM (psychiatric diagnosis).
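To make the happiness/fear example above concrete, here is a minimal, purely speculative sketch of the kind of mapping I have in mind for a value-based agent. The thresholds, the labels, and the idea that option-value statistics correspond to felt emotion are all assumptions, not established results:

```python
import numpy as np

def affect_label(action_values, baseline, margin=0.0):
    """Speculative mapping from an agent's action-value estimates to an
    affect label. Assumes felt emotion tracks the distribution of the
    available options' values relative to the current state's value."""
    action_values = np.asarray(action_values, dtype=float)
    if np.mean(action_values > baseline + margin) > 0.5:
        # Most options look like improvements on the current situation.
        return "happiness"
    if np.all(action_values < baseline - margin):
        # Every option looks worse; the agent is choosing the least bad
        # one, which I speculate might correspond to fear/sadness.
        return "fear/sadness"
    return "neutral"

# Q-values for four candidate actions in a state whose value is 0.0:
print(affect_label([0.4, 0.2, 0.1, -0.1], baseline=0.0))     # happiness
print(affect_label([-0.5, -0.9, -0.2, -0.3], baseline=0.0))  # fear/sadness
```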
I have a very limited understanding of phenomenology. I believe its goal of understanding conscious experience may be relevant, but I am concerned that the subject is not presented in the systematic, textbook-style format I am looking for. I would be very grateful for any suggestions as to where I might find a systematic overview, perhaps from teaching materials, Wikipedia, or any other source that attempts this kind of broad, systematic taxonomy.
Answers
answer by ChristianKl
The Qualia Research Institute is an organization that tries to build such a taxonomy; some AI-risk money goes to it in the hope that such a taxonomy might be useful.
↑ comment by gwern · 2021-07-18T20:22:22.394Z · LW(p) · GW(p)
E.g. https://qualiacomputing.com/2015/06/09/state-space-of-drug-effects-results/ and https://qualiacomputing.com/2019/08/10/logarithmic-scales-of-pleasure-and-pain-rating-ranking-and-comparing-peak-experiences-suggest-the-existence-of-long-tails-for-bliss-and-suffering/
↑ comment by JohnDavidBustard · 2021-07-19T12:16:33.483Z · LW(p) · GW(p)
Great links, thank you. I hadn't considered drug effects before; that is an interesting perspective on positive sensations. I also wanted to say I am a big fan of your work, particularly your media synthesis material. I use it when teaching deep learning, to show examples of how to use academic source code to explore cutting-edge techniques.
↑ comment by JohnDavidBustard · 2021-07-19T12:14:32.816Z · LW(p) · GW(p)
Perfect, thank you
6 comments
Comments sorted by top scores.
comment by Charlie Steiner · 2021-07-18T19:25:43.818Z · LW(p) · GW(p)
Maybe something like the various attempts from psychology to list and classify emotions would help?
Overall, though, I think the prevailing opinion around here is more along the lines of consciousness having many different moving parts, which means that most questions about it will not have simple answers.
E.g. saying I am "conscious of the taste of chocolate" tells you a lot of vague but related information about me. Not only am I receiving certain signals from my taste receptors, nose, gut, peripheral glucose sensors, and the texture on my tongue, but my recent past has primed me so that all of this gets interpreted in a particularly chocolatey way, which is then made accessible to the rest of my brain to some vague degree. At the least, my verbal self-attention and memory formation become influenced by these interpreted perceptions, along with priming of my perceptual systems for related stimuli and evaluations of reward or displeasure that will cause updates in my thought patterns for the future. All of these together make up being "conscious of the taste of chocolate," but each one could be slightly different in me at different times, or in different people.
In other words, consciousness has no simple essence, nor is it made up of a small number of simple essences. This is especially relevant in animal consciousness - a dog is going to have many (though perhaps not all) of the macro-scale abilities I named in talking about my sensation of taste, but the lower-level details of their implementation are going to be different. "Being conscious" is not like "being more dense than water" - with density there's only one possible dimension of variation, and there's basically always a clear answer for which side of the line something is on, and if we're curious about dogs we can just test a dog to see if it's more dense than water and we'll get a reliable result. With consciousness, all the small parts of it can vary, and which parts we care about may depend on what context we're asking the question in! We might do better than trying to assign a binary value of "conscious or not" by assigning some degree of consciousness, but even that is still a one-dimensional simplification of a high-dimensional pattern.
P.S. Fight me, symmetry theory of valence stans. :P
↑ comment by ChristianKl · 2021-07-18T20:38:38.405Z · LW(p) · GW(p)
I might be biased, coming from a bioinformatics background, but I would rather go for the OBO Foundry's emotion ontology than use a Wikipedia list.
↑ comment by JohnDavidBustard · 2021-07-19T12:26:14.854Z · LW(p) · GW(p)
Thanks, it is very handy to get something that is compatible with SUMO.
↑ comment by JohnDavidBustard · 2021-07-19T12:25:46.487Z · LW(p) · GW(p)
Thank you for the thoughtful comments. I am not certain that the approach I am suggesting will be successful, but I am hoping that more complex experiences may be explainable in terms of simpler essences, much as the behaviour of fluids follows from simpler atomic rules. I am currently working from the assumption that the brain is similar to a modern reinforcement learning system: one or more large learnt structures plus a relatively simple learning algorithm. The first thing I hope to examine is whether all conscious experiences could be explained purely by behaviours associated with the learning algorithm. Even better if attempting this points to new structures that the learning algorithm should take. For example, we have strong memories of sad events and choices we regret, which suggests that we rank the importance of past experiences based on these situations and weight them more heavily when learning from them. We might avoid a strategy because our intuition says it makes us sad (it is like other situations that made us sad) rather than because it is simply a poor strategy for achieving our goals.
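As a rough sketch of what I mean (the "emotional salience" weighting is my own guess, loosely analogous to prioritized experience replay in RL, which usually prioritizes by TD error instead):

```python
import random

class SalienceReplayBuffer:
    """Replay buffer that samples past experiences in proportion to an
    'emotional salience' score, e.g. the magnitude of regret or negative
    reward. This mirrors prioritized experience replay, but the salience
    definition here is a hypothetical stand-in for TD-error priority."""

    def __init__(self):
        self.experiences = []  # (state, action, reward, next_state)
        self.salience = []     # one non-negative weight per experience

    def add(self, experience, reward, regret):
        # Assumption: sad or regretted events are remembered more strongly,
        # so weight them by |negative reward| plus an estimate of regret.
        self.experiences.append(experience)
        self.salience.append(abs(min(reward, 0.0)) + regret + 1e-3)

    def sample(self, k):
        # Salient (painful/regretted) experiences are replayed more often,
        # so the agent learns disproportionately from them.
        return random.choices(self.experiences, weights=self.salience, k=k)

buf = SalienceReplayBuffer()
buf.add(("s0", "a1", -1.0, "s1"), reward=-1.0, regret=0.5)
buf.add(("s1", "a0", 0.1, "s2"), reward=0.1, regret=0.0)
print(buf.sample(3))  # the regretted transition dominates the samples
```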
comment by Shmi (shminux) · 2021-07-18T19:59:52.094Z · LW(p) · GW(p)
Happiness is a complicated emotion; it can spring from so many causes. Maybe start with something more primitive. For example, before you can feel happiness or sadness at having to pick an option, you need to feel the ability to make that choice. So maybe relate the subroutine that considers the options and makes the choice to the feeling of free will, or something like that. Still quite complicated, but it seems simpler than what you are attempting. Maybe even try to dissect it further.
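To dissect it a little: one toy way to cash out the "feeling of having a choice" (purely illustrative; the entropy proxy is just one guess) is as the entropy of the agent's preferences over actions:

```python
import math

def perceived_choice(action_preferences):
    """Hypothetical proxy for the 'feeling of having a choice': the entropy
    of the agent's normalized preferences over actions. One dominant option
    gives low entropy (no felt choice); several comparable options give
    high entropy (a strong sense of choosing)."""
    total = sum(action_preferences)
    probs = [p / total for p in action_preferences]
    return -sum(p * math.log(p) for p in probs if p > 0)

print(perceived_choice([0.97, 0.01, 0.01, 0.01]))  # ~0.17: little felt choice
print(perceived_choice([0.25, 0.25, 0.25, 0.25]))  # ~1.39: many live options
```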
↑ comment by JohnDavidBustard · 2021-07-19T12:31:06.083Z · LW(p) · GW(p)
Thanks for the comment. I think it is very interesting to think about the minimum-complexity algorithm that could plausibly have each conscious experience. The fact that we remember events, talk about them, and can describe how they are similar (e.g. blue is cold and sad) implies that our internal mental representations, and the connections we can make between them, must be structured in a certain way. It is fascinating to think about what the simplest 'feeling' algorithm might be, and exciting to think that we may someday be able to create new conscious sensations by integrating our minds with new algorithms.
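As a toy illustration of that structural point (the vectors below are made up, not measured): if internal representations live in a shared embedding space, cross-modal associations like "blue is cold and sad" show up as simple proximity in that space:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented 3-d "concept embeddings"; the axes might loosely stand for
# dimensions like arousal, valence, and temperature association.
concepts = {
    "blue": np.array([0.1, -0.6, -0.7]),
    "cold": np.array([0.0, -0.4, -0.9]),
    "sad":  np.array([-0.2, -0.8, -0.3]),
    "red":  np.array([0.8, 0.3, 0.6]),
}

# "blue" sits nearer "cold" and "sad" than "red", mirroring the
# cross-modal similarity judgements people can report.
for other in ("cold", "sad", "red"):
    print(other, round(cosine(concepts["blue"], concepts[other]), 2))
```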