It's OK to eat shrimp: EAs Make Invalid Inferences About Fish Qualia and Moral Patienthood

post by Mikhail Samin (mikhail-samin) · 2023-11-13T16:51:53.341Z · LW · GW · 17 comments


Comments sorted by top scores.

comment by Jackson Wagner · 2023-11-13T18:41:51.672Z · LW(p) · GW(p)

Why would showing that fish "feel empathy" prove that they have inner subjective experience?  It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy.  Couldn't fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?

Conversely, isn't it possible for fish to have inner subjective experience but not feel empathy?  Fish are very simple creatures, while "empathy" is a complicated social emotion.  Especially in a solitary creature (like a shark, or an octopus), it seems plausible that you might have a rich inner world of qualia alongside a wide variety of problem-solving / world-modeling skills, but no social instincts like jealousy, empathy, loyalty, etc.  Fish-welfare advocates often cite studies that seem to show fish having an internal sense of pain vs pleasure (eg, preferring water that contains numbing medication), or that bees can have an internal sense of being optimistic/risky vs pessimistic/cautious -- if you think that empathy proves the existence of qualia, why are these similar studies not good enough for you?  What's special about the social emotion of empathy?

Personally, I am more sympathetic to the David Chalmers "hard problem of consciousness" perspective, so I don't think these studies about behaviors (whether social emotions like jealousy or more basic emotions like optimism/pessimism) can really tell us that much about qualia / inner subjective experience.  I do think that fish / bees / etc probably have some kind of inner subjective experience, but I'm not sure how "strong", or vivid, or complex, or self-aware, that experience is, so I am very uncertain about the moral status of animals.  (Personally, I also happily eat fish & shrimp all the time.)

In general, I think this post is talking about consciousness / qualia / etc in a very confused way -- if you think that empathy-behaviors are ironclad proof of empathy-qualia, you should also think that other (pain-related, etc) behaviors are ironclad proof of other qualia.

Replies from: mikhail-samin
comment by Mikhail Samin (mikhail-samin) · 2023-11-13T19:16:15.271Z · LW(p) · GW(p)

Both are possible (modeling stuff about others by reusing circuits for modeling stuff about yourself without having experience; and having experience without modeling others similarly to yourself), and the reason I think the suggested experiment would provide indirect evidence is related to the evolutionary role I consider qualia to possibly play. It wouldn't be extremely strong evidence and certainly wouldn't be proof, but it'd be enough evidence for me to stop eating fish that have these things.

The studies about optimistic/pessimistic behaviour tell us nothing about whether these things experience optimism/pessimism: such behaviour is an adaptation an RL algorithm would implement without any need for circuits that also experience these things, unless you can provide a story for why circuitry for experience is beneficial, or a natural side effect of something beneficial.

One of the points of the post is that any evidence we can have, except for what we have about humans, would be indirect, and people call things evidence for confused reasons. Pain-related behaviour is something you'd see in neural networks trained with RL, because it's useful to avoid pain; you need a good explanation for how exactly such behaviour can be evidence for qualia.
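As an illustrative toy example (my own sketch, not any specific study): a bare bandit learner, with nothing in it but a running average and an argmax, comes to avoid an option with negative payoff. The behaviour looks like "pain avoidance", and raising or lowering the initial value estimates produces more "optimistic" or more "pessimistic" exploration, yet none of this involves any machinery for experiencing anything.

```python
import random

def pull(arm):
    # arm 0: mildly rewarding; arm 1: "painful" (negative mean payoff)
    return random.gauss(0.2, 1.0) if arm == 0 else random.gauss(-1.0, 1.0)

init = 0.0             # higher init -> "optimistic" exploration; lower -> "pessimistic"
values = [init, init]  # running estimates of each arm's value
counts = [0, 0]
epsilon = 0.1          # small chance of acting randomly

for t in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(2)
    else:
        arm = 0 if values[0] >= values[1] else 1
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

print(values, counts)  # the "painful" arm ends up with a low estimate and is rarely chosen
```

Pointing at the avoidance behaviour of something like this doesn't, by itself, tell you whether there is any experience of pain behind it; you need a separate argument connecting the behaviour to qualia.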

(Copied from EA Forum [EA(p) · GW(p)])

Replies from: Jackson Wagner
comment by Jackson Wagner · 2023-11-13T20:37:53.790Z · LW(p) · GW(p)

(Copied from the EA Forum for the benefit of LessWrongers following the discussion here)

Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (ie, "modeling stuff about yourself" in your brain) in a way that optimism/pessimism or pain-avoidance doesn't.  (Although wouldn't a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc?  Even tiny mammals like mice/rats display sophisticated social behaviors...)

I tend to assume that some kind of panpsychism is true, so you don't need extra "circuitry for experience" in order to turn visual-information-processing into an experience of vision.  What would such extra circuitry even do, if not the visual information processing itself?  (Seems like maybe you are a believer in what Daniel Dennett calls the "fallacy of the second transduction"?)
Consequently, I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"!  But of course it would not have any awareness of itself as being a thing-that-sees, nor would those isolated experiences of vision necessarily be tied together into a coherent visual field, etc.

So, I tend to think that fish and other primitive creatures probably have "qualia", including something like a subjective experience of suffering, but that they probably lack any sophisticated self-awareness / self-model, so it's kind of just "suffering happening nowhere" or "an experience of suffering not connected to anything else" -- the fish doesn't know it's a fish, doesn't know that it's suffering, etc.; the fish is just generating some simple qualia that don't really refer to anything or tie into a larger system.  Whether you call such a disconnected & shallow experience "real qualia" or "real suffering" is a question of definitions.

I think this personal view of mine is fairly similar to Eliezer's from the Sequences: there are no "zombies" (among humans or animals), there is no "second transduction" from neuron activity into a mythical medium-of-consciousness (no "extra circuitry for experience" needed), rather the information-processing itself somehow directly produces (or is equivalent to, or etc) the qualia.  So, animals and even simpler systems probably have qualia in some sense.  But since animals aren't self-aware (and/or have less self-awareness than humans), their qualia don't matter (and/or matter less than humans' qualia).

...Anyways, I think our core disagreement is that you seem to be equating "has a self-model" with "has qualia", versus I think maybe qualia can and do exist even in very simple systems that lack a self-model.  But I still think that having a self-model is morally important (atomic units of "suffering" that are just floating in some kind of void, unconnected to a complex experience of selfhood, seem of questionable moral relevance to me), so we end up having similar opinions about how it's probably fine to eat fish.

I guess what I am objecting to is that you are acting like these philosophical problems of qualia / consciousness / etc are solved and other people are making an obvious mistake.  I agree that I see a lot of people being confused and making mistakes, but I don't think the problems are solved!

Replies from: mikhail-samin
comment by Mikhail Samin (mikhail-samin) · 2023-11-13T20:59:43.227Z · LW(p) · GW(p)

I appreciate this comment.

Qualia (IMO) certainly is "information processing": there are inputs and outputs. And it is a part of a larger information-processing thing, the brain. What I'm saying is that there's information processing happening outside of the qualia circuits, and some of the results of the information processing outside of the qualia circuits are inputs to our qualia. 

I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"

Well, how do you know that visual information processing produces qualia? You can match the algorithms implemented by other humans' brains to the algorithms implemented by your brain, because all of you talk about subjective experience; how do you, inside your neural circuitry, make an inference that a similar thing happens in neurons that just process visual information?

You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: because they talk about subjective experience, you can expect this to be caused by similar computation. This is valid. Thinking that visual information processing is part of what makes qualia (i.e., there's no way to replace a bunch of your neurons with something that outputs the same stuff without first seeing and processing something, such that you'll experience seeing as before) is something you can make theories about, but it is not a valid inference: you don't have a way of matching the computation of qualia to the whole of your brain.

And how can you match it to matrix multiplications that don't talk about qualia, did not have evolutionary reasons for experience, etc.? Do you think an untrained or a small convolutional neural network experiences images to some extent, or only a large, trained one? Where does that expectation come from?

I'm not saying that qualia is solved. We don't yet know how to build it, and we can't yet scan brains and say which circuits implement it. But some people seem more confused than warranted, and they spend resources less effectively than they could've.

And I'm not equating qualia to a self-model. Qualia is just the experience of information. It doesn't require a self-model, though on Earth, so far, I expect these things to have been correlated.

If there's suffering and experience of extreme pain, in my opinion, it matters even if there isn't reflectivity.

Replies from: Signer
comment by Signer · 2023-11-14T08:45:19.587Z · LW(p) · GW(p)

You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: because they talk about subjective experience, you can expect this to be caused by similar computation.

Similarity is subjective. There is no fundamental reason that the ethical threshold must be at the level of similarity between humans and not at the level of similarity between humans and shrimp.

comment by quila · 2023-11-13T23:57:36.310Z · LW(p) · GW(p)

I've met one who assigned double-digit probabilities to bacteria having qualia and said they wouldn't be surprised if a balloon flying through a gradient of air experiences pain because it's trying to get away from hotter air towards colder air.

though this may be an arguable position (see, e.g., https://reducing-suffering.org/is-there-suffering-in-fundamental-physics/), the way you've used it (and the other anecdotes) in the introduction, decontextualized, as a 'statement of position' without justification, is in effect a clown attack [LW · GW] fallacy.

on the post: remember that absence of evidence is not evidence of absence when we do not yet have the technologies to collect relevant evidence. the conclusion in the title does not follow: it should be 'whether shrimp suffer is uncertain'. under uncertainty, eating shrimp is taking a risk whose downsides are suffering, and whose upsides (for individuals for whom there are any) might be, e.g., taste preference satisfaction; the former is much more important to me. a typical person is not justified in 'eating shrimp until someone proves to them that shrimp can suffer.'

Replies from: mikhail-samin
comment by Mikhail Samin (mikhail-samin) · 2023-11-14T00:21:30.864Z · LW(p) · GW(p)

The justification I've heard for that position wouldn't make the statement better; I'd be able to pass an ITT for the specific person who told it to me, and I understand why it is wrong. I consider the mistake they're making and the mistake Rethink Priorities is making to be the same, and I try to make that argument in the post.

I'm separately pretty sure the evolutionary reasons for qualia weren't present in fish evolution (added this to the post, thanks!). From my experience talking to a couple of EAs about this, they agreed with some of the correlations enough to consider a suggested experiment to be a crux, and I'm pretty certain about the result of that experiment and think they're wrong, for reasons described in the post.

It's not obvious how to figure out the priors here, but my point is that people update on things that aren't valid evidence. The hope is that people will spend their resources more effectively after correctly considering shrimp welfare to be orders of magnitude less important and deprioritizing it. Maybe they'll still avoid eating shrimp because they don't have intuitions about evolutionary reasons for qualia similar to mine, but that seems less important to me than reducing as much actual suffering as possible, other things being equal.

comment by mishka · 2023-11-14T04:32:41.655Z · LW(p) · GW(p)

I suspect there is no good way to "short-circuit" the fact that the "hard problem of consciousness", and in particular its truly hard core, the "hard problem of qualia", is unsolved.

Disclaimer: there has been a LessWrong post Why it's so hard to talk about Consciousness [LW · GW], and that post states that on this group of issues people are mostly divided into 2 camps which don't really understand each other:

The basic model I'm proposing is that core intuitions about consciousness tend to cluster into two camps, with most miscommunication being the result of someone failing to communicate with the other camp. For this post, we'll call the camp of boldface author Camp #1 and the camp of quotation author Camp #2.

So, the epistemological disclaimer is that this comment (just like the original post) would probably make sense only to people belonging to Camp #2 (like myself), that is, people who think that it makes sense to talk about qualia.

When I ponder the problem of qualia, I usually think that it will eventually be solved by a two-pronged approach.

On the theoretical side, people will start to require that a viable candidate theory predicts some non-trivial subjectively observable novel effects, just like we require that a viable candidate theory for new physics predicts some non-trivial observable effects. For example, a requirement like that might be satisfied by predicting a novel, non-trivial way to induce "visual illusions" (with the condition that this way does not readily follow from the known science).

Basically, instead of engaging in purely philosophical speculations about the "nature of consciousness", people (or collaborations of people and AIs) will start finding ways to ground the new theories in experiments, not just in explaining the existing experiments, but in novel non-trivial predictions of experimental phenomena.

On the experimental side, a natural starting point (which has tons of safety and ethical caveats) would be creation of hybrid systems between biological entities having qualia and electronic circuits of various nature (digital, analog, running verbally intelligent software, running clever fluid simulations, running audio-visual synthesizers, etc). For practical reasons, people would probably aim for technologies based on things like non-invasive BCI to create tight coupling between biological and electronic entities (only if that proves impossible, people would have to resort to drastic Neuralink-like steps, but the more can be done without surgery or other invasive methods the better). While tight coupling of this kind presents formidable ethical and safety issues even with non-invasive interfaces, this route should eventually enable qualia-possessing entities to "look subjectively from the inside at the dynamics of electronic circuits", and that's how we can try to start experimentally assessing which electronic circuits are or are not capable of supporting qualia.

Also, this would likely eventually enable coupling of different biological entities to each other via coupling each of them to an interconnected electronic circuit (ethical and safety issues are getting even more formidable, as we move along this route). If this coupling is tight enough, we might learn something about qualia (or lack thereof) in various biological entities as well.

I think technically this is likely to be eventually doable. Whether a way can be found to do this in an acceptably safe and sufficiently ethical manner is an open question. But if we want to actually figure out qualia, we probably have to do more on both the theoretical and the experimental sides.

comment by mayleaf · 2023-11-13T22:43:45.306Z · LW(p) · GW(p)

Minor quibble, but:

I currently don't eat animals, as I'm uncertain enough about many of them. I eat fish and shrimp

Fish and shrimp are animals; did you mean "mammals"? Or something else?

Replies from: mikhail-samin
comment by Mikhail Samin (mikhail-samin) · 2023-11-13T23:24:16.962Z · LW(p) · GW(p)

Oops, English! Thanks

comment by Slapstick · 2023-11-14T21:42:25.063Z · LW(p) · GW(p)

I don't think the title of this post is consistent with your self-professed epistemic status, or the general claims you make.

You seem to be stating that in your (non-expert) opinion, some EAs are overconfident in the probabilities they'd assign to shrimp having the capacity to experience qualia?

If we assumed that's correct, that doesn't imply that it's okay to eat shrimp. It just means there's more uncertainty.

comment by Robert Cousineau (robert-cousineau) · 2023-11-14T00:36:07.500Z · LW(p) · GW(p)

I think unless you take a very linguistics-heavy understanding of the emergence of qualia, you are over-weighting the argument that being able to communicate with an agent is highly related to how likely it is to have consciousness.

___________________________________________________________________________________________

You say:

In short, there are some neural circuits in our brains that run qualia. These circuits have inputs and outputs: signals get into our brains, get processed, and then, in some form, get inputted into these circuits. These circuits also have outputs: we can talk about our experience, and the way we talk about it corresponds to how we actually feel.

And: 

It is valid to infer that, likely, qualia has been beneficial in human evolution, or it is a side effect of something that has been beneficial in human evolution.

I think both of the above statements are very likely true.  From that, it is hard to say that a chimpanzee is likely to lack those same circuits.  Neither our mental circuits nor our ancestral environments are that different.  Similarly, it is hard to say "OK, this is what a lemur is missing, as compared to a chimpanzee".

I agree that as you go down the list of potentially conscious entities (e.g. Humans -> Chimpanzees -> Lemurs -> Rats -> Bees -> Worms -> Bacteria -> Virus -> Balloon) it gets less likely that each has qualia, but I am very hesitant to put anything like an order of magnitude jump at each level.  

comment by AlphaAndOmega · 2023-11-13T22:15:22.141Z · LW(p) · GW(p)

Note that we don't infer that humans have qualia because they all have "pain receptors": mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia.

The way I decide this, and how presumably most people do (I admit I could be wrong), revolves around the following chain of thought:

  1. I have qualia with very high confidence.*

  2. To the best of my knowledge, the computational substrate as well as the algorithms running on it are not particularly different from those of other anatomically modern humans. Thus they almost certainly have qualia. This can be proven to most people's satisfaction with an MRI scan, if they so wish.

  3. Mammals, especially the intelligent ones, have similar cognitive architectures, which were largely scaled up for humans, not differing much in qualitative terms (our neurons are actually still more efficient; mice modified to have genes from human neurons are smarter). They are likely to have recognizable qualia.

  4. The further you diverge from the underlying anatomy of the brain (and the implicit algorithms), the lower the odds of qualia, or at least the same type of qualia. An octopus might well be conscious and have qualia, but I suspect the type of consciousness as well as that of their qualia will be very different from our own, since they have a far more distributed and autonomous neurology.

  5. Entities which are particularly simple and don't perform much cognitive computation are exceedingly unlikely to be conscious or have qualia in a non-tautological sense. Bacteria and single transistors, or slime mold.

More speculatively (yet I personally find more likely than not):

  1. Substrate-independent models of consciousness are true, and a human brain emulation in-silico, hooked up to the right inputs and outputs, has the exact same kind of consciousness as one running on meat. The algorithms matter more than the matter they run on, for the same reason an abacus and a supercomputer are both Turing Complete.

  2. We simply lack an understanding of consciousness well grounded enough to decide whether or not decidedly non-human yet intelligent entities like LLMs are conscious or have qualia like ours. The correct stance is agnosticism, and anyone proven right in the future is only so by accident.

Now, I diverge from Effective Altruists on point 3, in that I simply don't care about the suffering of non-humans or entities that aren't anatomically modern humans / intelligent human derivatives (like a posthuman offshoot). This is a Fundamental Values difference, and it makes concerns about optimizing for their welfare on utilitarian grounds moot as far as I'm concerned.

In the specific case of AGI, even highly intelligent ones, I posit it's significantly better to design them so they don't have capability to suffer, no matter what purpose they're put to, rather than worry about giving them rights that we assign to humans/transhumans/posthumans.

But what I do hope is ~universally acceptable is that there's an unavoidable loss of certainty, or Bayesian probability, in each leap of logic down the chain, such that by the time you get down to fish and prawns, it's highly dubious to be very certain of exactly how conscious or qualia-possessing they are, even if the next link, that bacteria and individual transistors lack qualia, is much more likely to be true (it flows downstream of point 2, even if presented in sequence).

*Not infinite certitude: I have a non-negligible belief that I could simply be insane, or that solipsism might be true, even if I think the possibility of either is very small. It's still not zero.
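To make the "loss of certainty at each leap" concrete, here's a toy calculation (the per-link numbers are invented for illustration): even if every link in the chain is individually quite probable, the conjunction decays quickly.

```python
# Hypothetical per-link probabilities, from "I have qualia" outward toward
# "fish and prawns have qualia" -- invented numbers, for illustration only.
step_confidences = [0.99, 0.9, 0.8, 0.8, 0.7]

running = 1.0
for p in step_confidences:
    running *= p
    print(f"confidence after this link: {running:.2f}")
# ends around 0.40 -- far from the near-certainty of the first link
```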

comment by Mikhail Samin (mikhail-samin) · 2023-11-13T20:48:37.772Z · LW(p) · GW(p)

Some people expressed a reaction of scepticism over this:

assigned double-digit probabilities to bacteria having qualia and said they wouldn't be surprised if a balloon flying through a gradient of air experiences pain

Here's something from a comment on the EA Forum [EA(p) · GW(p)]:

I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"

Not sure if they expect a small CNN to possess qualia (and do they then think that when physics performs essentially equivalent matrix multiplications to compute rocks, there are a lot of qualia of random visions in rocks?), but maybe it's easy to underestimate how confused many people are about all that stuff.

comment by Signer · 2023-11-13T18:13:03.698Z · LW(p) · GW(p)

When we hear someone talking about qualia, we can make a valid inference that this is caused by qualia existing or having existed in the past.

When we hear someone talking about a god, we can make a valid inference that this is caused by a god existing or having existed in the past.

we could figure out how qualia actually works, and then scan brains and see whether there are circuits implementing it or not.

Whether circuits implement something is subjective - on the physical level the circuits in other humans' brains don't implement your qualia. If you generalize to other humans' implementations, what's stopping you from generalizing to anything with pain receptors?

something that we think could be a part of how qualia works exists in that species.

So pain receptors?

Replies from: gilch, Aprillion
comment by gilch · 2023-11-13T19:36:40.780Z · LW(p) · GW(p)

When we hear someone talking about a god, we can make a valid inference that this is caused by a god existing or having existed in the past.

It is valid Bayesian evidence, yes. We can't consistently expect that people talking about gods is less likely in worlds where gods do exist. (Of course, other explanations remain far more probable, given background knowledge; it's hardly a proof.)
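A toy Bayes update for the same point (numbers purely illustrative): as long as testimony about gods is at least slightly more likely in worlds where gods exist than in worlds where they don't, hearing it nudges the posterior upward, and how little it moves depends on how readily such talk arises anyway.

```python
prior = 0.01                 # illustrative prior that gods exist
p_talk_given_god = 0.99      # people would very likely talk about gods if gods existed
p_talk_given_no_god = 0.90   # but they'd quite likely talk about gods regardless

# Bayes' rule: P(god | talk) = P(talk | god) * P(god) / P(talk)
posterior = (p_talk_given_god * prior) / (
    p_talk_given_god * prior + p_talk_given_no_god * (1 - prior)
)
print(posterior)             # ~0.011: nudged up from 0.01, nowhere near a proof
```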

comment by Aprillion · 2023-11-13T18:37:16.820Z · LW(p) · GW(p)

yeah, I got a similar impression that this line of reasoning doesn't add up...

we interpret other humans as feeling something when we see their reactions

we interpret other eucaryotes as feeling something when we see their reactions 🤷