Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness

post by Vaughn Papenhausen (Ikaxas) · 2018-12-03T08:00:00.000Z · LW · GW · 18 comments

Contents

  I. Introduction
  II. What is it like to be an octopus?
  III. An exercise in updating on surprising facts
  IV. Animal consciousness
  V. Aging
  Appendix: Answers to True-False Questions
    Notes:

I. Introduction

Peter Godfrey-Smith's Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness is a phenomenal mishmash of octopus- and consciousness-related topics. It deals with everything from the evolution of octopuses, to their social life, to animal consciousness (including octopus consciousness), to evolutionary theories of aging, and more. All of this is tied together by a palpable fascination with octopuses, which manifests itself in rich descriptions of Godfrey-Smith's own experiences scuba-diving off the coast of Australia to observe them.

The book attempts to fit discussion of an impressive number of interesting topics into one slim volume. On the one hand, this is great, as each topic is fascinating in its own right, and several are relevant to EA/rationality. On the other hand, fitting in so many topics is a difficult task, and the book only halfway pulls it off. There wasn’t enough room to discuss each topic in as much depth as it deserved, and the breadth of topics meant that the book felt somewhat unorganized and disunified. The book as a whole didn’t seem to have any central claim; it was simply a collection of interesting facts, observations, musings, and theories that somehow relate to either octopuses, consciousness, or both, plus a bunch of fascinating first-hand descriptions of octopus behavior.

Do I recommend the book? Yes and no. For general interest, definitely--it’s an interesting, enjoyable read. But for rationalists and EAs, there are probably better things to read on each topic the book discusses that would go into more depth, so it may not be the most effective investment of time for learning about, say, theories of animal consciousness. So in the rest of this review I’ve tried to 80:20 the book a bit, pulling out the insights that I found most interesting and relevant to EA/rationalism (as well as adding my own musings here and there). Because of this, the review is quite long, and it unavoidably reflects a bit of the disjointedness of the book itself--the sections can largely be read independently of each other.


Before I begin, a true-false test. For the following statements about octopuses, write down whether you think they are true or false, and how confident you are in your response. We'll come back to these later in the review, and the answers will be at the end:

  1. Octopuses can squirt jets of ink as an escape tactic.
  2. Octopuses have color vision.
  3. Octopuses have bilateral symmetry.
  4. Octopuses can camouflage themselves nearly perfectly by changing the color and texture of their skin to match whatever surface or object they are trying to blend into.
  5. Octopuses can fit through any hole or gap bigger than their eye.
  6. Octopuses can recognize individual humans.
  7. Most octopus species live for more than 20 years.
  8. Octopuses are mostly solitary animals.
  9. Octopuses have been known to use shards of glass from shattered bottles on the seafloor as weapons to fight other octopuses.

Answers below.


II. What is it like to be an octopus?

The nervous system of an octopus is structured quite differently from a mammalian nervous system. The octopus not only has a high concentration of neurons in its head (a “brain”), but also clusters of neurons throughout its body, particularly in each arm. At one point Godfrey-Smith notes that the number of neurons in the octopus's central brain is only a little over half of the number of neurons in the rest of its body (67). This means that each arm must have, roughly, 1/4 as many neurons as the central brain: the rest of the body holds nearly twice the central brain's neurons, split mostly across the eight arms. What might it feel like to have 8 other "brains" in your body, each 1/4 the size of your "main brain"?[^1]
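
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The central-brain-to-rest ratio is the one from the book; the total of roughly 500 million neurons is a commonly cited estimate that I'm using only to put rough numbers on it.

```python
# Rough version of the arm-vs-brain arithmetic above. The central-brain-to-rest
# ratio is from the book; the ~500 million total is a commonly cited estimate,
# used here only to make the numbers concrete.
total_neurons = 500_000_000
central_brain = total_neurons / 3   # if central is ~0.5 x rest, it is ~1/3 of the total
rest_of_body = total_neurons - central_brain
per_arm = rest_of_body / 8          # treating the eight arms as roughly equal

print(f"central brain: {central_brain:,.0f}")
print(f"per arm:       {per_arm:,.0f}")
print(f"arm / brain:   {per_arm / central_brain:.2f}")  # ~0.25, i.e. ~1/4
```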

Most likely, it feels completely different from anything we're capable of imagining. Godfrey-Smith doesn’t discuss the question, and there are some philosophers, such as Daniel Dennett, who would deny that it even makes sense. Nevertheless, there are two main points of reference that I can think of that humans might use to imagine something of what this would feel like.

First, we humans have what is sometimes called a "gut brain": ~500 million neurons in our digestive tracts, about as many as in a cat's brain, that control our digestive process. What does it feel like to have this brain? Well, hunger signals, the gag reflex, and some emotions (via the limbic system) are controlled by this brain, so roughly, it feels like what those mental states feel like. Perhaps the octopus has similar signals that come from its arm brains. These likely feel nothing like the signals coming from our gut brain, and would of course have different functions: while our gut brain sends us hunger signals or disgust reflexes, the octopus's arm signals might function something like our proprioceptive sense, informing the "main brain" about the location of the arms, and maybe also relaying touch and taste/smell information from the respective sense organs located on the arms.

Or perhaps the arm brains don't just relay sensory information from the arms; perhaps they also play a part in controlling the arms' movements (Godfrey-Smith posits that this is the main reason why the octopus's nervous system is so distributed). Sticking with the gut-brain/arm-brain analogy for the moment, what does it feel like for the gut-brain to control the stomach? That is, what does it feel like to digest food? ... Often like nothing at all. We sometimes don't even notice digestion occurring, and we certainly don't have any detailed sensation of what's going on when it does. So perhaps the octopus just tells its arms where to go in broad strokes, and they take it from there, similar to how our gut-brain just "takes it from there."

The idea that the arm brains control the movement of the arms also brings me to my second comparison: split-brain syndrome. Split-brain syndrome results when the corpus callosum is surgically severed (usually as a treatment for epilepsy). After this operation, patients can function mostly normally, but experiments show that the two hemispheres of their brain, and thus the two sides of their body, function somewhat independently of each other. For example:

a patient with split brain is shown a picture of a chicken foot and a snowy field in separate visual fields and asked to choose from a list of words the best association with the pictures. The patient would choose a chicken to associate with the chicken foot and a shovel to associate with the snow; however, when asked to reason why the patient chose the shovel, the response would relate to the chicken (e.g. "the shovel is for cleaning out the chicken coop"). (source) (original source)

Notoriously, the two sides can sometimes get into conflict, as when one split-brain patient violently shook his wife with his left hand while trying to stop himself with his right hand.

What does it feel like for that to happen? Well, I don't actually know. But I wonder, does this person feel touch sensations from their left hand? If so, are these produced by the same processes that patch over our blind spot, or are they actual touch sensations? Suppose you put a split-brain patient’s left hand behind an opaque barrier; would they be able to tell when something touched it?

Assuming that the answer to this question is "no," another possibility for what having "arm-brains" could feel like is "basically nothing": just as the split-brain patient only knows what their left side is doing through external cues, like seeing it move, and doesn't have any control over it, so too with the octopus's arms. It doesn't really "feel" what's going on in its arms, the arms themselves know what's going on and that's enough.

III. An exercise in updating on surprising facts

[Note: this section is largely recycled from one of my shortform posts [LW(p) · GW(p)]; if you’ve already read that, this will be mostly redundant. If you haven’t read that, simply read on.]

I confess, the purpose of the true-false test at the beginning of this review was largely to disguise one question in particular, so that the mere asking of it didn’t provide Bayesian evidence that the answer should be surprising: "6.) Octopuses can recognize individual humans." Take a moment to look back at your answer to that question. What does your model say about whether octopuses should be able to recognize individual humans? Why can humans recognize other individual humans, and what does that say about whether octopuses should be able to?

...

...

...

...

...

...

As it turns out, octopuses can recognize individual humans. For example, in the book it's mentioned that at one lab, one of the octopuses had a habit of squirting jets of water at one particular researcher. Take a moment to let it sink in [LW · GW] how surprising this is: octopuses, which 1.) are mostly nonsocial animals, 2.) have a completely different nervous system structure that evolved on a completely different branch of the tree of life, and 3.) have no evolutionary history of interaction with humans, can recognize individual humans, and differentiate them from other humans. I'm pretty sure humans have a hard time differentiating between individual octopuses.

And since things are not inherently surprising [? · GW], only surprising to models [LW · GW], this means my world model (and yours, if you were surprised by this) needs to be updated. First, generate a couple of updates you might make to your model after finding this out. I'll wait...

...

...

...

...

...

...

...

Now that you've done that, here's what I came up with:

(1) Perhaps the ability to recognize individuals isn't as tied to being a social animal as I had thought.

(2) Perhaps humans are easier to tell apart than I thought (i.e. humans have more distinguishing features, or these distinguishing features are larger/more visually noticeable, etc., than I thought).

(3) Perhaps the ability to distinguish individual humans doesn't require a specific psychological module, as I had thought, but rather falls out of a more general ability to distinguish objects from each other (Godfrey-Smith mentions this possibility in the book).

(4) Perhaps I'm overimagining how fine-grained the octopus's ability to distinguish humans is. I.e. maybe that person was the only one in the lab with a particular hair color or something, and they can't distinguish the rest of the people. (Though note, another example given in the book was that one octopus liked to squirt new people, people it hadn't seen regularly in the lab before. This wouldn't mesh very well with the "octopuses can only make coarse-grained distinctions between people" hypothesis.)

To be clear, those were my first thoughts; I don't think all of them are correct. As per my shortform post [LW(p) · GW(p)] about this, I'm mostly leaning towards answer (2) being the correct update -- maybe the reason octopuses can recognize humans but not the other way around is mostly because individual humans are just more visually distinct from each other than individual octopuses, in that humans have a wider array of distinguishing features or these features are larger or otherwise easier to notice. But of course, these answers are neither mutually exclusive nor exhaustive. For example, I think answer (3) also probably has something to do with it. I suspect that humans probably have a specific module for recognizing humans, but it seems clear that octopuses couldn't have such a module, so it must not be strictly necessary in order to tell humans apart. Maybe a general object-recognizing capability plus however visually distinct humans are from each other is enough.[^2]
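
To make the updating exercise concrete, here's a toy Bayesian calculation over the four hypotheses above, treating them as mutually exclusive purely for simplicity even though (as noted) they aren't. The priors and likelihoods are numbers I made up; only the mechanics are the point. The observation conditioned on is the book's second anecdote, that one octopus singled out new people.

```python
# Toy Bayes update over the four candidate explanations above. Priors and
# likelihoods are invented; the hypotheses are treated as mutually exclusive
# purely for simplicity.
priors = {
    "(1) recognition not tied to sociality": 0.25,
    "(2) humans unusually easy to tell apart": 0.25,
    "(3) general object discrimination suffices": 0.25,
    "(4) only coarse-grained distinctions": 0.25,
}
# P(an octopus also singles out *new* people | hypothesis) -- the observation
# that cuts against the coarse-grained story.
likelihood = {
    "(1) recognition not tied to sociality": 0.6,
    "(2) humans unusually easy to tell apart": 0.6,
    "(3) general object discrimination suffices": 0.6,
    "(4) only coarse-grained distinctions": 0.1,
}
evidence = sum(priors[h] * likelihood[h] for h in priors)
for h in priors:
    # Posterior = prior * likelihood / total evidence
    print(f"{priors[h] * likelihood[h] / evidence:.2f}  {h}")
```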

I'd also love to hear in the comments what updates other people had from this.

IV. Animal consciousness

Something else from the book that I found interesting concerns animal consciousness/subjective experience. I suspect this is old hat for those who have done any significant research into animal suffering, but it added a couple more gears to my model of animal consciousness, so I'll share it here for those whose models were similarly gear-less. Remember blindsight (where people who are blind due to damage in their visual cortex can perform better than chance at vision tasks, because the rest of their brain still gets visual information, even though they don’t have access to it consciously)? A pair of vision scientists (Milner and Goodale) believe, roughly, that that's what's going on in frogs all the time. What convinced them of this is an experiment performed by David Ingle in which he was able to surgically reverse some, but not all, of the visual abilities of some froggy test subjects. Namely, when his frogs saw a fly in one side of their visual field, they would snap as if it were on the other, but they were able to go around barriers perfectly normally. Milner and Goodale take this as evidence that the frog doesn't have an integrated visual experience at all. They write:

So what did these rewired frogs "see"? There is no sensible answer to this. The question only makes sense if you believe that the brain has a single visual representation of the outside world that governs all of an animal's behavior. Ingle's experiments reveal that this cannot possibly be true. (Milner and Goodale 2005, qtd. in Peter Godfrey-Smith, Other Minds, 2016, p. 80)

Godfrey-Smith then goes on to discuss Milner and Goodale's view:

Once you accept that a frog does not have a unified representation of the world, and instead has a number of separate streams that handle different kinds of sensing, there is no need to ask what the frog sees: in Milner and Goodale's words, "the puzzle disappears." Perhaps one puzzle disappears, but another is raised. What does it feel like to be a frog perceiving the world in this situation? I think Milner and Goodale are suggesting that it feels like nothing. There is no experience here because the machinery of vision in frogs is not doing the sorts of things it does in us that give rise to subjective experience. (Godfrey-Smith, pp. 89-90)[^3]

Though he doesn't mention it, there seems to me to be an obvious reply here: the phenomenon of blindsight reveals that there are parts of our visual processing that don't feel like anything to us (or perhaps, as Godfrey-Smith prefers, they feel like something, just not like vision), but this clearly doesn't change the fact that we (most of us) do have visual experience. Why couldn't something similar be going on in the frogs? They have a visual field, but they also have other visual processing going on which doesn't make it into their visual field.

Let me try to explain this thought a bit better. One thing that the human blindsight subject described in Other Minds (known as "DF") was able to do was put letters through a mail-slot placed at different angles. Now those of us who have normal sight presumably still do all the same processing as those with blindsight, plus some extra. So, imagine someone performing brain surgery on a person with normal vision, which affected whatever brain circuitry allows people to align a letter at the correct angle to get it through a mail-slot. At the risk of arguing based on evidence I haven't seen yet [LW · GW], one way I could imagine the scenario playing out is the following: this person would wake up and find that, though their visual experience was the same as before, for some reason they couldn't manage to fit letters through mail-slots anymore. They would experience this in a similar way as someone with exceptionally poor balance experiences their inability to walk a tightrope--it's not as though they can't see where the rope is, they just can't manage to put their feet in the right place to stay on it. I'd guess that the same thing would happen for the person, and that the same thing is happening for the frog. Respectively: it's not as though the person can't see where the mail-slot is, they just can't manage to get the letter through it, and it's not as though the frog can't see where the fly is, it just can't seem to get its tongue to move in the right direction to catch it.[^4]

In any case, even if we discount this argument, does Milner and Goodale's argument amount to an argument that most animals don't have inner lives, and in particular that they don't feel pain?

Not so, Godfrey-Smith wants to argue. He includes some discussion of various theories of consciousness/subjective experience and how early or late it arose,[^5] but what interested me was an experiment that tried to test whether an animal, in this case a zebrafish, actually feels pain, or is only performing instinctive behaviors that look to us like pain.

The experiment goes like this: There are two environments, A and B, and the fish is known to prefer A to B. The experimenter injects the fish with a chemical thought to be painful. Then, the experimenter dissolves painkiller in environment B, and lets the fish choose again which environment it prefers. With the painkiller and the painful chemical, the fish prefers environment B (though with the painful chemical and no painkiller, it still prefers A). The fish seems to be choosing environment B in order to relieve its pain, and this isn't the kind of situation that the fish could have an evolved reflex to react to. Since the fish is behaving as we would expect it to if it felt pain and the opposite of how we would expect it to if it didn't feel pain, and a reflex can't be the explanation, this is evidence that the fish feels pain, rather than simply seeming to feel pain.
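
To make the inference pattern explicit, here's a minimal sketch of the experimental logic as I understand it from the book; the condition labels are my own paraphrase, not the experimenters' terminology.

```python
# Each condition: (was the fish given the painful injection?, is painkiller dissolved in B?)
# The observed preferences are as described in the book; the labels are my paraphrase.
observed_choice = {
    ("no injection", "no painkiller in B"): "A",        # baseline: the fish prefers A
    ("painful injection", "no painkiller in B"): "A",   # pain alone doesn't flip the preference
    ("painful injection", "painkiller in B"): "B",      # the fish gives up its preferred tank for relief
}

# A hard-wired reflex can't plausibly encode "swim toward dissolved painkiller,"
# so the reflex-only hypothesis predicts the same choice whether or not relief is
# available in B. The felt-pain hypothesis predicts exactly the observed switch.
for condition, choice in observed_choice.items():
    print(condition, "->", choice)
```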

What excited me about this was the idea that we could use experiments to tell something about the inner lives of animals. Even though I've been thoroughly disabused of the idea of a philosophical zombie,[^6] I still had the idea that subjective experience is something that can't really be tested "from the outside." Reading about these experiments made me much more optimistic that experiments could be useful to help determine whether and which animals are moral patients.

V. Aging

Another fact that might surprise (and perhaps sadden) you: octopuses, for the most part, only live about 2 years. One might think that intelligence is most advantageous when you live long enough to benefit from the things you learn with it. Nevertheless, octopuses only live about 2 years. Why is this? Godfrey-Smith posits that octopuses evolved intelligence not for the benefits of long-term learning, but simply to control their highly amorphous bodies. Since an octopus’s body can move so freely, it takes a very large nervous system to control it, and this is what gave rise to whatever intelligence octopuses possess. Even so, once they had intelligence, shouldn’t this have caused selection pressure towards longer lives? I’m still confused on this count, but this does lead us to another question: why do most living organisms age in the first place? There are organisms that don’t, at least on the timescales we’ve observed them on so far, so why are there any that do? What evolutionary benefit does aging provide, or could it provide? One would think that aging, at least once an organism had reached maturity, would be strictly disadvantageous and thus selected against, so why do we mostly observe organisms that age and die?

Godfrey-Smith surveys several standard theories, but the one he presents as most likely to be correct (originated by Peter Medawar and George Williams) is as follows. Imagine an organism that didn’t age; once it reached its prime, it remained that way, able to survive and reproduce indefinitely until it died of e.g. predation, disease, a falling rock, or some other external cause, all of which I’ll call “accidental death.” If we assume the average probability of dying by accidental death is constant each year, then the organism’s probability of surviving to age n decreases as n increases. Thus, for large enough n, this survival probability approaches 0, meaning that there is some age N which the organism is almost certain to die before reaching, even without aging. Now imagine that the organism has a mutation with effects that are positive before age N, but negative after age N. Such a mutation would have almost no selection pressure against it, since the organism would almost certainly die of accidental death before its negative effects could manifest. Thus, such mutations could accumulate, and the few organisms that did survive to age N would start to show those negative effects.
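
Here is a toy numerical illustration of the point (mine, not the book's): with a constant yearly chance of accidental death, invented purely for illustration, the fraction of organisms still alive at age n, and hence the selection pressure against costs that only appear at age n, collapses within a few decades.

```python
# Toy illustration of the Medawar/Williams "selection shadow"; the 10% yearly
# accidental-death rate is made up purely to show the shape of the effect.
p_accident = 0.1

def survival_probability(n: int) -> float:
    """Chance a non-aging organism survives n years of constant accidental risk."""
    return (1 - p_accident) ** n

for n in (1, 5, 10, 20, 50, 100):
    # Selection against a mutation whose costs only appear after age n is roughly
    # proportional to the fraction of the population still alive at age n.
    print(f"P(survive to age {n:>3}) = {survival_probability(n):.4f}")
```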

The truth is more general than that. In general, as the probability of surviving to age n gets lower, so does the selection pressure against any mutation whose negative effects only appear after age n. This theory predicts that organisms should exhibit a slow and steady accumulation of negative symptoms caused by mutations whose negative side effects only show up later, and an age which almost no individuals survive beyond, which is what we in fact observe.

Still though, why should there be any positive pressure towards these mutations, even if there’s little pressure against them? Because, as I mentioned, at least some of these mutations might have positive effects that show up earlier bound up with the negative effects that show up later. This positive selection pressure, combined with the reduced negative selection pressure due to their negative effects only showing up late, after most with the mutation have already died due to accidental death, is enough to get these mutations to fixation. Godfrey-Smith uses the analogy, originally due to George Williams, of putting money in a savings account to be accessed when you’re 120 years old. You’ll almost certainly be dead by then, so it’s rather pointless to save for that far off. In the same way, it’s evolutionarily pointless for organisms to pass up mutations that have positive effects now and negative effects later when those negative effects only show up after the animal is almost certain to be dead by accidental death. So organisms take those mutations, and most do not survive to pay the price; aging is what happens to those who do.
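
And a companion sketch for the positive side of the argument, again with invented numbers rather than anything from the book: a mutation that slightly boosts reproduction early but imposes a large cost late can still come out ahead in expected lifetime offspring, because almost nobody lives long enough to pay the cost.

```python
# Toy expected-offspring comparison for an "early benefit, late cost" mutation,
# in the spirit of Williams's savings-account analogy. All numbers are invented.
p_accident = 0.1            # constant yearly accidental-death rate (assumed)
maturity, cost_age = 2, 30  # reproduce from age 2; the mutation's cost kicks in at 30
horizon = 200

def expected_offspring(benefit: float, cost: float) -> float:
    total, alive = 0.0, 1.0
    for age in range(1, horizon):
        alive *= (1 - p_accident)          # probability of surviving to this age
        if age >= maturity:
            per_year = 1.0 + benefit       # small reproductive boost early on...
            if age >= cost_age:
                per_year -= cost           # ...paid for by a penalty most never live to see
            total += alive * max(per_year, 0.0)
    return total

print("without mutation:", round(expected_offspring(0.0, 0.0), 2))   # ~8.1
print("with mutation:   ", round(expected_offspring(0.05, 0.5), 2))  # ~8.29 -- the mutation wins
```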

If this is the correct evolutionary account of why aging occurs, it has an interesting implication for anti-aging research: there might be certain routes to eliminating aging that come with unforeseen downsides. If we were to eliminate aging by finding the genes that produce these negative side effects and turning them off (please forgive my utter ignorance of genetics and the science of aging), this could also rob us of whatever benefits those genes provided earlier in life that caused them to be adopted in the first place. This is not to say that we should not pursue anti-aging research (in fact I’m strongly in favor of it), but just that we should be on the lookout for this kind of trap, and avoid it if we can.


Appendix: Answers to True-False Questions

  1. Octopuses can squirt jets of ink as an escape tactic. True
  2. Octopuses have color vision. False
  3. Octopuses have bilateral symmetry. True
  4. Octopuses can camouflage themselves nearly perfectly by changing the color and texture of their skin to match whatever surface or object they are trying to blend into. True
  5. Octopuses can fit through any hole or gap bigger than their eye. True
  6. Octopuses can recognize individual humans. True
  7. Most octopus species live for more than 20 years. False
  8. Octopuses are mostly solitary animals. True
  9. Octopuses have been known to use shards of glass from shattered bottles on the seafloor as weapons to fight other octopuses. As far as I know, false

Notes:

[^1]: To give a sense of the relationship between the octopus's central brain and its arms, here are some quotes from the book:

How does an octopus's brain relate to its arms? Early work, looking at both behavior and anatomy, gave the impression that the arms enjoyed considerable independence. The channel of nerves that leads from each arm back to the central brain seemed pretty slim. Some behavioral studies gave the impression that octopuses did not even track where their own arms might be. As Roger Hanlon and John Messenger put it in their book Cephalopod Behavior, the arms seemed "curiously divorced" from the brain, at least in the control of basic motions. (67)

Some sort of mixture of localized and top-down control might be operating. The best experimental work I know that bears on this topic comes out of Binyamin Hochner's laboratory at the Hebrew University of Jerusalem. A 2011 paper by Tamar Gutnick, Ruth Byrne, and Michael Kuba, along with Hochner, described a very clever experiment. They asked whether an octopus could learn to guide a single arm along a maze-like path to a specific place in order to obtain food. The task was set up in such a way that the arm's own chemical sensors would not suffice to guide it to the food; the arm would have to leave the water at one point to reach the target location. But the maze walls were transparent, so the target location could be seen. The octopus would have to guide an arm through the maze with its eyes. It took a long while for the octopuses to learn to do this, but in the end, nearly all of the octopuses that were tested succeeded. The eyes can guide the arms. At the same time, the paper also noted that when the octopuses are doing well with this task, the arm that's finding the food appears to do its own local exploration as it goes, crawling and feeling around. So it seems that two forms of control are working in tandem: there is central control of the arm's overall path, via the eyes, combined with a fine-tuning of the search by the arm itself. (68-69)

[^2]: So why do I still think humans have a specific module for it? Here’s one possible reason: I'm guessing octopuses can't recognize human faces--they probably use other cues, though nothing in the book speaks to this one way or the other. If that's the case, then it might be true both that a general object-differentiating capability is enough to recognize individual humans, but that to recognize faces requires a specific module. If I found out that octopuses could recognize human faces specifically, not just individual humans by other means than face-recognition, I would strongly update in favor of humans having no specific face- or other-person-recognition module. In the same vein, the fact that people can lose the ability to recognize faces without it affecting any other visual capacities (known as prosopagnosia or "face-blindness") suggests that a single module is responsible for that ability.

[^3]: After reading Daniel Dennett’s Consciousness Explained, it’s actually not at all clear to me why Godfrey-Smith interprets Milner and Goodale this way. It seems more natural to suppose that they’re suggesting something similar to Dennett’s denial of the “Cartesian Theater” (the idea that there is somewhere where “it all comes together” in the brain, in some sort of “inner movie” to use Chalmers’ phrase) and his replacement, the “Multiple Drafts Model” (which I don’t feel confident to summarize here).

[^4]: Another way this might play out is if the frog saw the fly and only the fly as reversed in its visual field, rather like a hallucination. I don't see any reason why that would be impossible.

[^5]: Godfrey-Smith actually makes a distinction between "subjective experience" and "consciousness." The way Godfrey-Smith uses these words, when we say that something has "subjective experience," we're just saying that there is something that it feels like to be that thing, while the claim that something has "consciousness" is in some unspecified way stronger. So consciousness is a subset of subjective experience. He speculates that subjective experience arose fairly early, in the form of things like hunger signals and pain, while consciousness arose later and involves things like memory, a "global workspace," integrated experience, etc.

[^6]: See Dennett, Intuition Pumps and Other Tools for Thinking, Ch. 55 “Zombies and Zimboes,” and Eliezer Yudkowsky’s essay “Zombies! Zombies?”

18 comments

comment by clone of saturn · 2018-12-04T21:13:17.577Z · LW(p) · GW(p)

You can experience this form of "blindsight" for yourself with a simple experiment. Try standing on one foot for 30 seconds. Pretty easy, right? Do you need to be looking at anything in particular? Now try doing the same thing with your eyes closed.

Replies from: SaidAchmiz, Ikaxas
comment by Said Achmiz (SaidAchmiz) · 2018-12-04T23:12:04.216Z · LW(p) · GW(p)

I tried it. It was quite a bit harder with my eyes closed. Fascinating! (I tried both right and left foot for both conditions, with the same results.)

Replies from: Benito
comment by Ben Pace (Benito) · 2018-12-05T01:22:48.941Z · LW(p) · GW(p)

Wow that was so much harder than I expected. My sense of balance was going haywire! I lasted 25 seconds then had to put my other foot down..

comment by Vaughn Papenhausen (Ikaxas) · 2018-12-08T16:46:45.659Z · LW(p) · GW(p)

Thanks! This was quite interesting to try. Just to make it more explicit, your point is supposed to be that there's a form of visual processing going on that doesn't "feel like anything" to us, right?

Replies from: clone of saturn
comment by clone of saturn · 2018-12-08T22:33:03.508Z · LW(p) · GW(p)

Right.

comment by Richard_Kennaway · 2018-12-05T09:49:12.356Z · LW(p) · GW(p)

The fish seems to be choosing environment B in order to relieve its pain

Given that a child can build a Lego robot that will avoid light, or loud sounds, or whatever else it has a sensor for, it's not clear why this behaviour in a fish is evidence of pain qualia when we don't take it to be so in a robot.

Replies from: Ikaxas, Leafcraft
comment by Vaughn Papenhausen (Ikaxas) · 2018-12-08T17:08:05.349Z · LW(p) · GW(p)

Because, unlike the robot, the cognitive architectures producing the observed behavior (alleviating a pain) are likely to be similar to those producing the similar behavior in us (since evolution is likely to have reused the same cognitive architecture in us and in the fish), and we know that whatever cognitive architecture produces that behavior in us produces a pain quale. The worry was supposed to be that perhaps the underlying cognitive architecture is more like a reflex than like a conscious experience, but the way the experiment was set up precluded that, since it's highly unlikely that a fish would have a reflex built in for this specific situation (unlike, say, the situation of pulling away from a hot object or a sharp object, which could be an unconscious reflex in other animals).

comment by Leafcraft · 2018-12-05T14:03:04.832Z · LW(p) · GW(p)

You are correct. There is no known experiment that can conclusively prove the existence of qualia in other minds (as far as I know). All this proves is that the fish can feel pain (which we already know from neurophysiological research), not that it can experience it.

Although the experience of pain is almost inevitable in every large enough evoluture (from a theoretical point of view).

comment by justinpombrio · 2019-02-03T17:02:22.295Z · LW(p) · GW(p)

I’d also love to hear in the comments what updates other people had from this.

My (distilled and cleaned up) thinking was as follows:

  1. Humans recognize each other mostly by face. I know this because people with face blindness routinely don't recognize people, even if they know them well. I believe that face blindness partially refutes your #3.
  2. Octopuses almost certainly have no particular ability to distinguish human faces. Thus they're probably doing something very different from us.
  3. What are octopuses good at? Mimicking fish and avoiding predators. Maybe they're using some of these skills to recognize humans.
  4. Even so, what are the sensory modalities they might be recognizing us with? Many animals are good with smell, but I presume that doesn't work in the water. Voice, likewise, seems like it might not carry over well. There are a bunch of big visual characteristics: height, skin color, clothing (variable), hair style (may be variable). And gait. I could imagine octopuses being very good at recognizing gait.
comment by Raemon · 2018-12-08T18:04:06.297Z · LW(p) · GW(p)

How confident are we that octopuses can recognize individual humans, as opposed to "a small number of outlier octopuses can do so"?

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2018-12-10T13:27:15.058Z · LW(p) · GW(p)

Good question, I hadn't thought about that. Here's the relevant passage from the book:

In the lab, however, [octopuses] are often quick to get the hang of how life works in their new circumstances. For example, it has long appeared that captive octopuses can recognize and behave differently toward individual human keepers. Stories of this kind have been coming out of different labs for years. Initially it all seemed anecdotal. In the same lab in New Zealand that had the "lights-out" problem [an octopus had consistently been squirting jets of water at the light fixtures to short circuit them], an octopus took a dislike to one member of the lab staff, for no obvious reason, and whenever that person passed by on the walkway behind the tank she received a jet of half a gallon of water in the back of her neck. Shelley Adamo, of Dalhousie University, had one cuttlefish who reliably squirted streams of water at all new visitors to the lab, and not at people who were often around. In 2010, an experiment confirmed that giant Pacific octopuses can indeed recognize individual humans, and can do this even when the humans are wearing identical uniforms. (56)

On the one hand, if "stories of this kind have been coming out of different labs for years," this suggests these may not exactly be isolated incidents (though of course it kind of depends on how many stories). On the other hand, the book only gives two concrete examples. I went back and checked the 2010 study as well. It looks like they studied 8 octopuses, 4 larger and 4 smaller (with one human always feeding and one human always being irritating towards each octopus), so that's not exactly a whole lot of data; the most suggestive result, I'd say, is that on the last day, 7 of the 8 octopuses didn't aim their funnels/water jets at their feeder, while 6/8 did aim them at their irritator. On the other hand, a different metric, respiration rate, showed a statistically significant difference in the 4 large octopuses but not in the 4 smaller ones.

Also found a couple of other studies that may be relevant to varying degrees by looking up ones that cited the 2010 study, but haven't had a chance to read them:

  • https://link.springer.com/chapter/10.1007/978-94-007-7414-8_19 (talks about octopuses recognizing other octopuses)
  • https://journals.sagepub.com/doi/abs/10.1177/0539018418785485
  • https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0018710 (octopuses recognizing other octopuses)

tl;dr: I'm not really sure. Most of the evidence seems to be anecdotal, but the one study does suggest that most of them probably can to some degree, if you expect those 8 octopuses to be representative.

comment by Craig Hunter (craig-hunter) · 2018-12-04T01:05:04.155Z · LW(p) · GW(p)

Cephalopod axon signal velocity is low, which is a good reason to have a distributed nervous system. They have no myelin sheath.

Without color vision, how do they match their skin to the background color?

Replies from: gwern, TheWakalix, Ikaxas
comment by gwern · 2018-12-04T03:44:59.162Z · LW(p) · GW(p)

That's a good question, and the answer may be that they don't have color vision in any normal sense; what they have is the ability to use chromatic aberration to focus their eyes for various colors, and this serial focusing scan lets them decide how to adjust their skin to match surroundings: "Spectral discrimination in color blind animals via chromatic aberration and pupil shape", Stubbs & Stubbs 2016.

Should this be considered color vision? It seems safe to say that whatever the qualia of scanning chromatic-aberration vision would be, it would be very different from our simultaneous realtime trichromatic color vision. And it's worth noting that under the Stubbs model, to explain why behavioral assays (not just the finding that they only have one kind of photoreceptor) found them to be color blind, there's a lot of things they can't do with the chromatic aberration trick:

Second, some behavioral experiments (7–11) designed to test for color vision in cephalopods produced negative results by using standard tests of color vision to evaluate the animal’s ability to distinguish between two or more adjacent colors of equal brightness. This adjacent color comparison is an inappropriate test for our model (Fig. 4R). Tests using rapidly vibrating (8, 9) color cues are also inappropriate. Although these dynamical experiments are effective tests for conventional color vision, they would fail to detect spectral discrimination under our model, because it is difficult to measure differential contrast on vibrating objects. These results corroborate the morphological and genetic evidence: any ability in these organisms for spectral discrimination is not enabled by spectrally diverse photoreceptor types

...In our proposed mechanism, cephalopods cannot gain spectral information from a flat-field background or an edge between two abutting colors of comparable intensity (Fig. 3). This phenomenology would explain why optomotor assays and camouflage experiments using abutting colored substrates (7, 9, 11) fail to elicit a response different from a flat-field background. Similarly, experiments (10) with monochromatic light projected onto a large uniform reflector or training experiments (8, 9) with rapidly vibrating colored cues would defeat a determination of chromatic defocus

...We predict that the animals will fail to match flat-field backgrounds with no spatial structure as previously shown in figure 3B in the work by Mäthger et al. (7) just as a photographer could not determine best focus when imaging a screen with no fine-scale spatial structure. If, for instance, their ability to spectrally match backgrounds was conferred by the skin or another potential unknown mechanism, they would successfully match on flat-field backgrounds. However, under our model, they should succeed when there is a spatial structure allowing for the calculation of chromatically induced defocus, such as in our test patterns (Fig. 4) or the more naturally textured backgrounds by Kühn (21). If, however, cephalopods truly cannot accurately match their background color but solely use luminance and achromatic contrast to determine camouflage, we would expect the response on colored substrates to be identical to that on a gray substrate of similar apparent brightness with identical spatial structure.

comment by TheWakalix · 2018-12-04T03:09:05.363Z · LW(p) · GW(p)

A quick Google search suggests that they see color without having normal color receptors. They do this by exploiting chromatic aberration (diffraction depends on wavelength, resulting in different angles for different colors).

comment by Vaughn Papenhausen (Ikaxas) · 2018-12-08T16:57:17.726Z · LW(p) · GW(p)

The answer given in the book is that, as it turns out, they have color receptors in their skin. The book notes that this is only a partial answer, because they still have only one kind of color receptor in their skin, which still doesn't allow for color vision, so this doesn't fully solve the puzzle, but Godfrey-Smith speculates that perhaps the combination of one color receptor with color-changing cells in front of the color receptor allows them to gain some information about the color of things around them (121-123).

comment by Ninety-Three · 2018-12-11T23:50:16.303Z · LW(p) · GW(p)

I got all of the octopus questions right (six recalled facts, #6 intuitively plausible, #9 seems rare enough that it should be unlikely for humans to observe, and #2 was uncertain until I completed the others then metagamed that a 7/2 split would be "too unbalanced" for a handcrafted test) so the only surprising fact I have to update on is that the recognition thing is surprising to others. My model was that many wild animals are capable of recognizing humans, and octopuses are particularly smart as animals go, no other factors weigh heavily. That octopuses evolved totally separated from humans didn't seem significant because although most wild animals were exposed to humans I see no obvious incentive for most of them to recognize individual humans, so the cases should be comparable on that axis. I also put little weight on octopuses not being social creatures because while there may be social recognition modules, A: animals are able to recognize humans and all of them generalizing their social modules to our species seems intuitively unlikely and B: At some level of intelligence it must be possible to distinguish individuals based on sheer general pattern-recognition, for ten humans an octopus would only need four or five bits of information and animal intelligence in general seems good at distinguishing between a few totally arbitrary bits.

The evolutionary theory of aging is interesting and seems to predict that an animal's maximum age will be proportionate to its time-to-accidental-death. Just thinking of animals and their ages at random this seems plausible, but I'm hardly being rigorous; have there been proper analyses done of that?

comment by PeterMcCluskey · 2018-12-04T20:39:26.065Z · LW(p) · GW(p)

Perhaps the ability to recognize individuals isn’t as tied to being a social animal as I had thought

I expect multiple sources of evolutionary pressure for recognizing individuals. E.g. when a human chases an animal to exhaustion, the human needs to track that specific animal even if it disappears into a herd, so as to not make the mistake of chasing an animal that isn't tired.