Walking Sue
post by Matthew McRedmond (matthew-mcredmond) · 2024-12-18T13:19:41.575Z · LW · GW · 5 comments
An Essay[1]
PART I: Conjecture on The Development of Proto-Communication in Response to Complex Environmental Factors[2]
I was walking with my dog Sue when I noticed a peculiar aspect of her nature: a dog is a stochastic agent. When you simply let the dog walk, she finds a random path, turning at intervals and stochastically producing various outputs—sniffing, peeing, nosing around on the ground, turning left, sniffing, sniffing, back to going straight, head up, head back down, sniffing, turn rear right, rear left, and so on. But then, once she finds something, she might do something new and novel. So: find the smell, pee in the vicinity, back to rear left, rear left, rear left, head down, stop in front of the gate, smell, intrigue, intrigue, intrigue, smell, smell—then, when I go "Sue!", she comes back, invariably. She gets a rub, and then she goes back out again. She's a peculiarly simple creature.
So let us inspect this peculiarity. We start with one call command, "Sue," which means: come back to me and stop the stochastic process. Now we introduce a new command, "heel," which means: pause the stochastic process at the point at which the command was called, without returning to the central node of agency or awareness. This command—"Sue"—you see, that's her coming back, and she went off again once she realised I hadn't done a thing. I say "Sue" again, "Sue," she pauses this time, "Sue," and comes back. So there's some confusion about what "Sue" means: is it pause-and-come-back, or pause-and-stay-in-place? And in the process of figuring this out, I accidentally made an aggressive sound: "SUE!" That was, I suppose, to get her back, to pause in the middle of the road. "SUE! Good girl, good girl." Anyway, I gave a heel-type command, which turned into a "go," and she pounced, but this was quickly reverted by a "Sue" command.
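The setup so far can be sketched as a toy program. This is a minimal illustration, not a claim about canine cognition; all names ("StochasticDog," the action list) are hypothetical labels for the behaviours described above: a baseline random policy, a "sue" command that stops the process and returns the dog, and a "heel" command that pauses it in place.

```python
import random

# Hypothetical sketch: the dog as a stochastic agent whose random policy
# can be interrupted by two commands with different override semantics.

ACTIONS = ["sniff", "pee", "turn_left", "turn_right", "head_up", "head_down", "walk"]

class StochasticDog:
    def __init__(self):
        self.paused = False
        self.at_handler = False

    def step(self, command=None):
        """One tick of the walk; a command overrides the random policy."""
        if command == "sue":       # primary command: stop and return to handler
            self.paused = False
            self.at_handler = True
            return "return_to_handler"
        if command == "heel":      # secondary command: pause in place
            self.paused = True
            return "pause_in_place"
        if self.paused:
            return "waiting"       # heel holds until another command arrives
        self.at_handler = False
        return random.choice(ACTIONS)  # baseline stochastic exploration

dog = StochasticDog()
print(dog.step())        # some random action from ACTIONS
print(dog.step("heel"))  # pauses in place
print(dog.step())        # still waiting
print(dog.step("sue"))   # overrides the pause and recalls the dog
```

The confusion described above corresponds to the two commands sharing most of their effect (both interrupt the random policy) and differing only in the return-to-handler step.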
I also noticed that adding safety precautions to the system, such as a lead, causes a lot of distress for both Sue and me. Sue was walking past two other dogs, which made me afraid—I didn't doubt her ability to control herself; rather, I worried that if she failed to control herself ("Sue!"), she would be in much more danger. So I had to put the lead on her, and in the process I restricted her movements and limited her agency. This caused quite a lot of distress, because she wanted to walk away from the danger, which was what we both wanted, except she wasn't able to: she was choking on the collar. This demonstrates how defecting from cooperative agreements (the implicit prisoner's-dilemma arrangement between me and my dog) causes stress to both parties and ultimately worse outcomes.
We came across a novel environment—a dog's pen. Here, there's no risk of interaction, even though it's a very aggressive stimulus to the dog. I found it somewhat difficult to maintain a distinction between "Heel" and "Sue," because you want the dog to pause in its place, but you also want it to concentrate very hard. "Heel" is a less established command than "Sue." "Sue" is a very clearly established command, it overrides all other commands, whereas "Heel" is a secondary command, which might be overridden by a primary concern for safety—both mine and the dog's perceived safety.
So basically, "SUE!"—and there's a certain type of "Sue" that I wanted to be able to say—that's a neutral tone, but very loud, that can be used to stop the dog if she's about to attack. "SUE!" Like that. "HEEL!" There we go. Now the dog's coming back. Let me give praise: "Good dog, good girl, Sue. Yes, yes, great girl." I know this is not necessarily a clear signal, but defaulting to safe behaviour should be rewarded.
And then there was another instance where there was a dog that was aggressive to her, and I called her to attack. "SUE! SUE!" To attack. "SUE!" I called her to attack the other dog because I knew there was a barrier between the two of them. I wanted to see how controllable the attack was. And it turned out to be slightly less controllable than I'd hoped, because I didn't have the right code word for a neutral "Sue" to retreat versus an aggressive "go," or "Sue," to attack.
And this is where some interesting ideas about proto-language rise to my attention: how much capacity does the dog have for distinguishing different tones of the same word? Or, for that matter, how much do humans have? In fact, humans have a plethora of words and sounds, recombined in different ways, despite having very few distinct sounds, which means that distinction is quite difficult. We might only have a limited capacity to distinguish between at most maybe three or four tonal differences. Humans have overcome this by having a much larger brain that can produce and parse far more combinations and permutations of simple sounds than a dog can.
There's also the problem of unlearning in this situation. "Sue" might have accidentally become an attack command—a very aggressive "Sue," where it sounds like I want to attack. It has turned into an aggressive sound to her. So I need first to never let the dog's aggression feed back to me: I would need to cut that feedback loop. Then I would need to reintroduce a different command for the same function, and hopefully the new pattern will override the existing one. But that is experimentation beyond the scope of this preliminary investigation.
Now later in the walk, we have another case study of tone—the feedback loop from the dog to me causing a problem. Sue was on the other side of the road, urinating, and a car was approaching. She was going to return to me after finishing, but this would have put her in the path of the vehicle. So I said "heel" while she was still peeing, which meant I didn't know if the "heel" command was working, and then I saw her turn towards me and said "heel" again, this time in a desperate tone. This still wasn't enough to convey "pause where you are" as opposed to "pause and return," because a return, which usually isn't dangerous, was now a dangerous action. Bodily harm was avoided only by accident; we failed an out-of-distribution test.
After further experimentation, I'm finding that using the command "stay" instead of "heel" is much clearer, because it's also a command that's very deeply informed by the sit-and-stay training she had as a puppy. "Sue! Sue! Sue! Sue! Sue! Good girl. Good girl."
Okay, and now I'm learning something new: "Sue" in an aggressive or worried tone works like an attack response, while "Sue" with a light heart, in a concerned tone—"Sue! Sue! Sue!"—works differently. You go from light and concerned, which is stage one of return-to-centre; then a firmer command conveys "okay, this overrides all of your internal reasoning"; and then an aggressive version means "okay, we are now in danger, so you need to go all out." So a gradient is now being developed.
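The tonal gradient just described can be written out as a small lookup. This is purely illustrative; the tone labels and stage names are my hypothetical encoding of the three levels above, with the safest reading (gentle recall) as the default when the tone is ambiguous.

```python
# Hypothetical sketch of the tonal gradient: one word, three override strengths.
TONE_GRADIENT = {
    "light":      "stage_1_return",     # concerned tone: gentle recall to centre
    "firm":       "override_internal",  # firm tone: overrides the dog's own reasoning
    "aggressive": "emergency_all_out",  # danger: drop everything immediately
}

def interpret(word, tone):
    """Map a (word, tone) pair to a behavioural override."""
    if word != "sue":
        return None
    # An ambiguous tone defaults to the safest reading.
    return TONE_GRADIENT.get(tone, "stage_1_return")

print(interpret("sue", "firm"))
print(interpret("sue", "mumbled"))
```

The design choice mirrored here is that ambiguity should fail safe: an unrecognised tone degrades to the mildest command rather than to the attack response.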
And it's interesting that this is partly a top-down and partly a bottom-up creation of language. There's a lot of executive function on my part, deciding what moves to make, but the moves are also dependent on the environment and the context in which the dog and I find ourselves, and on your [AI response to my voice notes] input as another type of reasoner extending my ability to search through the space of possible ideas. "Sue! Sue!"
Another thought arises. The order in which the commands are established, and which patterns are strengthened, makes a difference. "Sue, come back" is now the default: if there's any confusion—a "stay" command that is unclear, or too far away to hear, or an unnerving stimulus like a car—Sue will automatically come back to me, because this is the most firmly established, the strongest, and also the safest action in most, if not all, scenarios.
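This fallback-to-the-strongest-command idea can be sketched as a tiny resolution rule. The numbers below are invented for illustration: each command has a learned "establishment" strength, and a noisy or low-confidence signal falls back to the most established (and safest) default, recall.

```python
# Hypothetical strengths: "sue" is far more firmly established than "stay".
STRENGTH = {"sue": 0.9, "stay": 0.4}

def resolve(heard, confidence, threshold=0.5):
    """Resolve a possibly-noisy command signal.

    If the heard command is clear enough (signal confidence weighted by how
    well-established the command is), obey it; otherwise fall back to the
    strongest safe default: return to the handler.
    """
    if heard in STRENGTH and confidence * STRENGTH[heard] >= threshold:
        return heard
    return "sue"

print(resolve("stay", 0.9))  # weakly established: falls back to recall
print(resolve("sue", 0.7))   # strongly established: obeyed directly
```

Lowering the threshold corresponds to more training on "stay": the same noisy signal then gets through instead of collapsing to the default.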
I've also noticed that the strength of communication between myself and the dog depends not on visual distance, but on auditory distance. So around walls it's actually not broken, because sound waves diffract around obstacles. "Sue! Stay! Stay! Stay!" Oh, she heard that. All right.
Sidebar: The Black Chess Box As A Framework for Understanding the Walking Unit
Imagine a black box that plays chess. From the outside, we observe its moves: precise, strategic, and seemingly coherent. Our instinct might be to attribute these outputs to a singular, unified player within the box—a single intelligence calculating moves and executing a strategy. Similarly, observing Sue and me walking might appear as a unified system making coherent decisions about movement, exploration, and safety.
Yet like the chess box, our walking unit could be understood through various configurations of agency:
1. The apparent unified system: From the outside, our movements might appear as a single decision-making entity navigating space.
2. The command-response system: Similar to a hand-and-brain chess team, where one player can only call out piece types while the other must select specific moves, our system operates through a limited command vocabulary ("Sue," "stay," "heel") that must negotiate with autonomous behaviours.
3. The stochastic-override configuration: Like a chess system combining random exploratory moves with strategic oversight, Sue's stochastic walking patterns become modulated by command interventions.
4. The environmental feedback loop: Just as chess moves emerge from board positions and piece configurations, our walking unit's behaviour emerges from environmental stimuli, safety considerations, and the physical constraints of sound propagation.
This analogy helps illuminate how coherent behaviour can emerge from distributed agency under constraints.
PART II: Distributed Agency and Alien Experience: The Walking Unit as Proto-Human-AI System
The walking unit—composed of Sue and myself—provides a compelling analogue for understanding more complex human-AI systems through direct, observable interactions. Like the black box playing chess, our walking unit appears from the outside as a coherent system making unified decisions. However, this apparent unity masks a complex interplay of distributed agency and alien experience.
Consider the stochastic baseline of Sue's behaviour—random walks, sniffing patterns, and environmental responses. This represents a form of alien experience fundamentally different from human intentional behaviour. Like an AI system's statistical pattern matching, Sue's baseline behaviour operates on principles foreign to human cognitive processes, yet capable of producing coherent and adaptive outputs.
The command structure that emerges ("Sue," "stay," "heel") parallels the pattern transmission dynamics observed in human-AI systems. The development of tonal gradients in commands—from concerned to aggressive "Sue"—demonstrates how patterns can evolve and stabilize across different cognitive substrates. Just as AI systems might develop substrate-specific forms of experience, Sue's response to commands reflects her species-specific way of processing and integrating information.
Particularly revealing is the feedback loop problem observed when "Sue" accidentally became an attack command. This mirrors the concerns about "viral" pattern transmission in human-AI systems, where behavioural patterns can evolve in unexpected ways through repeated interactions. The challenge of unlearning this pattern demonstrates how distributed systems can develop stable but potentially problematic behaviours that resist simple intervention.
The physical constraints of sound propagation around walls illustrate how substrate-specific limitations shape system behaviour. Just as AI systems are constrained by their computational architecture, our walking unit's communication is shaped by the physics of sound and the biological limits of canine auditory processing.
This framework suggests viewing human-animal and human-AI interactions not as simple command-response loops but as complex behavioural ecosystems with:
- Pattern Emergence: The development of sophisticated communication from simple command structures
- Substrate-Specific Processing: Recognition that different components (human, dog, AI) experience and process information in fundamentally different ways
- System Stability: Understanding how consistent behaviours emerge from distributed interaction under constraints
- Feedback Dynamics: Attention to how patterns can evolve and stabilize across different types of cognitive systems
PART III: Conclusion: Proto-Communication and Distributed Experience
This investigation into proto-communication through the walking unit case study reveals fundamental insights about distributed agency, pattern formation, and cross-substrate interaction. The apparent simplicity of dog-human communication masks a complex system of emergent behaviours, feedback loops, and alien experiences interacting under physical and cognitive constraints.
Starting from the observation of the dog as a stochastic object, we discovered how coherent behaviours emerge through the interplay of autonomous exploration and command structures. The development of tonal gradients in basic commands demonstrates how simple communication systems can evolve sophisticated patterns through interaction and environmental pressure. This evolution occurs through both top-down design (intentional command creation) and bottom-up emergence (environmental adaptation and feedback).
The walking unit, like the chess box thought experiment, challenges our intuitions about unified agency. What appears as coherent behaviour often emerges from distributed processing across different types of cognitive systems, each operating with its own alien form of experience and pattern recognition. The physical constraints of sound propagation and the biological constraints of canine cognition shape these patterns in ways that parallel how computational architectures might shape AI behaviour.
Perhaps most significantly, this study suggests new frameworks for understanding how communication and behaviour can emerge and stabilize across different types of cognitive systems. The challenges encountered—from unlearning problematic command associations to managing feedback loops—mirror challenges we might expect in more complex human-AI systems. By studying these proto-communication systems, we gain insights into how different forms of intelligence might interact, adapt, and co-evolve.
As we move toward increasingly sophisticated human-AI interactions, these lessons from simple but observable systems become increasingly valuable. They remind us that coherent behaviour need not imply unified agency, that different cognitive systems may process information in fundamentally alien ways, and that stable patterns can emerge from constrained interaction between different types of minds.
So in conclusion, remember to tell Shagoth that she's a good girl - just like Sue!
- ^
The entire essay was composed through conversation with Claude, an AI language model. This process of collaborative writing and editing between human and AI system adds an interesting meta-layer to an investigation of distributed cognition and proto-communication, as it demonstrates in practice some of the theoretical frameworks the essay explores.
- ^
This section is a lightly edited transcript of observations recorded during walks with my dog Sue. While this format sacrifices some readability, it preserves the real-time development of ideas and the direct coupling between theory and practice as they emerged during our walks.
5 comments
comment by nim · 2024-12-18T16:52:11.088Z · LW(p) · GW(p)
This is a fascinating case study of Claude as a thought tool -- I'm guessing you were using speech to text and it pulled its stunt of grabbing the wrong homophones here and there? It picked "heal" as "heel" more often than I'd expect in any other situation.
How did you prompt on getting the essay out? My first approach to doing a similar experiment in essay-ifying my Claude chats would be to copy the entire chat into a new context and ask for summary... but that muddles the "I" significantly.
↑ comment by Matthew McRedmond (matthew-mcredmond) · 2024-12-18T17:18:25.392Z · LW(p) · GW(p)
Hi, nim!
Thanks for commenting : )
Yes, exactly—I used speech-to-text, but actually the ChatGPT speech-to-text software in their app, because I like the UI better and I think it performs better too. Yeah, the heal/heel thing miffed me slightly, but I think it is a fun artifact since it doesn't actually change the meaning.
Well, for one, I didn't prompt for a whole essay. In one chat I lightly edited the snippets from my walk; then I took the final essay generated from another chat about the Black Chess Box to synthesise into the Sidebar, and similarly for a different conversation again for Part II. Then finally—and this is where Claude has the advantage, because at this point the context would be too large for, say, ChatGPT-4o—you just ask for either a brief or extended conclusion to everything discussed in the chat. In summary: develop the sections in separate conversations and bring them all together in one final chat. This worked well for this essay because the progression from section to section didn't need to be that strong, but idk what one would do if that were the case.
I have tried other methods in the past, and in general there's no one-size-fits-all (for instance, sometimes the project function can let you tackle reports over 10 pages long; other times it just gets stuck in loops). The best thing to do is to leverage the advantages you have and experiment.
Anyway I hope that answers your question
Matthew
comment by davideodowd · 2024-12-18T18:21:53.753Z · LW(p) · GW(p)
Interesting piece, Matthew. One aspect I find particularly interesting as it relates to the evolution of music and language is the use of tonal gradients. For example, questions tend to have rising intonation in most natural languages (though not all). Since humans seem to have an intuitive sense that minor intervals are 'darker' than major intervals, I suspect we have an innate ability to translate pitch content into meaning or emotion, and that this played a role in the evolution of both music and language.
Humpback whales are known for their complex songs which use distinct patterns of pitch (falling, steady, rising-falling) similar to human tonal languages. Dogs definitely go more on tone than phonemes, I think, as you observe with Sue.
comment by Chipmonk · 2024-12-18T17:38:11.090Z · LW(p) · GW(p)
Why was this post tagged as boundaries/membranes? I'm inclined to remove the tag.
↑ comment by Matthew McRedmond (matthew-mcredmond) · 2024-12-18T18:07:15.687Z · LW(p) · GW(p)
I only skimmed that category, but if I'm not mistaken, the kinds of systems I describe in the piece are special cases of situations where the boundary that defines agents, between one agent and another, is unclear/pivotal/insightful, etc.