Autonomy as taking responsibility for reference maintenance
post by Ramana Kumar (ramana-kumar) · 2022-08-17T12:50:30.218Z · LW · GW · 3 comments
I think semantics – specifically, maintaining reference relationships – is a core component of intelligent behaviour. Consequently, I think a better understanding of semantics would enable a better understanding of what machine intelligence that is “trying to do the right thing” ought to look like and how to build it. Unfortunately I’ve not managed to make these ideas particularly clear (either to myself or others). In the spirit of palimpsest-shaped research [AF · GW] I’m sharing a draft in the hope of provoking useful thoughts.
Summary
- Particular events can have meaningfulness or significance, and can carry multiple meanings at once. For example, the motion of a dial in a meter might mean that there is electricity flowing through a connected wire, while simultaneously meaning that an alarm condition has been triggered.
- The “meanings” (or semantics) that can be ascribed to events depend on regularities (stable mechanisms) in the surrounding context. Large-scale regularities are required to sustain large-scale semantics (something carrying meaning about another thing that is far away in time or space).
- Meanings are generally more robust when their dependencies on the context are fewer and/or more local.
- However, large-scale semantics can be made robust by managing the contextual dependencies – for example, by using mechanisms for error detection and correction (see the sketch after this list). There is compensatory work that can be done to sustain a regularity, so that something local continues to mean a particular thing about something distal.
- To be autonomous is to be robust in this sense. Systems that more robustly traffic in large-scale meanings are more autonomous. Thus, an autonomous system can be understood as one that includes all the machinery required to hold stable, in a changing world, the semantic or referential relationships required for its continued operation.
- Creating regularities that span larger distances is a way of creating new kinds of semantics, and autonomous systems do this as their conceptual spaces get larger.
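To make the error detection and correction point concrete, here is a minimal sketch (in Python; the scenario and all names are invented for illustration, not taken from anything above). A sender spends extra local work – redundant copies and a majority vote – so that what arrives at the receiver keeps meaning what the sender's message said, even though the channel in between occasionally corrupts bits:

```python
import random

def encode(bits):
    """Repetition code: transmit each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def noisy_channel(bits, flip_prob=0.05):
    """A channel that occasionally flips bits -- the contextual
    dependency that would otherwise disrupt the distal meaning."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote over each triple: the compensatory work that
    keeps the received signal referring to the original message."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = decode(noisy_channel(encode(message)))
print(message == received)  # usually True, despite the noise
```

The redundancy is exactly the compensatory work described above: it manages a contextual dependency (the channel's unreliability) so that a local configuration at the receiver can continue to carry a distal meaning.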
Local and distal meanings
The battery indicator on my laptop reads “84%” when I hover my mouse cursor over it. What does this reading mean? There’s some function, I suppose, in some program running as part of my desktop environment that is designed to produce the text for the popup window. The reading means this function received the number 84 as input.
OK, but that’s not all it means. There is, I suppose, a smart battery system in my laptop, and it ran a calculation that produced 84% as an estimate of the battery’s state of charge. More concretely: I imagine there are several quantities in the memory of the smart battery system, such as the last few readings from an ammeter on the battery, the times those readings were made, the reference point recorded when the battery was last fully charged, etc. The reading I observed means all these memory locations contained values such that the calculation the smart battery system runs produced the value 84. This is a “more distal” meaning for the reading than “the text rendering function received 84 as input”, because the smart battery system’s calculation happens “before” (or “causally upstream of”) that function’s execution. Most distally, of course, we’d like to say that the reading means that the state of charge of the battery “really is” approximately 84%.
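For concreteness, here is a rough sketch of the kind of calculation I imagine the smart battery system running – so-called coulomb counting. The function name, units, and numbers are all my guesses for illustration, not the actual firmware:

```python
def estimate_state_of_charge(readings, full_charge_capacity_mah,
                             charge_at_last_full_mah):
    """Hypothetical coulomb-counting estimate.

    readings: list of (timestamp_seconds, current_ma) pairs, where
    negative current means discharge.  Integrates current over time
    to track charge relative to the last known-full reference point.
    """
    charge_mah = charge_at_last_full_mah
    for (t0, i0), (t1, _) in zip(readings, readings[1:]):
        hours = (t1 - t0) / 3600.0
        charge_mah += i0 * hours  # left-endpoint rule, for simplicity
    percent = 100.0 * charge_mah / full_charge_capacity_mah
    return max(0.0, min(100.0, percent))

# e.g. an hour of ~800 mA discharge from a full 5000 mAh battery:
readings = [(0, -800), (1800, -810), (3600, -790)]
print(estimate_state_of_charge(readings, 5000.0, 5000.0))  # ~84%
```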
We have a collection of meanings for the reading (..., it means the LEDs on my screen were emitting light in a pattern recognisable as “84%”, …, it means the text-rendering function received 84 as an input, …, it means the smart battery system calculated the state of charge as 84%, …, it means the battery’s state of charge is 84%), and I have just arranged them on a spectrum from local to distal.
Another example of this is that the leaves on a deciduous tree being orange means that (they emitted light at wavelengths interpreted by my visual system as orange; they are filled with less chlorophyll and more carotenoids than prototypical leaves; there is an excess of cork cambium forming at the bases of the leaves; the ambient air temperature has been lower, and the days shorter, than previously for a while; the latitude where the tree is located is experiencing less insolation; Autumn has begun). All these meanings are valid under normal circumstances, and all the mechanisms can be understood independently, though the later ones in the list involve larger and larger regions of the world in their “normal circumstances”.
Local meanings are more robust by default
The local-to-distal ordering approximately lines up with another ordering that I would call “robustness”. More local meanings are more robust than more distal meanings. What does robustness mean here? Roughly: the more robust a relationship is, the harder it is to disrupt. The relationship between the image I observe and the LEDs in my screen is very tight: there are very few points of intervention by which something could break the relationship and cause the observation to fail to mean what I expect it to mean about the LEDs. But it is much easier to cause the observation to fail to mean what I expect it to mean about the battery.
What does disruption involve? Failure of shielding, excessive noise, unanticipated sources of interference. More generally, the world moving on in ways that make the conditions of normal operation for the sensor fail to hold. For example, many devices on planet Earth implicitly assume a fixed gravitational acceleration, an ambient magnetic field, a working constellation of GPS satellites, a mostly nitrogen-filled atmosphere, etc.
Reference maintenance makes distal meanings more robust
There’s a lot of machinery and work going on to make the abstract quantity (the state of charge) accessible via pixels on the screen. There’s no way to “directly” read it: there will always be a more local thing that the sensor reads “directly”. That the sensor reading also means the more distal thing is due to the intact, working-as-intended arrangement of all the machinery connecting the local to the distal. The work this machinery is doing is reference maintenance. It maintains the reference relationship between, in this case, the sensor reading and the quantity that reading refers to (the state of charge).
The designers have compensated in advance for many sources of drift, miscalibration, and noise, keeping in mind the intended referent: the percentage reading should indicate the “actual state of charge”, at least well enough to be informative about how much time is left before the battery depletes. What actually matters to me, the laptop user, is some (not very precise) predictability around when my laptop will turn off due to lack of power if I don’t charge it.
There can be bugs in any number of these systems, and there can be adversaries spoofing the inputs to any of the subsystems, causing the reading to be incorrect in the sense of not having the intended referent. But note that there is a normative correctness condition here. Whether some configuration is “incorrect” was, in this case, an explicit consideration by the designers of the system; but it is also implied by the compensatory mechanisms in place in the artefact. They can all be understood as working to keep the same referent stable, and that stable referent then defines what it means to be broken or spoofed (failing to keep the intended referent). Explicit intention is not necessary: for example, evolved artefacts don’t have explicit designers, but many still do reference maintenance.
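Here is a hedged sketch of what one such compensatory mechanism might look like in code (the voltage model and thresholds are invented for illustration): cross-check the coulomb-counting estimate against an independent voltage-based estimate, and re-anchor the former when the two diverge. The whole point of this work is to keep the reading’s intended referent stable:

```python
def voltage_based_soc(voltage, v_empty=3.0, v_full=4.2):
    """Crude independent estimate from battery voltage (a linear
    model for illustration -- real discharge curves are not linear)."""
    frac = (voltage - v_empty) / (v_full - v_empty)
    return 100.0 * max(0.0, min(1.0, frac))

def maintain_reference(coulomb_soc, measured_voltage, max_divergence=10.0):
    """If the two estimators disagree badly, the coulomb counter has
    probably drifted (or an input is being spoofed), so re-anchor it.
    The stable referent -- the actual state of charge -- is what makes
    'drifted' and 'spoofed' meaningful here."""
    independent = voltage_based_soc(measured_voltage)
    if abs(coulomb_soc - independent) > max_divergence:
        return independent  # recalibrate toward the cross-check
    return coulomb_soc

print(maintain_reference(84.0, 4.02))  # estimates agree: keep 84.0
print(maintain_reference(84.0, 3.3))   # diverged: re-anchor to 25.0
```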
Autonomous systems must maintain referents
A system that functions “on its own” in the world – that is, a system that is autonomous – needs to orient itself to its circumstances at some scale and take appropriate actions. But action is always mediated only by local mechanisms. When actions string together and become coherent at a larger scale, sustaining meaningful regularities, they do so by maintaining reference relationships between local configurations and the distal things those configurations are about. Thus being autonomous requires reference maintenance.
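A toy model of this claim (entirely illustrative, with made-up parameters): an agent can only sense a distal quantity through a noisy local channel, and must keep spending update-work to hold its internal estimate in correspondence with that quantity as the world drifts. Stop the maintenance and the local state quietly stops being about the distal state:

```python
import random

def run_agent(steps=1000, drift=0.05, noise=0.5, gain=0.2):
    """The world's state drifts; the agent only gets noisy local
    readings.  The update step is its reference maintenance: stop
    doing it (gain=0) and the internal estimate decouples from the
    distal state it is supposed to be about."""
    world, estimate = 50.0, 50.0
    errors = []
    for _ in range(steps):
        world += random.gauss(0.0, drift)           # the world moves on
        reading = world + random.gauss(0.0, noise)  # local sensing only
        estimate += gain * (reading - estimate)     # maintenance work
        errors.append(abs(estimate - world))
    return sum(errors) / len(errors)

print(run_agent())          # small mean error: the referent is tracked
print(run_agent(gain=0.0))  # no maintenance: the estimate goes stale
```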
Referents can be entirely subjective
The work of reference maintenance can create a new subjective world. The things that are the referents of some compensatory activity are in part brought into existence by that activity – more precisely, their boundaries, identity, and individuation criteria are in part determined by the reference maintenance work.
3 comments
comment by PaulK · 2022-08-19T19:27:56.390Z · LW(p) · GW(p)
Nice essay, makes sense to me! Curious how you see this playing into machine intelligence.
One thought is that "help maintain referential stability", or something in that ballpark, might be a good normative target for an AI. Such an AI would help humans think, clarify arguments, recover dropped threads of meaning. (Of course, done naively, this could be very socially disruptive, as many social arrangements depend on the absence of clear flows of meaning.)
comment by MSRayne · 2022-08-18T14:31:03.197Z · LW(p) · GW(p)
This reminds me of optimization at a distance [LW · GW].
↑ comment by Ramana Kumar (ramana-kumar) · 2022-08-22T09:26:37.484Z · LW(p) · GW(p)
I agree it is related! I hope we as a community can triangulate in on whatever is going on between theories of mental representation and theories of optimisation or intelligence.