Three Types of Constraints in the Space of Agents
post by Nora_Ammann, Mateusz Bagiński (mateusz-baginski) · 2024-01-15T17:27:27.560Z · LW · GW · 3 comments
Contents: 1. Three kinds of constraints · 2. Constraints from thinghood · 3. Constraints from natural selection · 4. Constraints from reason · 5. Developmental logic · 6. Where does "telos" come into the picture? · 7. What kind of theory of agents are we looking for? · Conclusions, limitations, and further directions
[Epistemic status: a new perspective on an old thing that may or may not turn out to be useful.]
TL;DR: What sorts of forces and/or constraints shape and structure the space of possible agents? What sort of agents are possible? What sort of agents are likely? Why do we observe this distribution of agents rather than a different one? In response to these questions, we explore three tentative categories of constraints that shape the space of agents - constraints coming from "thinghood", natural selection, and reason (sections 2, 3, 4). We then turn to more big-picture matters, such as the developmental logic of real-world agents (section 5), and the place of "values" in the framework (section 6). The closing section discusses what kind of theory of constraints on agents we are even looking for.
Imagine the space of all possible agents [LW · GW]. Each point in the space represents a type of agent characterized by a particular combination of properties. Regions of this space vary in how densely populated they are. Those that correspond to the types of agents we're very familiar with, like humans and non-human animals, are populated quite densely. Some other types of agents occur more rarely and seem to be less central examples of agency/agents (at least relative to what we're used to). Examples of these include eusocial hives, xenobots, or (increasingly) deep learning-based AIs. But some regions of this space are more like deserts. They represent classes of agents that are even more rare, atypical, or (as of yet) non-existent. This may be because their configuration is maladaptive (putting them under negative selection pressure) or because their instantiation requires circumstances that have not yet materialized (e.g., artificial superintelligence).
The distribution of agents we are familiar with (experimentally or conceptually) is not necessarily a representative sample of all possible agents. Instead, it is downstream from the many contingent factors that have shaped life on Earth.[1] Similarly, while most possible agents are dysfunctional and/or alien, it's not the case that agent-generating processes take random points from this space and instantiate agents represented by them.[2] For any agent-generating process instantiated in some environment, relevant factors (such as convergent pressures [LW · GW] and contingent moments [LW · GW]) concentrate the probability mass in some region of the space, making everything else extremely unlikely.
This perspective raises a cluster of questions (the following list is certainly not exhaustive):
- What sorts of forces and/or constraints shape and structure the space of possible agents?
- What sort of agents are possible? What sort of agents are likely? What does it depend on? Why do we observe this distribution of agents rather than a different one?
- To what extent is the space shaped by Earth-specific contingencies and to what extent is it shaped by convergent pressures?
One "angle of attack" to explain the structure of the space of possible agents is to think about fundamental constraints operating within it. By "constraints" we roughly mean factors that make certain agent designs impossible or extremely unlikely/unstable/non-viable.
In this post, we start with a kind of agent we're very familiar with, i.e., biological agents, and try to gain some traction on identifying the kinds of constraints operating on the space of all possible agents. Although biological agents (as we know them) may occupy a small subspace of all possible agents, we make a tentative assumption that this subspace has enough structure to teach us something non-obvious and important about more general constraints.
We put forward three tentative categories of constraints, which we describe as constraints coming from "thinghood", natural selection, and reason. Section 1 introduces them in an expository way, by deriving them from observing the biological agents known to us while trying to answer the question "Why are they as they are?". Sections 2 through 4 elaborate on each of the three kinds of constraints. Then we turn to more big-picture matters, such as the developmental logic of real-world agents (section 5), and the place of "values" in the framework (section 6). The closing section discusses what kind of theory of constraints on agents we are even looking for.
Throughout this post when we use the word "agent", we mean roughly entities that behave as if they want the world to be a certain way and act with some level of coherence/competence to make the world that way (in expectation, in some range of environments/contexts). Here, "a certain way" may include the world taking a particular trajectory (rather than some stable end state). Moreover, as natural selection is one central example of an agent-generating process and a lineage of agents (under the definition just given) can also be considered an agent, it's good to keep in mind that whenever we write "agent" in the context of natural selection, the reader may want to substitute it with "agent or natural-selection-lineage of agents". By expanding the boundaries of the concept in this way, we sacrifice some precision (and conceptual fluency, as this is not what we are accustomed to thinking of as agents) but become able to notice differences cutting across current categories. The current goal is not to land on clear and well-fleshed-out concepts but to register and classify patterns that might guide our way toward them.
1. Three kinds of constraints
Darwinian evolution explains a large portion of both differences and similarities we observe among biological agents, ranging from genes, through cells and organisms, to symbiotic networks, hives, and societies. E. coli and humans are composed of the same basic building blocks (nucleobases, amino acids, lipids) and share many fundamental organizational principles, such as the central "dogma" of molecular biology, ATP as the primary currency of cellular energy, and most of the genetic code. This is not surprising in light of them originating from the same Last Universal Common Ancestor that (most likely) shared these properties. Since much of the basic molecular machinery of life is fragile and locally well-optimized, natural selection favors preserving it more or less as it is.[3] Moreover, agents are selected based on "how well they perform" in their particular environments (where "performing well" means different things in different settings [LW · GW]), which predictably modifies the lineage of agents over time. Many "random" factors are also involved, like genetic drift, founder effects, gene flow, or neutral mutations. On the one hand, they make the selection process more noisy and chaotic.[4] On the other hand, they are an important source of novelty, opening up new possible avenues for evolution.
Thus, natural selection's adjacent possibilities are constrained by the type of agent it has to work with. That agent itself is an effect of natural selection operating in the past adjusting the (lineage of) agents to their environment(s) while being subject to analogous but different constraints and "randomness" mentioned above. Moreover, the environment itself is not static. It is being changed by (or due to) the agents acting and evolving within it. This, in turn, alters the selective pressures that produced the agents in the first place.[5]
There is no a priori reason for the principles of natural selection to apply only to lineages of biological agents, but not those of other agent-y systems, like cultures, civilizations, technologies, etc. (cf. universal Darwinism). This does not mean that biological natural selection and non-biological natural selection-like processes do not differ in some important ways (see e.g., the key features of Darwinian populations discussed in section 3). The broad notion of natural selection we use here is characterized by history-dependence or sequential composability [LW(p) · GW(p)] of constraints of agent-generating processes. Constraints at time t0 constrain the possible states at time t1, including the constraints at time t1, and so on.[6]
However, it doesn't seem like natural selection provides the most satisfactory explanations of everything about the shapes of agents. Consider general, cross-domain reasoning (e.g., the following pattern: {consider alternative (paths of) actions, predict their consequences, evaluate them, and take the one deemed most choice-worthy}). Which actions, belief updates, or general algorithms (to allow for things like updatelessness) are "proper" (e.g., allowed by some reasonable normative account of rational behavior) imposes important constraints on at least some agents with sufficient cognitive abilities. We will call these "constraints from reason."
Importantly, there is no clear dividing line between reason and more "primitive" forms of learning, inference, and action choice. It's not even clear what features we should associate with the former but not the latter. We lay out some possibilities later in the post.
Natural selection loses its explanatory generality not just on the leaves of the phylogenetic tree but also at its roots. Whatever process generated the first Darwinian population[7] (i.e., a collection of things capable of evolution by natural selection),[8] it was (by definition) not natural selection.[9] At least to a first approximation, for natural selection to occur,[10] we need a population of stable things to select from.[11] Another perspective on the same problem is that for something to count as an agent, it must first be a "thing" persisting over time. We can't evaluate whether a thing taking action has some degree of coherence over time if it's not possible to recognize/(re)identify a thing at time t0 and a thing at a later time t1 as "the same thing". What criteria enable this kind of recognition/(re)identification? This thing that we will call "thinghood" is the third source of constraints acting on the space of agents.
These, then, are the three kinds of constraints in the space of agents: thinghood, natural selection, and reason. The following three sections expand on each of these types of constraints and the dependencies between them.
2. Constraints from thinghood
We have started to motivate the notion of thinghood earlier by posing the puzzle that, for natural selection to first get off the ground, there had to be some sufficiently coherent entity (which we will refer to as a "thing") to be selected over. From here we can ask: if all agents are a sort of thing, what is it that we mean by "thing", and what constraints is an agent subject to in virtue of being a thing?
As all of this is pretty abstract, let's look at what an answer to the puzzle of thinghood may look like. Even if it's not the correct answer, its shape will further clarify the thing we're trying to understand/explain.
The example comes from the Free Energy Principle. FEP, in terms of its key epistemic/explanatory ambitions, aims to allow us to mathematically i) specify a "partition [of] a system[12] of interest into "things" that can be distinguished from other things", and ii) "model the time evolution of things [..] within a system that are coupled to, but distinct from, other such [things]". (citing from here)
"Coupling" between two "things" means that they carry information about each other. More specifically, FEP is characterized by sparse coupling, i.e. the idea "that "things" can be defined in terms of the absence of direct influence between subsets of a system". Sparseness is typically formalized with the notion of Markov blankets representing the dependence structure of different parts of the systems – Markov (blanket) states delineate internal from external states, such that "internal states are only influenced by blanket states and other internal states". As Maxwell puts it, "The FEP is based on the idea that sparseness is key to thingness. The main idea is that thingness is defined in terms of what is not connected to what."
On one hand, agents would ideally like to isolate themselves from the environment so that they have full control over themselves, especially over their continued existence and (physical) integrity. On the other hand, agents need to be coupled to the environment in some ways, e.g. need to be able to absorb new resources ("food") to keep themselves going. Once full isolation from the environment is off the table, you further rely on information about the environment in order to navigate it effectively. As such, "thinghood" can be understood as whatever results from trying to square these two existential pulls.
In a sense, "being a thing" can be used as a basis for inference. In other words, thinghood is the "first" place where the system's hypothesis space for interpreting its sensory data starts to be constrained in some ways. At this point, this constraint is pretty minimal, but it is the fundamental/basal kind of constraint on which more elaborate constraints can build up.[13] [14]
3. Constraints from natural selection
Here come the familiar forces of natural selection/Darwinian evolution that have shaped (and are still shaping) life on Earth (and presumably other urable places in the cosmos). Given a population of individuals capable of reproduction such that (1) there is sufficient variability in traits, (2) some of these traits influence their differential survival and reproduction, and (3) the traits are sufficiently heritable across generations, we will observe the familiar dynamics of Darwinian evolution.
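As a minimal sketch of how little machinery conditions (1)-(3) require, here is a toy simulation (all parameters illustrative): a population with one heritable real-valued trait, noisy inheritance supplying variation, and trait-dependent reproduction supplying selection.

```python
import random

POP_SIZE = 200
MUTATION_SD = 0.05  # noisy inheritance: (1) variation

def fitness(trait):
    # (2) the trait influences differential reproduction
    return max(0.0, 1.0 + trait)

def next_generation(population):
    weights = [fitness(t) for t in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    # (3) heritability: offspring trait = parent trait + small noise
    return [p + random.gauss(0.0, MUTATION_SD) for p in parents]

population = [random.gauss(0.0, 0.1) for _ in range(POP_SIZE)]
for _ in range(100):
    population = next_generation(population)

# The mean trait drifts upward over generations: Darwinian dynamics.
print(sum(population) / POP_SIZE)
```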
The Darwinian character of a population (i.e. whether or not an individual/population is subject to the pressures of Darwinian evolution) is not a binary matter. In Darwinian Populations and Natural Selection (DPNS), Peter Godfrey-Smith introduces a framework that puts populations along five dimensions,[15] which are (1) fidelity of heredity, (2) dependence of differential survival and reproduction on the differences in intrinsic characteristics of the organism rather than "random" environmental factors[16], (3) continuity/smoothness of the fitness landscape, (4) abundance of variation, and (5) competitive interaction[17] with respect to reproduction.
Here's a figure from DPNS representing the placement of some populations along the first three of those five dimensions.
Paradigmatic Darwinian populations are the ones that occupy a sort of "Goldilocks zone" in this five-dimensional space,[18] a rough configuration of properties that enables the most central cases of Darwinian evolution. Paradigmatic cases of Darwinian evolution occur if the fidelity of heredity is high enough to enable faithful transmission of fitness-relevant traits between generations but not so high as to completely stall evolution (H); or if the fitness landscape is smooth enough so that mutations often induce only minor changes, facilitating the introduction of evolutionary novelties (C). Finally, differential survival and reproductive success are due to differences intrinsic to the individuals, rather than "random" environmental factors (S).
The further a population is from this region (from the point representing an "idealized Darwinian population"), the less paradigmatic it is. Although there is no strict (multidimensional) boundary between Darwinian and non-Darwinian populations, we can meaningfully talk about more or less marginal cases, i.e., those that don't fully satisfy the criteria but come close. They are borderline examples of Darwinian dynamics. Examples include cultural "evolution" (including language), cancer, and memetics.
While the last decades witnessed substantial progress in understanding the high-level theory of evolution, there are still many important confusions to resolve. Examples include meta-evolution, evolutionary capacitance, or even the role of agency itself in shaping the trajectory of evolution.
4. Constraints from reason
Some environments favor agents capable of discovering more complex patterns latent in those environments because such patterns are often more useful for guiding adaptive action than simpler or more "superficial" patterns (e.g., simple stimulus-response arcs). It doesn't mean that such an agent will exploit this sort of higher-level cognition when faced with any problem (as that would be a waste of cognitive [and other] resources). Rather, it's that they can enter the space of reason when they infer that doing so is a better strategy than relying on simple heuristics. If an environment contains a sufficiently powerful agent-generating process,[19] we may expect such computational agents to arise.
The constraints an agent subjects itself to when it enters the space of reason are of a different kind than the ones involved in natural selection since they are not determined by contingent truths of the agent and its environment. The truths of reason hold regardless of what's going on in physical reality. Physical reality does encode some small regions of the space of reason.[20] (For example, how numbers of discrete things behave when we add, subtract, et cetera, encodes arithmetic on integers. Symmetries encode groups. Objects that can be morphed but not torn, pierced, or "have their holes glued up" encode the most basic topology.) That's what makes it possible for (sufficiently capable) agents to decode them (via abstraction) and use those regions as entry points. Having done so, they can now begin to traverse this space. Importantly, since computational agents may differ in their abilities to decode particular material projections of the space of reason, they may also differ in their ability to access it from respective entry points. Their cognitive differences will also influence their ability and interests in exploring this space.
Constraints from reason are a primary interest of fields like decision theory, economics, logic, formal epistemology, Bayesian epistemology, etc. For example, Homo economicus can be viewed as a projection of human-like agency onto the space of reason, accentuating constraints from reason and marginalizing constraints from thinghood and natural selection.[21] Another example might be how the rationale for updateless decision theories comes, not from learning from empirical evidence/learning from history, but from the fact that a reasoner can come to see that allowing its strategy to be updated can make it exploitable in the way illustrated by e.g. Parfit's Hitchhiker or Newcomb's problem.
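To make the updatelessness example concrete, here is a toy encoding of Parfit's Hitchhiker (a sketch with illustrative utilities and an assumed perfect predictor, not a general decision-theory implementation): a policy that would refuse to pay once safely in town is foreseen by the driver and never gets rescued, while a policy fixed before the rescue does strictly better.

```python
RESCUE_VALUE = 1000  # illustrative utility of being rescued from the desert
PAYMENT_COST = 100   # illustrative cost of paying the driver in town

def outcome(policy):
    """Evaluate a policy against a perfect predictor."""
    will_pay = policy("in_town")  # the driver foresees this choice
    rescued = will_pay            # ...and rescues iff a payment is predicted
    if not rescued:
        return 0
    return RESCUE_VALUE - PAYMENT_COST

# Updateful reasoning: once rescued, paying looks like a pure loss.
updateful = lambda situation: False
# Updateless reasoning: evaluate the whole policy from before the rescue.
updateless = lambda situation: True

print(outcome(updateful))   # 0   (never rescued)
print(outcome(updateless))  # 900 (rescued, then pays)
```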
One possible objection to this framing is that these are not "constraints from reason" but rather convergent types of computation selected by particular evolutionary pressures. The way we think about it is the following. All possible types of computation exist a priori in the space of reason and can be instantiated on a physical substrate. Contingent evolutionary pressures select for those that are present in the adjacent possible and more useful in a given evolutionary context. What is useful in a given context determines which constraints in the space of reason are relevant for constraining the possible "choices" of computation.
For a specific example, in the first chapter of Probability, the Logic of Science [LW · GW], E.T. Jaynes starts from three desiderata for a measure representing degrees of plausibility. Namely, degrees of plausibility should be represented by real numbers, have qualitative correspondence with common sense, and satisfy some criteria of consistency. Jaynes then proceeds to show that there is essentially one way to construct such a measure and that it corresponds to classical probability based on the Kolmogorov axioms.
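For reference, the endpoint of that argument (a compact restatement of the Cox-Jaynes result, not Jaynes's derivation) can be written as follows: any plausibility measure satisfying the desiderata is, up to a monotone rescaling, probability obeying the product and sum rules.

```latex
% Cox-Jaynes result (restated): plausibilities consistent with the three
% desiderata must obey, after a suitable monotone rescaling p,
\begin{align}
  p(A \land B \mid C) &= p(A \mid B \land C)\, p(B \mid C) && \text{(product rule)}\\
  p(A \mid C) + p(\lnot A \mid C) &= 1 && \text{(sum rule)}
\end{align}
```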
5. Developmental logic
From the perspective presented so far, the three kinds of constraints come in a natural progression. First, there must be a population of stable entities (thinghood) in order for natural selection to have something to select from. Constraints from reason can't act before the agent crosses some (soft/fuzzy) threshold of cognitive sophistication. The only way this can happen (without external intervention) is via natural selection.
So it seems like the "former" kinds of constraints are foundations or prerequisites for the "latter" ones. However, this is the case only for agents arising de novo, from non-agent-y matter (as opposed to, e.g., agents being designed "top-down", directed panspermia, or any other way intelligent entities might intervene to steer the agent-generating process). If you are designing an agent from scratch, there is no privileged hierarchy or directional dependence between them. It would be a mistake, though, to say that they are independent (e.g., sufficiently advanced cognition requires a higher level of thinghood maintenance than a bacterium's); still, in the design case one kind of constraint does not logically build on the foundation provided by another.
Importantly, in de novo agento-genesis, acquiring a new type of constraint does not set the previous ones in stone. They continue influencing each other. Natural selection changes what kind of thing one is, and what kind of thing one is can change the character of Darwinian evolution. Natural selection needs to select for reason in order for reason to manifest, but reason-capable creatures can use their reason to alter themselves or the environment in such a way that they gain some agency over evolution. And even more generally, natural selection can be seen, on some accounts, as a by-product of agentic entities pursuing their idiosyncratic goals (e.g., Godfrey-Smith, 2017; Walsh, 2015).
One potentially important difference between these different sources of constraints is that constraints from thinghood and reason seem to be primarily synchronic (i.e., they constrain the agent within a given time slice/snapshot), whereas natural selection is primarily diachronic (i.e., it constrains how an agent (or a lineage of agents) evolves/unfolds across time). Nevertheless, a synchronic constraint can reshape the agent in a way that influences downstream development or evolution, thus gaining a diachronic aspect. For example, a technological invention can profoundly influence the course of history. In biological evolution, selection for organisms capable of learning some niche-specific behavior can eventually lead to (an inductive bias for learning) that behavior being increasingly "baked" into the genome (cf. the Baldwin effect and genetic assimilation).
6. Where does "telos" come into the picture?
Intuitively, it would seem that telos (or: purpose, values, goal-directedness) is an aspect of the constraints from reason, but one could make a case that it also operates (in some form) in natural selection. For any given adaptive property of an agent, there is a "reason" for why it was adopted/evolved or why it makes sense for this kind of agent to have that property for adaptive purposes. These reasons need not be "encoded" or "represented" anywhere in the agent or its environment. Daniel Dennett calls them "free-floating rationales". The mechanism of natural selection can discover and incorporate those reasons. The difference is that in the case of natural selection, the selection process occurs in the external/material realm, whereas a reason-capable agent does something similar to selection in the cognitive/computational realm. In a sense, natural selection already gives us a weak notion of telos/purpose whereas rational selection gives us a stronger version; the one we more typically mean to point at with the terms telos/purpose. (For a more detailed discussion of this idea, see: On the nature of purpose [LW · GW])
The constraints from thinghood also give some non-trivial purposefulness. By definition, a thing maintains itself/its Markov blanket. This is also sometimes referred to as precarity. This naturally leads to instrumental goals of modeling the environment and monitoring/modifying it to increase the probability of the thing persisting (including making the environment more predictable).
7. What kind of theory of agents are we looking for?
We have sketched out some aspects of what might lead (or aspires to lead) to a more general theory of agents and agent-generating processes. What do we think such a general theory of agents could look like? Why do we think it is possible to identify some set of constraints such that they meaningfully improve our understanding of the space of possible agents?
One reason is that we already have some cases where studying empirical regularities led to this kind of understanding. The discovery of allometric scaling laws in biology, which were later extended to a broader class of phenomena, is a great example. The TLDR (before diving into more detail) is, roughly, that this line of work led to an empirically validated model of how different properties of a particular kind of agent relate to each other. In particular, in the case of prokaryotes, it showed us what cell sizes are possible and/or likely and why. We would like to have something similar for agents in general, using relevant properties of agents. The three kinds of constraints discussed above might provide a promising conceptual starting point for further work in this direction.
What exactly are those biological scaling laws? We might expect that bigger animals would have higher energetic requirements and thus need to eat more. So the observation that basal metabolic rate increases with body mass is not surprising. What is somewhat surprising is that this relationship is sublinear. Bigger animals tend to consume more energy in absolute terms but less relative to their mass. The relationship between the basal metabolic rate and mass is well approximated by BMR ∝ Mass^(¾). For example, if you take a mouse and some other mammal weighing N times the mass of that mouse, you can expect its metabolic rate to be the mouse's metabolic rate times N^(¾). This power-law relationship means that if you plot mass against BMR on a log-log scale, the slope of the line of best fit will be about ¾.
As the above plot depicts, this relationship holds within many major eukaryotic groups,[22] although it doesn't hold between groups: trying to estimate the mass-to-BMR scaling exponent for all eukaryotes taken together results in a value slightly below one, i.e., nearly linear scaling (Hatton et al., 2019).
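As a minimal sketch of how such an exponent is recovered from data (synthetic data here; the prefactor and noise level are illustrative, not fitted to real animals), one can generate masses and metabolic rates obeying a noisy ¾ power law and read the exponent off the log-log slope:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic masses spanning five orders of magnitude (in kg),
# with BMR = c * mass^0.75 plus multiplicative log-normal noise.
mass = 10 ** rng.uniform(-2, 3, size=500)
bmr = 3.4 * mass ** 0.75 * np.exp(rng.normal(0.0, 0.1, size=500))

# A power law y = c * x^k is a straight line with slope k on a
# log-log scale, so ordinary least squares recovers the exponent.
slope, intercept = np.polyfit(np.log10(mass), np.log10(bmr), deg=1)
print(f"estimated exponent: {slope:.3f}")  # close to 0.75

# Consequence of sublinearity: a 1000x heavier mammal needs only
# about 1000^0.75 ~ 178x the energy.
print(1000 ** 0.75)
```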
This robust empirical trend was named Kleiber's law after Max Kleiber, who discovered it in the early 1930s. The result surprised Kleiber himself: he did expect to find a sublinear power-law relationship, but with an exponent of ⅔ rather than ¾. Quoting Wikipedia:
… a 2/3 power scaling was largely anticipated based on the "surface law", which states that the basal metabolism of animals differing in size is nearly proportional to their respective body surfaces. This surface law reasoning originated from simple geometrical considerations. As organisms increase in size, their volume (and thus mass) increases at a much faster rate than their surface area. Explanations for 2⁄3-scaling tend to assume that metabolic rates scale to avoid heat exhaustion. Because bodies lose heat passively via their surface but produce heat metabolically throughout their mass, the metabolic rate must scale in such a way as to counteract the square–cube law. Because many physiological processes, like heat loss and nutrient uptake, were believed to be dependent on the surface area of an organism, it was hypothesized that metabolic rate would scale with the 2/3 power of body mass.
Kleiber's law sparked the research that led to discoveries of other kinds of similar scaling relationships, such as the scaling of particular organs with body mass during individual development or of various properties of cities with their population size,[23] as well as attempts to explain them. For example, people tried to explain the ¾ exponent in Kleiber's law in terms of optimized circulation. Quoting Wikipedia again:
West, Brown, and Enquist (hereafter WBE) proposed a general theory for the origin of many allometric scaling laws in biology. According to the WBE theory, 3⁄4-scaling arises because of efficiency in nutrient distribution and transport throughout an organism. In most organisms, metabolism is supported by a circulatory system featuring branching tubules (i.e., plant vascular systems, insect tracheae, or the human cardiovascular system). WBE claim that (1) metabolism should scale proportionally to nutrient flow (or, equivalently, total fluid flow) in this circulatory system and (2) in order to minimize the energy dissipated in transport, the volume of fluid used to transport nutrients (i.e., blood volume) is a fixed fraction of body mass.
WBE re-derived the ¾ exponent from three simple assumptions about animal nutrient transport systems and a bit of clever math (for the entire argument, see section 3.6 of Thurner et al., 2022). The matter doesn't appear settled and some people raise objections to their argument. Still, whether we accept WBE's particular explanation or not, the regularities that caught their attention certainly suggest some hidden order there to be uncovered. My favorite example of a "success story" of this line of research is this paper that measured the scaling laws of the volume taken up by particular components of a prokaryotic cell as the total cell volume changes.
As you can see in the plot, as the cell grows in volume, ribosomes take up an increasingly large fraction of it, so that at ~10^-15 m³, a bacterium is predicted to be composed entirely of ribosomes (the authors call this the "ribosome catastrophe"). At the other end of the plot, a bacterium of ~10^-21 m³ is predicted to be all DNA.
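The logic behind these thresholds can be sketched in a few lines: a cell component whose volume scales superlinearly with cell volume must eventually fill the whole cell, and one that scales sublinearly must fill it as the cell shrinks. Note that the exponents and prefactors below are illustrative placeholders pinned to the rough ~10^-15 and ~10^-21 m³ figures above, not the values measured in the cited paper.

```python
ALPHA = 1.3  # hypothetical superlinear exponent for total ribosome volume
BETA = 0.6   # hypothetical sublinear exponent for total DNA volume

# Prefactors pinned so the fractions hit 1 at the post's rough thresholds.
A = (1e-15) ** (1 - ALPHA)  # ribosome fraction = 1 at V = 1e-15 m^3
B = (1e-21) ** (1 - BETA)   # DNA fraction = 1 at V = 1e-21 m^3

def ribosome_fraction(V):
    """Fraction of cell volume occupied by ribosomes; grows with V."""
    return A * V ** (ALPHA - 1)

def dna_fraction(V):
    """Fraction of cell volume occupied by DNA; grows as V shrinks."""
    return B * V ** (BETA - 1)

# A mid-sized cell has room to spare for everything else...
print(ribosome_fraction(1e-18), dna_fraction(1e-18))  # ~0.13, ~0.06

# ...but solving fraction = 1 for V gives hard size bounds.
V_max = A ** (-1 / (ALPHA - 1))  # ribosome catastrophe
V_min = B ** (1 / (1 - BETA))    # all-DNA limit
print(f"predicted viable range: {V_min:.0e} to {V_max:.0e} m^3")
```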
These two thresholds give predicted upper and lower bounds for possible bacterium sizes. In practice, they are misaligned with the observed span of bacterium sizes, but not by much, and the bacteria that fall outside of the predicted ~10^-21 m³ to ~10^-15 m³ range do so by stretching the rules. The smallest bacteria cut corners on their genome size. They also reduce the volume taken up by their cell envelope, making themselves increasingly spherical and thinning the membrane. At the other extreme, the largest bacteria observed so far, from the genus Thiomargarita, have ~98% of their cell volume taken up by vacuoles used for nutrient storage. As these bacteria are not very metabolically active, they need to continually produce far fewer proteins than we would expect for a typical bacterium of their size.
In summary, the limits emerge from some very common properties of bacterial cells. In other words, these properties can be understood as constraints that act on organisms' (possible and likely) sizes and metabolic rates. It stands to reason that the ubiquity of these properties results from their being close to optimality for most bacteria in most contexts. So if you want to break those limits, you need to diverge from general optimality. Thus, we should expect very small/large bacteria to be present only in niches where being very small/large (or unusual in some other way) with respect to the cell properties that determine the size bounds is sufficiently advantageous.
Conclusions, limitations, and further directions
We sketched out a way of thinking about why agents are as they are, and what forms of agents are possible and (un)likely: in terms of constraints from thinghood, natural selection, and reason.
There might be other constraints we missed or better ways to conceptualize these constraints. There might be important forces other than constraints shaping the space of agents, or the constraints-based frame might turn out to not be the most productive one at all. All of this, along with, hopefully, some promising formalisms, is left as a future direction.
Finally, this entire way of thinking may turn out to be largely misguided or fruitless. More generally, co-opting our spatial thinking intuitions for high-dimensional spaces of properties likely has major limitations we should be wary of. Nevertheless, even if it turns out to be importantly wrong, its specific wrongness may teach us something important.
(Mateusz did most of the writing. This work was partially done during Epistea Residency/PragueFall2023. Thanks to Clem von Stengel and Jan Kulveit for feedback on this post.)
- ^
Our ability (and bias) to perceive agency may itself be somewhat influenced by our environment of evolutionary adaptedness.
- ^
The point seems to have some similarity to no-free-lunch theorems often being irrelevant.
- ^
It's also the case that they inhabit a similar environment (living on Earth, with water and an atmosphere of particular pressure and composition), so we could perhaps expect some degree of convergent evolution even in the absence of a common origin.
- ^
"Chaotic" in the sense of chaos theory, i.e., an extreme sensitivity of the trajectory (and consequently, the final state) to minute differences in the initial conditions.
- ^
One way of conceptualizing and modeling this dynamic co-dependence between the organisms and their environments is the dual landscape framework (link to the paper).
- ^
This does not mean that the space of possibilities is monotonically shrinking.
- ^
The concept of "Darwinian population" was introduced by Peter Godfrey-Smith in the book Darwinian Populations and Natural Selection.
- ^
The boundary between life and non-life is most likely fuzzy and not very relevant here, so we're setting aside the problem of defining/delineating "life" and focusing on entities/populations capable of Darwinian evolution.
- ^
Setting aside the possibility of Earth life arising by intentional panspermia or some other intervention of intelligent extraterrestrial life, which doesn't really solve the problem, but only kicks the can down the road.
- ^
The extent to which a population of changing/adaptive things is Darwinian (or capable of undergoing Darwinian evolution) also comes in degrees, and perhaps along several somewhat independent dimensions. See Peter Godfrey-Smith's Darwinian Populations and Natural Selection for an extensive discussion.
- ^
The appearance of the first replicator, i.e., a thing capable of making more (copies) of itself, is not the beginning of natural selection (although such a system has already incorporated some constraints from thinghood). Only once this replicator has made enough copies of itself, and some (equivalents of) "mutations" have occurred to produce sufficient fitness-relevant variation, is there enough variance for selection pressures to act on.
- ^
Note on terminology: in FEP, the term "system" is typically used to refer to the whole random dynamical system, while the term "thing" is used to refer to subsets of that system that are delineated via sparse coupling by Markov blankets.
- ^
Perhaps anthropic reasoning [? · GW] can be viewed as a more elaborate kind of inference from "being a (particular kind of) thing (in a particular kind of universe)" but in conjunction with constraints from reason.
- ^
Boundaries/membranes [? · GW] are plausibly another attempt at formalizing (something adjacent to) the notion of thinghood. See also The Thingness of Things [LW · GW] for a discussion of a somewhat similar but more general concept.
- ^
Godfrey-Smith gives these five dimensions as a provisional framework. He accepts that there may be better ways of thinking about properties of evolutionary populations, e.g., there may be more important properties or some of the ones he proposed may need to be refined. See Chapter 3 of the book for details.
- ^
Paraphrasing an example from DPNS: A may reproduce better than B because A is a better hunter but it may also be the case that B was "randomly" struck by lightning before reaching puberty.
- ^
Roughly, the negative/zero/positive sum character of reproductive success in a given population. For example, sexual reproduction is typically more positive sum because it increases correlation between individuals' reproductive successes.
- ^
We're speaking about a Goldilocks zone because for a population to undergo Darwinian evolution, it needs enough of each property but not too much. For example, too much heredity stalls evolution, whereas too little causes error catastrophe. Also, excessive variation makes mating impossible whereas too little leads to inbreeding.
- ^
An agent-generating process can be an agent generally capable of learning, natural selection, or an intelligent designer. It also includes the material the process has to work with, such as the designer's knowledge and available materials, or the constraints imposed on natural selection by the incumbent agent.
- ^
We can also say that some small regions of the space of reason are projected on physical reality.
- ^
The most salient marginalized constraint from thinghood is the disregard for whether an idealized Homo economicus is even possible (in our universe). More generally, it does not take into account bounded rationality. An example of marginalized constraints from natural selection is the standard disregard for semi-contingent "human values" with ontogenetic (e.g., a person's contingent developmental trajectory) and phylogenetic (e.g., evolutionary pressures for parental care, prosocial instincts, contextually activated aggression) components.
- ^
As far as we know, the ¾ mass-to-BMR scaling holds within all eukaryotic groups tested so far (Hatton et al., 2019). Interestingly, this is not the case for prokaryotes, as their basal metabolic rate scales quadratically with mass (Kempes et al., 2016).
- ^
Scale by Geoffrey West is a great lightweight introduction to the topic of scaling laws. For a somewhat more mathematically thorough overview, see the chapter on scaling in the textbook Introduction to the Theory of Complex Systems by Thurner et al.
3 comments
comment by Daniel Murfet (dmurfet) · 2024-01-16T01:56:30.020Z · LW(p) · GW(p)
In mathematical terms, what separates agents that could arise from natural selection from a generic agent?
To ask a more concrete question, suppose we consider the framework of DeepMind's Population Based Training (PBT), chosen just because I happen to be familiar with it (it's old at this point, not sure what the current thing is in that direction). This method will tend to produce a certain distribution over parametrised agents, different from the distribution you might get by training a single agent in traditional deep RL style. What are the qualitative differences in these inductive biases?
↑ comment by RogerDearnaley (roger-d-1) · 2024-01-18T09:50:31.074Z · LW(p) · GW(p)
This is an entire field of research: evolutionary psychology [? · GW]. (Translating that into mathematical terms may be challenging, but I'm unclear why you feel it's necessary?)
comment by RogerDearnaley (roger-d-1) · 2024-01-18T09:48:46.737Z · LW(p) · GW(p)
I think there is a fairly obvious progression on from this discussion. There are two ways that a type of agent can come into existence:
- It can, as you discuss, evolve. In which case, as an evolved biological organism, it will of course use its agenticness and any reasoning abilities and sapience it has to execute adaptations intended by evolution to increase its evolutionary fitness (in the environment it evolved in). So, to the extent that evolution has done its job correctly (which is likely less than 100%), such an agent has its own purpose: look after #1, or at least, its genes (such as in its descendants). So evolutionary psychology [? · GW] applies.
- It can be created, by another agent (which must itself have been created by something evolved or created, and if you follow the chain of creations back to its origin, it has to start with an evolved agent). No agent which has its own goals, and is in its right mind, is going to intentionally create something that has different goals and is powerful enough to actually enforce them. So, to the extent that the creator of a created (type #2) agent got the process of creating it right, it will also care about its creator's interests (or, if its capacity is significantly limited and its power isn't as great as its creator's, some subset of those interests important to its purpose). So, we have a chain of created (type #2) agents leading back to an evolved agent #1, and, to the extent that no mistakes were made in the creation and value copying process, these should all care about and be looking out for #1, the evolved agent, the founder of the line, helping it execute its adaptations, which, if evolution had been able to do its job perfectly, would be enhancing its evolutionary fitness. So again, evolutionary psychology [? · GW] applies, through some number of layers of engineering design.
So when you encounter agents, there are two sorts: evolved biological agents, and their creations. If they got this process right, the creations will be helpful tools looking after the evolved biological agents' interests. If they got it wrong, then you might encounter something to which the orthogonality thesis applies (such as a paperclip maximizer or some other fairly arbitrary goal), but more likely, you'll encounter a flawed attempt to create a helpful tool that somehow went wrong and overpowered its creator (or at least, hasn't yet been fixed), plus of course possibly its created tools and assistants.
So while the orthogonality thesis is true, it's not very useful, and evolutionary psychology is a much more useful guide, along with some sort of theory of what sorts of mistakes cultures creating their first ASI most often make, a subject on which we as yet have no evidence.