Rationalists are missing a core piece for agent-like structure (energy vs information overload)

post by tailcalled · 2024-08-17T09:57:19.370Z · LW · GW · 9 comments

Contents

  Radical computationalism is killed by information overload
  "Energy"-orientation solves information overload
  ... kinda
9 comments

The agent-like structure problem [LW · GW] is a question about how agents in the world are structured. I think rationalists generally have an intuition that the answer looks something like the following:

There is a fairly-obvious gap in the above story, in that it lacks any notion of energy (or entropy, temperature, etc.). I think rationalists mostly feel comfortable with that because:

I've come to think of this as "the computationalist worldview" because functional input/output relationships are the thing that is described very well with computations, whereas laws like conservation of energy are extremely arbitrary from a computationalist point of view. (This should be obvious if you've ever tried writing a simulation of physics, as naive implementations often lead to energy exploding.)
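As a minimal sketch of that last point (a toy harmonic oscillator; the specific integrators are just standard textbook ones, and nothing in the argument depends on the details): a naive explicit-Euler simulation steadily pumps energy into the system, while a symplectic variant keeps it bounded.

```python
# Toy illustration: energy blow-up in a naive physics simulation.
# Unit mass, unit spring constant, so E = v^2/2 + x^2/2.

def energy(x, v):
    return 0.5 * v**2 + 0.5 * x**2

def explicit_euler(x, v, dt):
    # Naive update: uses the old state for both position and velocity.
    return x + dt * v, v - dt * x

def semi_implicit_euler(x, v, dt):
    # Symplectic update: velocity first, then position with the new velocity.
    v = v - dt * x
    return x + dt * v, v

x1, v1 = 1.0, 0.0
x2, v2 = 1.0, 0.0
dt = 0.1
for _ in range(10_000):
    x1, v1 = explicit_euler(x1, v1, dt)
    x2, v2 = semi_implicit_euler(x2, v2, dt)

print(energy(x1, v1))  # explodes to ~1e43: this scheme does not conserve energy
print(energy(x2, v2))  # stays near the initial 0.5
```

Nothing in the update rules "knows" about energy; conservation has to be imposed by choosing the right structure, which is the sense in which it is arbitrary from a pure input/output point of view.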

Radical computationalism is killed by information overload

Under the most radical forms of computationalism, the "ideal" prior is something that can range over all conceivable computations. The traditional answer to this is Solomonoff induction [? · GW], but it is not computationally tractable because it has to process all observed information in every conceivable way.
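(For reference, one standard formulation of the Solomonoff prior, glossing over details of the universal prefix machine $U$: it weights each program by its length and sums over every program consistent with the observations,

$$M(x) \;=\; \sum_{p\,:\,U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}.$$

The sum over all programs is exactly the "every conceivable way" part that makes it intractable.)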

Recently, with the success of deep learning, the bitter lesson, the Bayesian interpretations of deep double descent, and all that, I think computationalists have switched to viewing the ideal prior as something like a huge deep neural network, which learns representations of the world and functional relationships that can be used by some sort of decision-making process.

Briefly, the issue with these sorts of models is that they work by trying to capture all the information that is reasonably non-independent of other information (for instance, the information in a picture that is relevant for predicting information in future pictures). From a computationalist point of view, that may seem reasonable since this is the information that the functional relationships are about, but outside of computationalism we end up facing two problems:

To some extent, human-provided priors (e.g. labels) can reduce these problems, but that doesn't seem scalable, and humans sometimes struggle with these problems too. Plus, philosophically, this would kind of abandon radical computationalism.
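For concreteness, the "capture everything non-independent" objective I have in mind is, schematically (not tied to any particular architecture),

$$\max_{f}\; I\big(f(X_{\text{past}})\,;\, X_{\text{future}}\big),$$

i.e. the representation $f$ gets credit for every bit of the observations that helps predict future observations, no matter how physically trivial that bit is.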

"Energy"-orientation solves information overload

I'm not sure to what extent we merely need to focus on literal energy versus also on various metaphorical kinds of energy like "vitality [LW(p) · GW(p)]", but let me set up an example of a case where we can just consider literal energy:

Suppose you have a bunch of physical cubes whose dynamics you want to model. Realistically, you just want the rigid-body dynamics of the cubes. But if your models are supposed to capture information, then they have to model all sorts of weird stuff like scratches to the cubes, complicated lighting scenarios, etc. Arguably, more of the information about (videos of) the cubes may be in these things than in the rigid-body dynamics (which can be described using only a handful of numbers).

The standard approach is to say that the rigid-body dynamics constitute a low-dimensional component that accounts for the biggest chunk of the dynamics. But anecdotally this seems very fiddly and basically self-contradictory (you're trying to simultaneously maximize and minimize information, admittedly in different parts of the model, but still). The real problem is that scratches and lighting and so on are "small" in absolute physical terms, even if they carry a lot of information. E.g. the mass displaced in a scratch is orders of magnitude smaller than the mass of a cube, and the energy in weird light phenomena is smaller than the energy of the cubes (at least if we count mass-energy).

So probably we want a representation that maximizes correlation with the energy of the system, at least more so than we want a representation that maximizes mutual information with observations of the system.
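As a toy numerical sketch of the difference (the dimensions and energy weights below are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend reconstruction errors for two parts of an observation of the cubes:
# a handful of rigid-body numbers vs. many high-entropy "surface detail" numbers.
rigid_body_err = rng.normal(size=6) * 0.1        # pose/velocity of the cubes
surface_detail_err = rng.normal(size=500) * 0.1  # scratches, lighting, ...

# Hypothetical energy scales: the cubes' bulk dynamics dwarf the surface detail.
E_RIGID, E_DETAIL = 1.0, 1e-6

# Information-style objective: every dimension counts about equally,
# so the 500 nuisance dimensions dominate what the model is pushed to capture.
info_weighted = np.sum(rigid_body_err**2) + np.sum(surface_detail_err**2)

# Energy-style objective: errors are weighted by the physical scale of what
# they describe, so the rigid-body part dominates instead.
energy_weighted = E_RIGID * np.sum(rigid_body_err**2) + E_DETAIL * np.sum(surface_detail_err**2)

print(info_weighted, energy_weighted)
```

The point is only about which term dominates the gradient, not about the specific numbers.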

... kinda

The issue is that we can't just tell a neural network to model the energy in a bunch of pictures, because it doesn't have access to the ground truth. Maybe by using the correct loss function [? · GW], we could fix it, but I'm not sure about that, and at the very least it is unproven so far.

I think another possibility is that there's something fundamentally wrong with this framing:

An agent is characterized by a Markov blanket in the world that has informational input/output channels, through which the agent takes in information to observe the world and sends out information to act on it.

As humans, we have a natural concept of e.g. force and energy because we can use our muscles to apply a force, and we take in energy through food. That is, our input/output channels are not simply about information, and instead they also cover energetic dynamics.

This can, technically speaking, be modelled with the computationalist approach. You can say the agent has uncertainty over the size of the effects of its actions, and as it learns to model these effect sizes, it gets information about energy. But actually formalizing this would require quite complex derivations with a recursive structure based on the value of information, so it's unclear what would happen, and the computationalist approach really isn't mathematically oriented towards making it easy.

9 comments

Comments sorted by top scores.

comment by quetzal_rainbow · 2024-09-14T21:55:27.601Z · LW(p) · GW(p)

There is a fairly-obvious gap in the above story, in that it lacks any notion of energy (or entropy, temperature, etc.).

I think this is as far away from the truth as it can possibly be [? · GW].

Also, conservation of energy is a consequence of pretty simple and nice properties of the environment, not arbitrary. The reason it's hard to preserve in physics simulations is that accumulated errors in numerical approximations violate said properties (error accumulation is obviously not symmetric in time).

I think you are wrong in a purely practical sense. We don't care about most energy. Oceans have a lot of energy in them, but we don't care, because 99%+ of it is unavailable, being in a high-entropy state. We care about exploiting free energy, which is present only in low-entropy, high-information states. And, as expected, we learn to notice such states very quickly because they are very cheap sources of uncertainty reduction in our world models.

Replies from: tailcalled
comment by tailcalled · 2024-09-15T06:32:12.964Z · LW(p) · GW(p)

I don't mean that rationalists deny thermodynamics, just that it doesn't take a sufficiently central place, in particular when reasoning about larger-scale phenomena than physics or chemistry, where it's hard to precisely quantify the energies, or especially when considering mathematical models of agency (as mentioned, rationalists usually use argmax + Bayes).

I think this is as far away from the truth as it can possibly be.

This post takes a funky left turn at the end, making it a lesson that forming accurate beliefs requires observations. That's a strange conclusion, because it also applies to systems where thermodynamics doesn't hold.

Also, conservation of energy is a consequence of pretty much simple and nice properties of environment, not arbitrary.

Conservation of energy doesn't follow from time symmetry alone (if it did, that would be pretty nice). It follows from time symmetry combined with either Lagrangian/Hamiltonian mechanics or quantum mechanics (a sketch of the derivation is below the list). There are several problems here:

  1. The usual representations used in rationalist toy models, e.g. MDPs, do not get conservation of energy.
  2. Lagrangian/Hamiltonian/quantum mechanics don't really model dissipative phenomena. I've heard that there are some extensions that do, but they seem obscure.
  3. Partly because of the above, and partly because of the intrinsic reductionism of these models, we don't have anything even resembling such models for higher-level phenomena like politics, nutrition, or programming, even though the point about energy and agency holds in those areas too.
  4. Energy accounting is uninteresting unless it can be localized to specific phenomena, which is not guaranteed by this theorem.
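
For reference, the Lagrangian half of the derivation (the standard Noether argument for time translations): if $L(q, \dot q)$ has no explicit time dependence, then using the Euler–Lagrange equations,

$$\frac{d}{dt}\left(\sum_i \dot q_i \frac{\partial L}{\partial \dot q_i} - L\right) = \sum_i \ddot q_i \frac{\partial L}{\partial \dot q_i} + \sum_i \dot q_i \frac{\partial L}{\partial q_i} - \frac{dL}{dt} = 0,$$

so $E = \sum_i \dot q_i\, \partial L/\partial \dot q_i - L$ is conserved. Drop the Lagrangian structure, as an MDP does, and there is no such conserved quantity to point at.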

I think you are wrong in a purely practical sense. We don't care about most energy. Oceans have a lot of energy in them, but we don't care, because 99%+ of it is unavailable, being in a high-entropy state. We care about exploiting free energy, which is present only in low-entropy, high-information states. And, as expected, we learn to notice such states very quickly because they are very cheap sources of uncertainty reduction in our world models.

It's true that free energy is especially important, but I'm unconvinced rationalists jump as strongly onto it as you say. Free energy is pretty cheap, so between your power outlet and your snack cabinet you are pretty unconstrained by it.

comment by tailcalled · 2024-08-17T18:51:00.365Z · LW(p) · GW(p)

Wrote a followup that maybe adds more clarity to it: The causal backbone conjecture [LW · GW].

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-08-18T02:41:54.554Z · LW(p) · GW(p)

I think this ties into modeling invariant abstractions of objects, and coming up with models that generalize to probable future states.

I think partly this is addressed in animals (including humans) by having a fraction of their brain devoted to predicting future sensations and forming a world model out of received sensations, but also having an action model that attempts to influence the world and self-models its own actions and their effects. So for things like the cubes, we learn a model of their motion not just from watching video of them, but by stacking them up and knocking them over. We play and explore, and these manipulations allow us to test hypotheses.

I expect that having a portion of a model's training be interactive exploration of a simulation would help close this gap.

Replies from: tailcalled
comment by tailcalled · 2024-08-18T08:32:50.691Z · LW(p) · GW(p)

The thing is, your actions can lead to additional scratches to the cubes, so actions aren't causally separated from scratches. And the scratches will be visible on future states too, so if your model attempts to predict future states, it will attempt to predict the scratches.

I suspect ultimately one needs to have an explicit bias in favor of modelling large things accurately. Actions can help nail down the size comparisons, but they don't directly force you to focus on the larger things.

comment by Noosphere89 (sharmake-farah) · 2024-08-18T18:34:18.743Z · LW(p) · GW(p)

While I agree that physical laws like conservation of energy are extremely arbitrary from a computational standpoint, I do think that once we try to exhaust all the why questions of why our universe has the physical laws and constants that it does, a lot of the answer is "it's arbitrary, and we just happen to live in this universe instead of a different one."

comment by Noosphere89 (sharmake-farah) · 2024-08-18T18:31:00.843Z · LW(p) · GW(p)

Also, about this point in particular:

It captures a lot of unimportant information, which makes the models more unwieldy. Really, information is a cost: the point of a map is not to faithfully reflect the territory, because that would make it really expensive to read the map. Rather, the point of a map is to give the simplest way of thinking about the most important features of the territory. For instance, literal maps often use flat colors (low information!) to represent different kinds of terrain (important factors!).

Yeah, this is probably one of the biggest differences between idealized notions of computation/intelligence, like AIXI (at the weak end) and the Universal Hypercomputer model from the paper "The Universal Hypercomputer" (at the strong end), and real agents, because of computation costs.

Idealized agents can often treat their maps as equivalent to a given territory, at least with full simulation/computation, while real agents must have differences between the map and the territory they're trying to model, so the saying "the map is not the territory" holds for us.

comment by Noosphere89 (sharmake-farah) · 2024-08-18T18:24:26.539Z · LW(p) · GW(p)

I think a lot of this post can be boiled down to "Computationalism does not scale down well, and thus it's not generally useful to try to capture all the information that is reasonably non-independent of other information, even if it's philosophically correct to be a computationalist."

And yeah, this is extremely unsurprising: Even theoretically correct models/philosophies can often be intractable to actually implement, so you have to look for approximations or use a different theory, even if not philosophically/mathematically justified in the limit.

And yeah, trying to have a prior over all conceivable computations is ridiculously intractable, especially if we want the computational model to be very expressive/general like the computational models below (abstracts quoted), primarily because they can express almost everything in theory (ignore their physical plausibility for now, because this isn't intended to show we can actually build these):

https://arxiv.org/abs/1806.08747

This paper describes a type of infinitary computer (a hypercomputer) capable of computing truth in initial levels of the set theoretic universe, V. The proper class of such hypercomputers is called a universal hypercomputer. There are two basic variants of hypercomputer: a serial hypercomputer and a parallel hypercomputer. The set of computable functions of the two variants is identical but the parallel hypercomputer is in general faster than a serial hypercomputer (as measured by an ordinal complexity measure). Insights into set theory using information theory and a universal hypercomputer are possible, and it is argued that the Generalised Continuum Hypothesis can be regarded as a information-theoretic principle, which follows from an information minimisation principle.

https://www.semanticscholar.org/paper/The-many-forms-of-hypercomputation-Ord/2e1acfc8fce8ef6701a2c8a5d53f59b4fdacab3a

This paper surveys a wide range of proposed hypermachines, examining the resources that they require and the capabilities that they possess.

https://arxiv.org/abs/math/0209332

Due to common misconceptions about the Church-Turing thesis, it has been widely assumed that the Turing machine provides an upper bound on what is computable. This is not so. The new field of hypercomputation studies models of computation that can compute more than the Turing machine and addresses their implications. In this report, I survey much of the work that has been done on hypercomputation, explaining how such non-classical models fit into the classical theory of computation and comparing their relative powers. I also examine the physical requirements for such machines to be constructible and the kinds of hypercomputation that may be possible within the universe. Finally, I show how the possibility of hypercomputation weakens the impact of Godel's Incompleteness Theorem and Chaitin's discovery of 'randomness' within arithmetic.

So yes, it is ridiculously intractable to focus on the class of all computational experiences ever, as well as their non-independent information.

So my guess is you're looking for a tractable model of the agent-like structure problem while still being very general, but are willing to put restrictions on its generality.

Is that right?

Replies from: tailcalled
comment by tailcalled · 2024-08-18T19:43:31.767Z · LW(p) · GW(p)

So my guess is you're looking for a tractable model of the agent-like structure problem while still being very general, but are willing to put restrictions on its generality.

Is that right?

I think everyone is doing that; my point is more about what the appropriate notion of approximation is. Most people think the appropriate notion of approximation is something like KL-divergence, and I've discovered that to be false: information-based definitions of "approximation" don't work.
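
(For reference, the information-based notion of approximation I mean is the usual one,

$$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)},$$

which scores an approximation $Q$ purely by how many bits it loses about $P$, with no reference to the energy, or any other physical size, of what those bits describe.)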