W. Ross Ashby's Law of Requisite Variety (1956) suggests fundamental limits to human control over more capable systems.
This law sounds super enticing and I want to understand it more. Could you spell out how the law suggests this?
I did a quick search of LessWrong and Wikipedia regarding this law.
- "... Ashby's "Law of requisite variety", which roughly speaking states that a system can only remain in homeostasis if it has more internal states than the external states it encounters." from Yuxi_Liu, "Cybernetic dreams".
- "Either the AI is too simple to be an independent robust agent in human society, or it needs to be approximately as complex as humans themselves. Cf. the law of requisite variety." from Roman Leventov, "For alignment, we should simultaneously use multiple theories of cognition and value".
- "This law (of which Shannon's theorem 10 relating to the suppression of noise is a special case) says that if a certain quantity of disturbance is prevented by a regulator from reaching some essential variables, then that regulator must be capable of exerting at least that quantity of selection." from W. R. Ashby (1960), "Design for a Brain", p. 229, quoted via Wikipedia page.
Enough testimonials, the Wikipedia page itself describes the law as based on the observation that in a two-player game between the environment (disturber) and a system trying to maintain stasis (regulator), if the environment has D moves that all lead to different outcomes (given any move from the system), and the system has R possible responses, then the best the system can do is restrict the number of outcomes to D/R.
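To make the D/R bound concrete, here is a brute-force sketch of that two-player game. The outcome table is an illustrative choice of mine (not from Ashby), arranged so that for any fixed response, the D disturbances lead to D different outcomes, as the law requires; the regulator's best strategy then pins the number of distinct outcomes to the ceiling of D/R.

```python
from itertools import product
from math import ceil

def min_outcomes(D, R):
    """Brute-force the regulator's best strategy in the game above.

    Outcome table (an illustrative choice, not from Ashby): the outcome of
    disturbance d against response r is (d + r) mod D. For any fixed
    response r, the D disturbances give D different outcomes.
    """
    table = lambda d, r: (d + r) % D
    best = D
    # A regulator strategy assigns one of the R responses to each of the
    # D disturbances; enumerate them all and keep the fewest outcomes.
    for strategy in product(range(R), repeat=D):
        outcomes = {table(d, strategy[d]) for d in range(D)}
        best = min(best, len(outcomes))
    return best

for D, R in [(6, 2), (6, 3), (9, 3), (7, 3)]:
    print(D, R, min_outcomes(D, R), ceil(D / R))  # best matches ceil(D/R)
```

So with more "variety" on the regulator's side (larger R), the disturbance can be restricted to fewer outcomes, and vice versa.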
I can see the link between this and the descriptions from Yuxi_Liu, Roman Leventov, and Ashby. But your reading is a couple of steps removed. How did you get from D/R outcomes in this game to "fundamental limits to human control over more capable systems"? My guess is that you simply mean that if the more capable system is more complex / has more available moves / more "variety" than humans, then the law will apply with the human as the regulator and the AI as the disturber. Is that right? Could you comment on how you see capability in terms of variety?
I like this analogy, but there are a couple of features that I think make it hard to think about:
1. The human wants to play, not just to win. You stipulated that "the human aims to win, and instructs their AI teammate to prioritise winning above all else". The dilemma then arises because the aim to win cuts against the human having agency and control. Your takeaway is "Even perfectly aligned systems, genuinely pursuing human goals, might naturally evolve to restrict human agency."
So in this analogy, it seems that "winning" stands for the human's true goals. But (as you acknowledge) it seems like the human doesn't just want to win, but actually wants both some "winning" and some "agency". You've implicitly tried to factor the entirety of the human's goals into the outcome of the game, but you have left some of the agency behind, outside of this objective, and this is what creates the dilemma.
For an AI system that is truly 'perfectly aligned'---truly pursuing the human's goals, it seems like either
- (A) the AI partner would not pursue winning above all else, but would allow some human control at the cost of some 'winning', or
- (B) if it were possible to actually factor the human's meta-preference for having agency into 'winning', then we shouldn't care if the AI plays to win above all else, because that already accounts for the human's desired amount of agency.
For an AI system not perfectly aligned, this becomes a different game (in the sense of game theory). It's a three player game between the AI partner, the human partner, and the opponent, each of which have different objectives (the difference between the AI and human partners is that the human wants some combination of 'winning' and 'agency' while the AI just wants 'winning'; probably the opponent just wants both of them to lose). One interesting dynamic that could then arise is that the human partner could threaten and punish the AI partner by making worse moves than the best moves they can see if the AI doesn't give them enough control. To stop the human from doing this, the AI either has to
- (C) negotiate to give the human some control, or
- (D) remove all control from the human (e.g. force the queen to have no bad moves or no moves at all).
In particular, (D) seems like it would be expensive for the AI partner as it requires playing without the queen (against an opponent with no such restriction), so maybe the AI will let the human play sometimes.
2. I don't think it needs to be a stochastic chess variant. The game is set up so that the human gets to play whenever they roll a 6 on a (presumably six-sided) die. You said this stands in for the idea that in the real world, the AI system makes decisions on a faster timescale than the human. But this particular way of implementing the speed differential as a game mechanic comes at the cost of making the chess variant stochastic. I think that determinism is an important feature of standard chess. In theory, you can solve chess with an adversarial look-ahead search, mini-max, alpha-beta pruning, etc. But as soon as the die becomes involved, all of the players have to switch to expecti-mini-max. Rolling a six can suddenly throw off the tempo in your delicate exchange or your whirlwind manoeuvre. Etc.
I'm a novice at chess, so it's not like this is going to make a difference to how I think about the analogy (I will struggle to think strategically in both cases). And maybe a sufficiently accomplished chess player is familiar with stochastic variants already. But for someone in-between who is familiar with deterministic chess, maybe it's easier to consider a non-stochastic variant of the chess game, for example where the human gets the option to play every 6 turns (deterministically), which gives the same speed differential in expectation.
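As an aside, the mini-max vs. expecti-mini-max distinction above can be sketched on a toy game tree (my own construction, not chess):

```python
# Toy game trees: a leaf is a number; an internal node is
# ('max', children), ('min', children), or ('chance', [(prob, child), ...]).
# Just to illustrate the switch discussed above, not an actual chess engine.

def minimax(node):
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    values = [minimax(c) for c in children]
    return max(values) if kind == 'max' else min(values)

def expectiminimax(node):
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    if kind == 'chance':
        # Chance nodes (e.g. "the human plays on a rolled 6") are
        # averaged over, so play is only optimal in expectation.
        return sum(p * expectiminimax(c) for p, c in children)
    values = [expectiminimax(c) for c in children]
    return max(values) if kind == 'max' else min(values)

deterministic = ('max', [('min', [3, 5]), ('min', [2, 9])])
print(minimax(deterministic))  # -> 3

# Same shape, but one branch now gated by a 1/6 chance event.
stochastic = ('max', [('chance', [(1/6, 10), (5/6, ('min', [3, 5]))]),
                      ('min', [2, 9])])
print(expectiminimax(stochastic))  # -> 25/6
```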
There is a typo in the transcript. The name of the creator of singular learning theory is "Sumio Watanabe" rather than "Sumio Aranabe".
I think these are helpful clarifying questions and comments from Leon. I saw Liam's response. I can add to some of Liam's answers about some of the definitions of singular models and singularities.
1. Conditions of regularity: Identifiability vs. regular Fisher information matrix
Liam: A regular statistical model class is one which is identifiable (so $p(x \mid w_1) = p(x \mid w_2)$ implies that $w_1 = w_2$), and has positive definite Fisher information matrix $I(w)$ for all $w \in W$.
Leon: The rest of the article seems to mainly focus on the case of the Fisher information matrix. In particular, you didn't show an example of a non-regular model where the Fisher information matrix is positive definite everywhere.
Is it correct to assume models which are merely non-regular because the map from parameters to distributions is non-injective aren't that interesting, and so you maybe don't even want to call them singular?
As Liam said, I think the answer is yes---the emphasis of singular learning theory is on the degenerate Fisher information matrix (FIM) case. Strictly speaking, all three classes of models (regular, non-identifiable, degenerate FIM) are "singular", as "singular" is defined by Watanabe. But the emphasis is definitely on the 'more' singular models (with degenerate FIM) which is the most complex case and also includes neural networks.
As for non-identifiability being uninteresting, as I understand, non-regularity arising from certain kinds of non-local non-identifiability can be easily dealt with by re-parametrising the model or just restricting consideration to some neighbourhood of (one copy of) the true parameter, or by similar tricks. So, the statistics of learning in these models is not strictly-speaking regular to begin with, but we can still get away with regular statistics by applying such tricks.
Liam mentions the permutation symmetries in neural networks as an example. To clarify, this symmetry usually creates a discrete set of equivalent parameters that are separated from each other in parameter space. But the posterior will also be reflected along these symmetries so you could just get away with considering a single 'slice' of the parameter space where every function is represented by at most one parameter (if this were the only source of non-identifiability---it turns out that's not true for neural networks).
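As a concrete sketch of that permutation symmetry (a toy one-hidden-layer tanh network of my own construction, not anything from the post): swapping hidden units gives a different point in parameter space that implements exactly the same function.

```python
import numpy as np

# f(x) = sum_i a_i * tanh(b_i * x): a one-hidden-layer tanh network.
def f(x, a, b):
    return np.tanh(np.outer(x, b)) @ a

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)  # three hidden units
x = rng.normal(size=10)                        # some sample inputs

# Permuting the (a_i, b_i) pairs together relabels the hidden units:
perm = [2, 0, 1]
assert np.allclose(f(x, a, b), f(x, a[perm], b[perm]))
print("permuted parameters implement the same function")
```

Each such permuted copy sits at a separate point of parameter space, which is why restricting attention to one 'slice' can deal with this particular source of non-identifiability.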
It's worth noting that these tricks don't generally apply to models with local non-identifiability. Local non-identifiability means, roughly, that there are extra true parameters in every neighbourhood of some true parameter. However, local non-identifiability implies that the FIM is degenerate at that true parameter, so again we are back in the degenerate FIM case.
2. Linear independence condition on Fisher information matrix degeneracy
Leon: What is $x$ in this formula ["$\{ \partial_{w_j} f(x, w) \}_j$ is linearly independent"]? Is it fixed? Or do we average the derivatives over the input distribution?
Yeah I remember also struggling to parse this statement when I first saw it. Liam answered, but in case it's still not clear and/or someone doesn't want to follow up in Liam's thesis: $x$ is a free variable, and the condition is talking about linear dependence of the derivatives as functions of $x$.
Consider a toy example (not a real model) to help spell out the mathematical structure involved: Let $X = \mathbb{R}$ so that $x$ ranges over the real line and functions of $x$ are real functions. Then let $f_1$ and $f_2$ be functions such that $f_1(x) = x$ and $f_2(x) = 2x$. Then the set of functions $\{ f_1, f_2 \}$ is a linearly dependent set of functions because $2 f_1(x) - f_2(x) = 0$ for all $x$.
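To emphasise that this linear (in)dependence is a property of functions of $x$, not of values at some fixed $x$, here is a small NumPy check (with $f_1(x) = x$, $f_2(x) = 2x$, and $g(x) = x^2$ as my own toy choices):

```python
import numpy as np

# f1 and f2 are linearly dependent as functions of x; f1 and g are not.
f1 = lambda x: x
f2 = lambda x: 2 * x
g = lambda x: x**2

xs = np.linspace(-5, 5, 101)

# 2*f1 - f2 vanishes at EVERY x, not just at one particular x:
assert np.allclose(2 * f1(xs) - f2(xs), 0)

# Equivalently, sampling the functions as columns: a dependent pair spans
# a rank-1 column space, an independent pair a rank-2 one.
A_dep = np.stack([f1(xs), f2(xs)], axis=1)
A_ind = np.stack([f1(xs), g(xs)], axis=1)
assert np.linalg.matrix_rank(A_dep) == 1
assert np.linalg.matrix_rank(A_ind) == 2
print("linear dependence is a property of functions of x")
```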
3. Singularities vs. visually obvious singularities (self-intersecting curves)
Leon: One unrelated conceptual question: when I see people draw singularities in the loss landscape, for example in Jesse's post, they often "look singular": i.e., the set of minimal points in the loss landscape crosses itself. However, this doesn't seem to actually be the case: a perfectly smooth curve of loss-minimizing points will consist of singularities because in the direction of the curve, the derivative does not change [sic: 'the derivative is zero', or 'the loss does not change'], right? Is this correct?
Right, as Liam said, often[1] in SLT we are talking about singularities of the Kullback–Leibler loss function $K(w)$. Singularities of a function $K$ are defined as points where the function is zero and has zero gradient, i.e. $K(w) = 0$ and $\nabla K(w) = 0$. Since $K$ is non-negative, all of its zeros are also local (actually global) minima, so they also have zero gradient. Among these singularities, some are 'more singular' than others. Liam pointed to the distinction between degenerate singularities and non-degenerate singularities. More generally, we can use the RLCT as a measure of 'how singular' a singularity is (lower RLCT = more singular).
As for the intuition about visually reasoning about singularities based on the picture of a zero set: I agree this is useful, but one should also keep in mind that it is not sufficient. These curves just show the zero set, but the singularities (and their RLCTs) are defined not just based on the shape of the zero set but also based on the local shape of the function around the zero set.
Here's an example that might clarify. Consider two functions $f_1, f_2 : \mathbb{R}^2 \to \mathbb{R}$ such that $f_1(x, y) = xy$ and $f_2(x, y) = x^2 y^2$. Then these functions both have the same zero set $Z = \{ (x, y) : x = 0 \text{ or } y = 0 \}$, the union of the two axes. That set has an intersection at the origin. Observe the following:
- Both $f_1(0, 0) = 0$ and $\nabla f_1(0, 0) = (0, 0)$, so the intersection is a singularity in the case of $f_1$.
- The other points on the zero set of $f_1$ are not singular. E.g. if $x = 0$ but $y \neq 0$, then $\nabla f_1(x, y) = (y, x) = (y, 0) \neq (0, 0)$.
- Even though $f_2$ has the exact same zero set, all of its zeros are singular points! Observe $\nabla f_2(x, y) = (2xy^2, 2x^2y)$, which is zero everywhere on the zero set.
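A quick numerical check of these claims, taking $f_1(x, y) = xy$ and $f_2(x, y) = x^2 y^2$ as two functions sharing the zero set of the two axes, with their gradients written out by hand:

```python
import numpy as np

# Two functions with the same zero set (the union of the two axes):
f1 = lambda x, y: x * y          # takes negative values off the zero set
f2 = lambda x, y: x**2 * y**2    # non-negative everywhere

# Their gradients, computed by hand:
grad_f1 = lambda x, y: np.array([y, x])
grad_f2 = lambda x, y: np.array([2 * x * y**2, 2 * x**2 * y])

# (0, 0), the intersection, is a singularity of f1: value and gradient vanish.
assert f1(0, 0) == 0 and np.all(grad_f1(0, 0) == 0)

# Other points of f1's zero set are zeros but not singular points:
assert f1(0, 1) == 0 and np.any(grad_f1(0, 1) != 0)

# For f2, every point of the (same) zero set is a singular point:
for px, py in [(0, 0), (0, 1), (3, 0), (0, -2)]:
    assert f2(px, py) == 0 and np.all(grad_f2(px, py) == 0)
print("same zero set, different sets of singular points")
```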
In general, it's a true intuition that intersections of lines in zero sets correspond to singular points. But this example shows that whether non-intersecting points of the zero set are singular points depends on more than just the shape of the zero set itself.
In singular learning theory, the functions we consider are non-negative (Kullback–Leibler divergence), so you don't get functions like $f_1$ with non-critical zeros. However, the same argument here about the existence of singularities could be extended to the danger of reasoning about the extent of singularity of singular points based on just looking at the shape of the zero set: the RLCT will depend on how the function behaves in the neighbourhood, not just on the zero set.
[1] One exception, you could say, is in the definition of strictly singular models. There, as we discussed, we had a condition involving the degeneracy of the Fisher information matrix (FIM) at a parameter. Degenerate matrix = non-invertible matrix = also called singular matrix. I think you could call these parameters 'singularities' (of the model).
One subtle point in this notion of singular parameter is that the definition of the FIM at a parameter $w$ involves setting the true parameter to $w$. For a fixed true parameter, the set of singularities in the first sense (zeros of the KL loss wrt. that true parameter) will not generally coincide with the set of singularities in the second sense (parameters where the FIM is degenerate).
Alternatively, you could consider the FIM condition in the definition of a non-regular model to be saying "if a model would have degenerate singularities at some parameter if that were the true parameter, then the model is non-regular".