Does Bayes Beat Goodhart?

post by abramdemski · 2019-06-03T02:31:23.417Z · score: 45 (14 votes) · LW · GW · 26 comments

Contents

  If approximate solutions are concerning, why would mixtures of them be unconcerning?
  Overcoming regressional Goodhart requires calibrated learning.
  Bayesian solutions assume realizability.
    "We quantified our uncertainty as best we could!"
    "The True Utility Function Has Enough Weight"
  Overcoming adversarial Goodhart seems to require randomization.
  In summary:
26 comments

Stuart Armstrong has [LW · GW] claimed [LW · GW] to beat Goodhart with Bayesian uncertainty -- rather than assuming some particular objective function (which you try to make as correct as possible), you represent some uncertainty. A similar claim was made in The Optimizer's Curse and How to Beat It [LW · GW], the essay which introduced a lot of us to ... well, not Goodhart's Law itself (the post doesn't make mention of Goodhart), but, that kind of failure. I myself claimed that Bayes beats regressional Goodhart, in Robust Delegation [LW · GW]:

I now think this isn't true -- Bayes' Law doesn't beat Goodhart fully. It doesn't even beat regressional Goodhart fully. (I'll probably edit Robust Delegation to change the claim at some point.)

(Stuart makes some more detailed claims [LW · GW] about AI and the nearest-unblocked-strategy problem [LW · GW] which aren't exactly claims about Goodhart, at least according to him. I don't fully understand Stuart's perspective, and don't claim to directly address it here. I am mostly only addressing the question of the title of my post: does Bayes beat Goodhart?)

If approximate solutions are concerning, why would mixtures of them be unconcerning?

My first argument is a loose intuition: Goodhartian phenomena suggest that somewhat-correct-but-not-quite-right proxy functions are not safe to optimize (and in some sense, the more optimization pressure is applied, the less safe we expect it to be). Assigning weights to a bunch of somewhat-but-not-quite-right possibilities just gets us another somewhat-but-not-quite-right possibility. Perhaps it is able to bear more optimization pressure before breaking down, by virtue of being less incorrect. But why would we get anything stronger than that?

My intuition there doesn't address the gears of the situation adequately, though. Let's get into it.

Overcoming regressional Goodhart requires calibrated learning.

In Robust Delegation, I defined regressional Goodhart through the predictable-disappointment idea. Does Bayesian reasoning eliminate predictable disappointment?

Well, it depends on what is meant by "predictable". You could define it as predictable-by-bayes, in which case it follows that Bayes solves the problem. However, I think it is reasonable to at least add a calibration requirement: there should be no way to systematically correct estimates up or down as a function of the expected value.

Calibration seems like it does, in fact, significantly address regressional Goodhart. You can't have seen a lot of instances of an estimate being too high, and still accept that too-high estimate. It doesn't address extremal Goodhart, because calibrated learning can only guarantee that you eventually calibrate, or converge at some rate, or something like that -- extreme values that you've rarely encountered would remain a concern.
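To make the regressional-Goodhart failure mode concrete, here is a toy simulation (my own construction, with made-up numbers): each option has a true value, we only see a noisy estimate, and we pick the option with the highest estimate. The winning estimate is systematically an overestimate of the winner's true value -- exactly the kind of predictable disappointment a calibrated learner would notice and correct.

```python
import random

random.seed(0)

def trial(n_options=20, noise=1.0):
    """One round of 'pick the option with the highest noisy estimate'."""
    true_vals = [random.gauss(0, 1) for _ in range(n_options)]
    estimates = [v + random.gauss(0, noise) for v in true_vals]
    best = max(range(n_options), key=lambda i: estimates[i])
    return estimates[best] - true_vals[best]  # the disappointment

gaps = [trial() for _ in range(5000)]
mean_gap = sum(gaps) / len(gaps)
print(f"average (estimate - true value) of the chosen option: {mean_gap:.2f}")
# The gap is reliably positive: selecting on the estimate selects for
# upward noise. A calibrated learner, seeing this, corrects top-ranked
# estimates downward; a fixed prior need not.
```

Nothing here depends on the specific distributions; any selection on a noisy proxy produces the same effect.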

(Stuart's "one-in-three" example in the Defeating Goodhart [LW · GW] post, and his discussion of human overconfidence more generally, is somewhat suggestive of calibration.)

Bayesian methods are not always calibrated. Calibrated learning is not always Bayesian. (For example, logical induction has good calibration properties, and so far, hasn't gotten a really satisfying Bayesian treatment.)

This might be confusing if you're used to thinking in Bayesian terms. If you think in terms of the diagram I copied from Robust Delegation, above: you have a prior P(U|o) which stipulates the probability of the true utility U given an observation o; your estimate is the expected value E[U|o] for a particular value of o; E[U|o] is not predictably correctable with respect to your prior. What's the problem?

The problem is that this line of reasoning assumes that your prior is objectively correct. This doesn't generally make sense (especially from a Bayesian perspective). So, it is perfectly consistent for you to collect many observations, and see that E[U|o] has some systematic bias. This may remain true even as you update on those observations (because Bayesian learning doesn't guarantee any calibration property in general!).

The faulty assumption that your probability distribution is correct is often replaced with the (weaker, but still problematic) assumption that at least one hypothesis within your distribution is objectively correct -- the realizability assumption.

Bayesian solutions assume realizability.

As discussed in Embedded World Models [LW · GW], the realizability assumption is the assumption that (at least) one of your hypotheses represents the true state of affairs. Bayesian methods often (though not always) require a realizability assumption in order to get strong guarantees. Frequentist methods rarely require such an assumption (whatever else you may say about frequentist methods). Calibration is an example of that -- a Bayesian can get calibration under the assumption of realizability, but, we might want a stronger guarantee of calibration which holds even in absence of realizability.
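The non-realizability failure can be shown in a few lines (a hedged toy of my own, not from the post): the true coin is fair, but the agent's hypothesis class only contains p = 0.1 and p = 0.9. The posterior never settles, and the predictive probability stays far from the truth almost all the time.

```python
import math
import random

random.seed(1)

# log P(h=0.9) / P(h=0.1), starting from a 50/50 prior over the two hypotheses
log_odds = 0.0
step = math.log(0.9 / 0.1)  # likelihood-ratio update per flip
predictives = []
for _ in range(2000):
    heads = random.random() < 0.5      # true fair coin -- outside the class
    log_odds += step if heads else -step
    w = 1 / (1 + math.exp(-log_odds))  # posterior weight on h = 0.9
    predictives.append(0.1 * (1 - w) + 0.9 * w)

# The posterior tracks the random walk of (heads - tails), so the
# predictive probability oscillates between ~0.1 and ~0.9 forever,
# rarely landing near the true 0.5: no convergence, no calibration.
near_truth = sum(abs(p - 0.5) < 0.1 for p in predictives) / len(predictives)
print(f"fraction of steps with predictive within 0.1 of truth: {near_truth:.2f}")
```

A frequentist-style calibration guarantee would catch this -- the predictions are systematically correctable -- even though every individual Bayesian update is done correctly.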

"We quantified our uncertainty as best we could!"

One possible bayes-beats-goodhart argument is: "Once we quantify our uncertainty with a probability distribution over possible utility functions, the best we can possibly do is to choose whatever maximizes expected value. Anything else is decision-theoretically sub-optimal."

Do you think that the true utility function is really sampled from the given distribution, in some objective sense? And the probability distribution also quantifies all the things which can count as evidence? If so, fine. Maximizing expectation is the objectively best strategy. This eliminates all types of Goodhart by positing that we've already modeled the possibilities sufficiently well: extremal cases are modeled correctly; adversarial effects are already accounted for; etc.

However, this is unrealistic due to embeddedness: the outside world is much more complicated than any probability distribution which we can explicitly use, since we are ourselves a small part of that world.

Alternatively, do you think the probability distribution really codifies your precise subjective uncertainty? Ok, sure, that would also justify the argument.

Realistically, though, an implementation of this isn't going to be representing your precise subjective beliefs (to the extent you even have precise subjective beliefs). It has to hope to have a prior which is "good enough".

In what sense might it be "good enough"?

An obvious problem is that a distribution might be overconfident in a wrong conclusion, which will obviously be bad. The fix for this appears to be: make sure that the distribution is "sufficiently broad", expressing a fairly high amount of uncertainty. But, why would this be good?

Well, one might argue: it can only be worse than our true uncertainty to the extent that it ends up assigning too little weight to the correct option. So, if the distribution doesn't assign too little weight to any of the possibilities which we intuitively assign non-negligible weight, things should be fine.

"The True Utility Function Has Enough Weight"

First, even assuming the framing of "true utility function" makes sense, it isn't obvious to me that the argument makes sense.

If there's a true utility function U* which is assigned weight p, and we apply a whole lot of optimization pressure to the overall mixture distribution, then it is perfectly possible that U* gets compromised for the sake of satisfying a large number of the other U_i. The weight p determines a ratio at which trade-offs can occur, not a ratio of the overall resources which we will get or anything like that.

A first-pass analysis is that p has to be more than 1/2 to guarantee any consideration; at any weight less than that, it's possible that U* is as low as it can go in the optimized solution, because some outcome was sufficiently good for all the other potential utility functions that it made sense to "take the hit" with respect to U*. We can't formally say "this probably won't happen, because the odds that the best-looking option is specifically terrible for U* are low" without assuming something about the distribution of highly optimized solutions.

(Such an analysis might be interesting; I don't know if anyone has investigated from that angle. But, it seems somewhat unlikely to do us good, since it doesn't seem like we can make very nice assumptions about what highly-optimized solutions look like.)
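A minimal sketch of the worst case above (the outcomes and numbers are invented, purely for illustration): when U* has weight p < 1/2, maximizing the mixture can prefer an outcome that drives U* to its minimum, so long as that outcome is good enough for everything else.

```python
p = 0.4  # weight on the true utility function U*

# Three candidate outcomes, scored (U*, aggregate of all other hypotheses),
# each on a 0..1 scale.
outcomes = {
    "compromise": (0.6, 0.5),    # decent for both
    "u_star_best": (1.0, 0.1),   # great for U*, bad for the rest
    "siren": (0.0, 1.0),         # worst case for U*, perfect for the rest
}

def mixture_score(u_star, u_rest):
    return p * u_star + (1 - p) * u_rest

best = max(outcomes, key=lambda k: mixture_score(*outcomes[k]))
print(best)  # the mixture maximizer picks the outcome that zeroes out U*
```

Here "siren" scores 0.6 under the mixture, beating both alternatives, even though U* gets nothing; raising p above 1/2 flips the ranking.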

In reality, the worst-case analysis is better than this, because many of the more-plausible U_i should have a lot of "overlap" with U*; after all, they were given high weight because they appeared plausible somehow (they agreed with human intuitions, or predicted human behavior, etc.). We could try to formally define "overlap" and see what assumptions we need to guarantee better-than-worst-case outcomes. (This might have some interesting learning-theoretic implications for value learning, even.)

However, this whole framing, where we assume that there's a U* and think about its weight, is suspect. Why should we think that there's a "true" utility function which captures our preferences? And, if there is, why should we assume that it has an explicit representation in the hypothesis space?

If we drop this assumption, we get the classical problems associated with non-realizability in Bayesian learning. Beliefs may not converge at all, as evidence accumulates; they could keep oscillating due to inconsistent evidence. Under the interpretation where we still assume a "true" utility function but we don't assume that it is explicitly representable within the hypothesis space, there isn't a clear guarantee we can get (although perhaps the "overlap" analysis can help here). If we don't assume a true utility function at all, then it isn't clear how to even ask questions about how well we do (although I'm not saying there isn't a useful analysis -- I'm just saying that it is unclear to me right now).

Stuart does address this question, in the end [LW · GW]:

I've argued that an indescribable hellworld cannot exist [LW · GW]. There's a similar question as to whether there exists human uncertainty about U that cannot be included in the AI's model of Δ. By definition, this uncertainty would be something that is currently unknown and unimaginable to us. However, I feel that it's far more likely to exist, than the indescribable hellworld.
Still, despite that issue, it seems to me that there are methods of dealing with the Goodhart problem/nearest unblocked strategy problem. And this involves properly accounting for all our uncertainty, directly or indirectly. If we do this well, there no longer remains a Goodhart problem at all.

Perhaps I agree, if "properly accounting for all our uncertainty" includes robustness properties such as calibrated learning, and if we restrict our attention to regressional Goodhart, ignoring the other three [LW · GW].

Well... what about the others, then?

Overcoming adversarial Goodhart seems to require randomization.

The argument here is pretty simple: adversarial Goodhart enters into the domain of game theory, in which mixed strategies tend to be very useful. Quantilization [LW · GW] is one such mixed strategy, which seems to usefully address Goodhart to a certain extent. I'm not saying that quantilization is the ultimate solution here. But, it does seem to me like quantilization is significant enough that a solution to Goodhart should say something about the class of problems which quantilization solves.

In particular, a property of quantilization which I find appealing is the way more certainty about the utility function implies that more optimization power can be safely applied to making decisions. This informs my intuition that applying arbitrarily high optimization power does not become safe simply because you've explicitly represented uncertainty about utility functions -- no matter how accurately, short of "perfectly accurately" (which isn't even a meaningful concept), it only seems to justify a limited amount of optimization pressure. This story may be an incorrect one, but if so, I'd like to really understand why it is incorrect.
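A quantilizer is easy to sketch (this follows the general idea as described in the post; the action set and proxy below are stand-ins I made up): instead of taking the argmax of the proxy utility, sample uniformly from the top q fraction of a trusted base distribution, ranked by the proxy. Smaller q means more optimization pressure, so more confidence in the proxy justifies a smaller q.

```python
import random

random.seed(2)

def quantilize(actions, proxy, q):
    """Sample uniformly from the top q fraction of `actions` by `proxy` score."""
    ranked = sorted(actions, key=proxy, reverse=True)
    k = max(1, int(len(ranked) * q))  # size of the top-q slice
    return random.choice(ranked[:k])

# Base distribution: uniform over 1000 stand-in actions; the proxy just
# scores an action by its index (purely illustrative).
actions = list(range(1000))
proxy = lambda a: a
pick = quantilize(actions, proxy, q=0.1)
print(pick)  # some action in the top 10% by proxy, not necessarily the max
```

Because the quantilizer only ever lands on actions the trusted base distribution already considers ordinary, an adversarially mis-scored action can capture at most 1/q times its base probability of the outcome -- that bound is the game-theoretic content.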

Unlike the previous sections, this doesn't necessarily step outside of typical Bayesian thought, since this kind of game-theoretic thinking is more or less within the purview of Bayesianism. However, the simple "Bayes solves Goodhart" story doesn't explicitly address this.

(I haven't addressed causal Goodhart anywhere in this essay, since it opens up the whole decision-theoretic can of worms, which seems somewhat beside the main point. (I suppose, arguably, game-theoretic concerns could be beside the point as well -- but, they feel more directly relevant to me, since quantilization is fairly directly about solving Goodhart.))

In summary:

26 comments

Comments sorted by top scores.

comment by Stuart_Armstrong · 2019-06-03T15:19:40.692Z · score: 21 (5 votes) · LW · GW

Thanks for this post! Good insights that refined my arguments.

I'll present three points:

  1. I fully agree that realizability is needed here. In practice, for the research I'm doing, I'm defining the desired utility as being defined by a constructive process [LW · GW]. Therefore the correct human preference set is in there by definition. This requires that the set of possible utilities be massive enough that we're confident we didn't miss anything. Then, because the process is constructive, it has to be realizable once we've defined the "normative assumptions" that map observations to updates of value functions.
  2. One (partial) rejoinder to "the outside world is much more complicated than any probability distribution which we can explicitly use, since we are ourselves a small part of that world" is that our meta-preferences are conditional [LW · GW]; we don't need to fully take into account all future problems we might encounter, we simply have to define something that, conditional on encountering those problems, will see them as problems (though beware this problem [LW · GW]).
  3. Finally, on "A first-pass analysis is that p has to be more than 1/2 to guarantee any consideration; at any weight less than that, it's possible that U* is as low as it can go in the optimized solution": this is only correct if U* is roughly linear in resources. If we assume that U* has steep losses if we lose some key variable (otherwise known as the "value is fragile" assumption), then losses to U* will likely be limited; see this post [LW · GW]. This is the kind of information that you need to include if you have any chance of defeating Goodhart.

I think we agree that Goodhart can be ameliorated by adding this extra information/uncertainty; it's not clear whether it can be completely resolved.

comment by abramdemski · 2019-06-03T23:52:23.953Z · score: 17 (6 votes) · LW · GW
I fully agree that realizability is needed here. In practice, for the research I'm doing, I'm defining the desired utility as being defined by a constructive process [LW · GW]. Therefore the correct human preference set is in there by definition. This requires that the set of possible utilities be massive enough that we're confident we didn't miss anything. Then, because the process is constructive, it has to be realizable once we've defined the "normative assumptions" that map observations to updates of value functions.

My current intuition is that thinking in terms of non-realizable epistemology will give a more robust construction process, even though the constructive way of thinking justifies a kind of realizability assumption. This is partly because it allows us to do without the massive-enough set of hypotheses (which one may have to do without in practice), but also because it seems closer to the reality of "humans don't really have a utility function, not exactly".

However, I think I haven't sufficiently internalized your point about utility being defined by a constructive process, so my opinion on that may change as I think about it more.

comment by abramdemski · 2019-06-04T07:48:46.907Z · score: 6 (3 votes) · LW · GW

Concerning #3: yeah, I'm currently thinking that you need to make some more assumptions. But, I'm not sure I want to make assumptions about resources. I think there may be useful assumptions related to the way the hypotheses are learned -- IE, we expect hypotheses with nontrivial weight to have a lot of agreement because they are candidate generalizations of the same data, which makes it somewhat hard to entirely dissatisfy some while satisfying others. This doesn't seem quite helpful enough, but, perhaps something in that direction.

In any case, I agree that it seems interesting to explore assumptions about the mutual satisfiability of different value functions.

comment by Stuart_Armstrong · 2019-06-04T11:35:59.064Z · score: 6 (3 votes) · LW · GW

"Resources" is more of a shorthand for "the best utility function looks like a smoothmin of a subset of the different features. Given that assumption, the best fuzzy approximation looks like a smoothmin of all the features, with different weights".

comment by John_Maxwell (John_Maxwell_IV) · 2019-06-14T03:28:09.922Z · score: 12 (3 votes) · LW · GW

By the way I just want to note that expected value isn't the only option available for aggregating utility functions. There's also stuff like Bostrom's parliament idea. I expect there are many opportunities for cross fertilization between AI safety and philosophical work on moral uncertainty.

comment by Davidmanheim · 2019-06-03T06:38:38.341Z · score: 9 (3 votes) · LW · GW

See my much shorter and less developed note to a similar effect: https://www.lesswrong.com/posts/QJwnPRBBvgaeFeiLR/uncertainty-versus-fuzziness-versus-extrapolation-desiderata#kZmpMGYGfwGKQwfZs [LW · GW] - and I agree that regressional and extremal goodhart cannot be fixed purely with his solution.

I will, however, defend some of Stuart's suggestions as they relate to causal Goodhart in a non-adversarial setting. - I'm also avoiding the can of worms of game theory. In that case, both randomization AND mixtures of multiple metrics can address Goodhart-like failures, albeit in different ways. I had been thinking about this in the context of policy - https://mpra.ub.uni-muenchen.de/90649/ - rather than AI alignment, but some of the arguments still apply. (One critical argument that doesn't fully apply is that "good enough" mitigation raises the cognitive costs of cheating to a point where aligning with the true goal is cheaper. I also noted in the paper that satisficing is useful for limiting the misalignment from metrics, and quantilization seems like one promising approach for satisficing for AGI.)

The argument for causal Goodhart is that randomization and mixed utilities are both effective in mitigating causal structure errors that lead to causal Goodhart in the one-party case. That's because the failure occurs when uncertainty or mistakes about causal structure lead to a choice of metrics that are correlated with the goal, rather than causal of the goal. However, if even some significant fraction or probability of the metric is causally connected to the goal in ways that cannot be gamed, it can greatly mitigate this class of failure.

To more clearly apply this logic to human utility: if we accidentally think that endorphins in the brain are 100% of human goals, an AGI might want to tile the universe with rats on happy drugs, or the moral equivalent. If we assign this only 50% weight, or have a 50% probability that it will be the scored outcome, and we define something that requires a different way of creating what we actually think of as happiness / life satisfaction, the optimum does not just shift to 50% of the universe tiled with rat brains. This is because the alternative class of hedonium will involve a non-trivial amount of endorphins as well; as long as other solutions have anywhere close to as much endorphins, they will be preferred. (In this case, admittedly, we got the endorphin goal so wrong that 50% of the universe tiled in rats on drugs is likely -- bad enough utility functions can't be fixed with either randomization or weighting. But if a causal mistake can be fixed with either a probabilistic or a weighting solution, it seems likely it can be fixed with the other.)

comment by abramdemski · 2019-06-04T16:55:22.770Z · score: 5 (3 votes) · LW · GW

If there's 50% on a paperclips-maximizing utility function and 50% on staples, there's not really any optimization pressure put toward satisfying both.

  • As you say, there's no reason to make 50% of the universe into paperclips; that's just not what 50% probability on paperclips means.
  • It could be that there's a sorta-paperclip-sorta-staple (let's say 'stapleclip' for short), which the AGI will be motivated to find in order to get a moderately high rating according to both strategies.
  • However, it could be that trying to be both paperclip and staple at the same time reduces the overall efficiency. Maybe the most efficient nanometer-scale stapleclip is significantly larger than the most efficient paperclip or staple, as a result of having to represent the critical features of both paperclips and staples. In this case, the AGI will prefer to gamble, tiling the universe with whatever is most efficient, and giving no consideration at all to the other hypothesis.

That's the essence of my concern: uncertainty between possibilities does not particularly push toward jointly maximizing the possibilities. At least, not without further assumptions.

comment by Davidmanheim · 2019-06-06T09:16:47.236Z · score: 1 (1 votes) · LW · GW

That's all basically right, but if we're sticking to causal Goodhart, the "without further assumptions" may be where we differ. I think that if the uncertainty is over causal structures, the "correct" structure will be more likely to increase all metrics than most others.

(I'm uncertain how to do this, but) it would be interesting to explore this over causal graphs, where a system has control over a random subset of nodes, and a metric correlated to the unobservable goal is chosen. In most cases, I'd think that leads to causal Goodhart quickly; but if the set of nodes potentially used for the metric includes some that directly cause the goal, and others that can be intercepted (creating causal Goodhart), uncertainty over the metric would lead to less causal Goodharting, since targeting the actual cause should improve the correlated metrics, while the reverse is not true.

comment by Gurkenglas · 2019-06-03T10:54:55.540Z · score: 7 (2 votes) · LW · GW

Aren't they just averaging together to yield yet another somewhat-but-not-quite-right function?

Indeed we don't want such linear behavior. The AI should preserve the potential for maximization of any candidate utility function - first so it has time to acquire all the environment's evidence about the utility function, and then for the hypothetical future scenario of us deciding to shut it off.

comment by abramdemski · 2019-06-04T16:30:48.169Z · score: 2 (1 votes) · LW · GW

See this comment. [LW · GW] Stuart and I are discussing what happens after things have converged as much as they're going to [LW · GW], but there's still uncertainty left.

comment by John_Maxwell (John_Maxwell_IV) · 2019-06-05T00:54:44.928Z · score: 2 (1 votes) · LW · GW

Why not just tell the AI the truth? Which, in this case, is: Although we might not be able to give it useful information to differentiate between certain complex candidate hypotheses at this point in time, as we reflect and enhance our intelligence, this will become possible. This process of us reflecting & enhancing our intelligence will take an eyeblink in cosmic time. The amount of time from now until the heat death of the universe is so large that instead of maximizing EU according to a narrow conception of our values in the short term, it's better for the AI's actions to remain compatible with a broad swath of potential values that we might discover are the correct values on reflection.

comment by John_Maxwell (John_Maxwell_IV) · 2019-06-03T04:31:12.435Z · score: 5 (3 votes) · LW · GW

However, I think it is reasonable to at least add a calibration requirement: there should be no way to systematically correct estimates up or down as a function of the expected value.

Why is this important? If the thing with the highest score is always the best action to take, why does it matter if that score is an overestimate? Utility functions are fictional anyway right?

Calibration seems like it does, in fact, significantly address regressional Goodhart. You can't have seen a lot of instances of an estimate being too high, and still accept that too-high estimate. It doesn't address extremal Goodhart, because calibrated learning can only guarantee that you eventually calibrate, or converge at some rate, or something like that -- extreme values that you've rarely encountered would remain a concern.

If I understand correctly, extremal Goodhart is essentially the same as distributional shift from the Concrete Problems in AI Safety paper.

In any case... I'm not exactly sure what you mean by "calibration", but when I say "calibration", I refer to "knowing what you know". For example, when I took this online quiz, it told me that when I said I was extremely confident something was true, I was always right, and when said I was a little confident something was true, I was only right 66% of the time. I take this as an indicator that I'm reasonably "well-calibrated"; that is, I have a sense of what I do and don't know.

A calibrated AI system, to me, is one that correctly says "this thing I'm looking at is an unusual thing I've never encountered before, therefore my 95% credible intervals related to it are very wide, and the value of clarifying information from my overseer is very high".

Your complaints about Bayesian machine learning seem correct. My view is that addressing these complaints & making some sort of calibrated learning method competitive with deep learning is the best way to achieve FAI. I haven't yet seen an FAI problem which seems like it can't somehow be reduced to calibrated learning.

I'm not super hung up on statistical guarantees, as I haven't yet seen a way to make them in general which doesn't require making some sort of unreasonable or impractical assumption about the world (and I'm skeptical such a method exists). The way I see it, if your system is capable of self-improving in the right way, it should be able to overcome deficiencies in its world-modeling capabilities for itself. In my view, the goal is to build a system which gets safer as it self-improves & becomes better at reasoning.

If there's a true utility function which is assigned some weight, and we apply a whole lot of optimization pressure to the overall mixture distribution, then it is perfectly possible that the true utility function gets compromised for the sake of satisfying a large number of other possible utility functions.

If our AI system assigns high subjective credence to a large variety of utility functions, then the value of information which helps narrow things down is high.

To oversimplify my preferred approach: The initial prior acts as a sort of net which should have the true utility function in it somewhere. Clarifying questions to the overseer let the AI pull this net tight around a much smaller set of possible utility functions. It does this until the remaining utility functions can't easily be distinguished through clarifying questions, and/or the remaining utility functions all say to do the same thing in scenarios of near-term interest. If we find ourselves in some unusual unanticipated situation, the utility functions will likely disagree on what to do, and then the clarifying questions start again.

Why should we think that there's a "true" utility function which captures our preferences? And, if there is, why should we assume that it has an explicit representation in the hypothesis space?

Technically, you don't need this assumption. As I wrote in this comment [LW · GW]: "it's not necessary for our actual preferences to be among the ensemble of models if for any veto that our actual preferences would make, there's some model in the ensemble that also makes that veto."

(I haven't read a lot about quantilization so I can't say much about that. However, a superintelligent adversary seems like something to avoid.)

comment by abramdemski · 2019-06-03T05:44:40.100Z · score: 5 (3 votes) · LW · GW

However, I think it is reasonable to at least add a calibration requirement: there should be no way to systematically correct estimates up or down as a function of the expected value.

Why is this important? If the thing with the highest score is always the best action to take, why does it matter if that score is an overestimate? Utility functions are fictional anyway right?

If there's a systematic bias in the score, the thing with the highest score may not always be the best action to take. Calibrating the estimates may change the ranking of options.

For example, it could be that expected values above 0.99 are almost always significant overestimates, with an average true value of 0.5. A calibrated learner would observe this and systematically correct such items downwards. The new top choices would probably have values like 0.989 (if that's the only correction applied).

This provides something of a guarantee that systematic Goodhart-type problems will eventually be recognized and corrected, to the extent which they occur.
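Sketching that example in code (the numbers are taken from the paragraph above; the lookup-table "learner" is a deliberate oversimplification of a real calibration method): a learned correction for the over-0.99 bucket changes which option is ranked first.

```python
# Learned from past observations: estimates above 0.99 averaged a true
# value of 0.5. A real calibrated learner would estimate this per bucket
# from data; here it's hard-coded for illustration.
history_avg_true = {"above_0.99": 0.5}

def calibrated(estimate):
    """Apply the learned correction as a function of the raw estimate."""
    if estimate > 0.99:
        return history_avg_true["above_0.99"]
    return estimate

options = {"A": 0.995, "B": 0.989}
best_raw = max(options, key=options.get)
best_cal = max(options, key=lambda k: calibrated(options[k]))
print(best_raw, best_cal)  # the raw argmax and the calibrated argmax differ
```

The point of the example: calibration is not a monotonic relabeling of final scores, so it can reorder options, which is why it matters for decisions and not just for reported probabilities.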

A meta-rule like that, which corrects observed biases in the aggregate scores, isn't easy to represent as a direct object-level hypothesis about the data. That's why calibrated learning may not be Bayesian. And, without a calibration guarantee, you'd need some other argument as to why representing uncertainty helps to avoid Goodhart.

comment by jimrandomh · 2019-06-04T23:23:09.592Z · score: 2 (1 votes) · LW · GW

However, I think it is reasonable to at least add a calibration requirement: there should be no way to systematically correct estimates up or down as a function of the expected value.

Why is this important? If the thing with the highest score is always the best action to take, why does it matter if that score is an overestimate? Utility functions are fictional anyway right?

As a very high-level, first-pass approximation, I think the right way to think of this is as a sort of unit test; even if we can't directly see a reason why systematically incorrect estimates would cause problems in an AI design, this is an obvious enough desideratum that we should by default assume a system which breaks it is bad, unless we can prove otherwise.

Closer to the object level--yes, the highest-scoring action is the correct action to take, and if you model miscalibration as a single, monotonic function applied as the last step before deciding, then it can't change any decisions. But if miscalibration can affect any intermediate steps, then this doesn't hold. As a simple example: suppose the AI is deciding whether to pay to preserve its access to a category of options which it knows are highly subject to Regressional Goodhart.

comment by abramdemski · 2019-06-03T06:26:59.874Z · score: 2 (1 votes) · LW · GW

If our AI system assigns high subjective credence to a large variety of utility functions, then the value of information which helps narrow things down is high.

To oversimplify my preferred approach: The initial prior acts as a sort of net which should have the true utility function in it somewhere. Clarifying questions to the overseer let the AI pull this net tight around a much smaller set of possible utility functions. It does this until the remaining utility functions can’t easily be distinguished through clarifying questions, and/or the remaining utility functions all say to do the same thing in scenarios of near-term interest. If we find ourselves in some unusual unanticipated situation, the utility functions will likely disagree on what to do, and then the clarifying questions start again.

I agree that this general picture seems to make sense, but, it does not alleviate the concerns which you are responding to. To reiterate: if there are serious Goodhart-shaped concerns about mostly-correct-but-somewhat-wrong utility functions breaking under optimization pressure, then why do those concerns go away for mixture distributions?

I agree that the uncertainty will cause the AI to investigate, but at some point there will be diminishing returns to investigation; the remaining hypotheses might be utility functions which can't be differentiated by the type of evidence which the AI is able to gather. At that point, the AI will then put a lot of optimization pressure on the mixture distribution which remains. Then, what is the argument that things go well? Won't this run into siren worlds [LW · GW] and so on, by default?

Technically, you don’t need this assumption. As I wrote in this comment: “it’s not necessary for our actual preferences to be among the ensemble of models if for any veto that our actual preferences would make, there’s some model in the ensemble that also makes that veto.”

Yeah, it seems possible and interesting to formalize an argument like that.

(I haven’t read a lot about quantilization so I can’t say much about that. However, a superintelligent adversary seems like something to avoid.)

The "adversary" can be something like a mesa-optimizer arising from a search which the system runs in order to solve a problem [LW · GW]. If you've got rich enough of a hypothesis space (due to using a rich hypothesis space of world-models, or a rich set of possible human utility functions, etc etc), then you'll have some of those lurking in the hypothesis space. Reasoning in an appropriate way about the possibility, even if you manage to avoid mesa-optimizers in reality, could require game-theoretic reasoning.

OTOH, although quantilization can be justified by a story involving an actual adversary, that's not necessarily the best way to think about what it is really doing. Robustness properties [LW · GW] tend to involve some kind of universal quantifier over a bunch of possibilities. Maintaining a property under such a universal quantification is like adversarial game theory; you're trying to do well no matter what strategy the other player uses. So, robustness properties tend to be conveniently described in adversarial terms. That's basically what's going on in the case of quantilization.

Similarly, "adversarial Goodhart" doesn't have to be about superintelligent adversaries, in general. It can be about cases where we want stronger guarantees, and so, are willing to compromise some decision-theoretic optimality in return for better worst-case guarantees.

comment by John_Maxwell (John_Maxwell_IV) · 2019-06-03T07:29:50.699Z · score: 4 (2 votes) · LW · GW

I agree that this general picture seems to make sense, but, it does not alleviate the concerns which you are responding to. To reiterate: if there are serious Goodhart-shaped concerns about mostly-correct-but-somewhat-wrong utility functions breaking under optimization pressure, then why do those concerns go away for mixture distributions?

I agree that the uncertainty will cause the AI to investigate, but at some point there will be diminishing returns to investigation; the remaining hypotheses might be utility functions which can't be differentiated by the type of evidence which the AI is able to gather. At that point, the AI will then put a lot of optimization pressure on the mixture distribution which remains. Then, what is the argument that things go well? Won't this run into siren worlds and so on, by default?

The siren world scenario posits an AI that is "actually evil" and is an agent which makes plans to manipulate the user.

  • If the AI assigns decent credence to a utility function that assigns massive negative utility to "evil and unmitigated suffering", that will cause its subjective expected utility estimate of the siren world to take a big hit. It would be better off implementing the exact same world, minus the evil and unmitigated suffering. The only way it would think that world was actually better with the evil and unmitigated suffering in it is if something went very wrong during the data-gathering process.

  • I also don't think we should create an agent which makes plans to manipulate the user. The only question it should ever ask the user is the one that maximizes its subjective value of information.

The marketing world problem is very related to the discussion I had with Paul Christiano here [LW · GW]. The problem is that the overseer has insufficient time to reflect on their true values. I don't think there is any way of getting around this issue in general: Creating FAI is time-sensitive, which means we won't have enough time to reflect on our true values to be 100% sure that all the input we give the AI is good. In addition to the things I mentioned in that discussion, I think we should:

  • Make a system that's capable of changing its values "online" in response to our input. Corrigibility lets us procrastinate on moral philosophy.

  • Instead of trying to build eutopia right off the bat, build an "optimal ivory tower" for doing moral philosophy in. Essentially, implement coherent extrapolated volition in the real world.

Anyway, the reason the Goodhart-shaped concerns go away is because the thing that maximizes the mixture is likely to be something that is approved of by a diverse range of utility functions that are all semi-compatible with the input the user has provided. If there's even a single plausible utility function which strongly disapproves, the value of information of requesting clarification from the overseer regarding that particular plan is high. For a worked example, see "Smile maximization case study" in this essay.
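To make the clarification trigger concrete, here is a minimal sketch (the hypotheses, plans, and filtering rule are hypothetical, not taken from the linked essay):

```python
# Two candidate utility functions consistent with the data so far:
hypotheses = [
    lambda plan: plan["smiles"],                         # u1: maximize smiles
    lambda plan: plan["smiles"] - 10 * plan["coerced"],  # u2: ...but not coerced ones
]

plans = [
    {"name": "entertain", "smiles": 3, "coerced": 0},
    {"name": "drug",      "smiles": 9, "coerced": 1},
]

def disagreement(plan):
    scores = [u(plan) for u in hypotheses]
    return max(scores) - min(scores)

# One plausible hypothesis strongly disapproves of the "drug" plan, so the
# value of asking the overseer about that particular plan is high:
query = max(plans, key=disagreement)
assert query["name"] == "drug"   # u1 scores it 9, u2 scores it -1

# If the overseer vetoes it, hypotheses that endorsed it are filtered out,
# tightening the net around the true utility function:
hypotheses = [u for u in hypotheses if u(query) <= u(plans[0])]
assert len(hypotheses) == 1
```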

As I said, I think Goodhart's law is largely about distributional shift. My scheme incentivizes the AI to mostly take "on-distribution" plans: plans it is confident are good, because many different ways of looking at the data all point to them being good. "Off-distribution" plans will tend to benefit from clarification first: Some ways of extrapolating the data say they are good, others say they are bad, so VoI is high.

the remaining hypotheses might be utility functions which can't be differentiated by the type of evidence which the AI is able to gather

Thanks for bringing this up, I'll think about it. Part of me wants to say "if the AI has wrung all the information it possibly can from the user, and it is well-calibrated [in the sense I defined the term above], then it should just maximize its subjective expected utility at that point, because maximizing expected utility is just what you do!" Or: "If the overseer isn't capable of evaluating plans anymore because they are too complex, maybe it is time for the AI to help the overseer upgrade their intelligence!" But maybe there's an elegant way to implement a more conservative design. (You could, for example, disallow the execution of any plan that the AI thought there was at least a 5% chance was below some utility threshold. But that involves the use of two arbitrary parameters, which seems inelegant.)

comment by abramdemski · 2019-06-03T23:43:38.831Z · score: 4 (2 votes) · LW · GW

I am a little frustrated with your reply (particularly the first half), but I'm not sure whether you're really missing my point (perhaps I'll have to think of a different way of explaining it) or addressing it without giving me enough of an argument to connect the dots. I'll have to think more about some of your points.

Many of your statements seem true for moderately-intelligent systems of the sort you describe, but, don't clearly hold up when a lot of optimization pressure is applied.

If there's even a single plausible utility function which strongly disapproves, the value of information of requesting clarification from the overseer regarding that particular plan is high.

The VOI incentive can't be so strong that the AI is willing to pay arbitrarily high costs (commit the resources of the whole galaxy to investigating ever-finer details of human preferences, deconstruct each human atom by atom, etc...). So, at some point, it can be worthwhile to entirely compromise one somewhat-plausible utility function for the sake of others.

This would be untrue if, for example, the system maximized the weighted product of the utility functions rather than the weighted sum (each hypothesis's weight being used as an exponent on that hypothesis's utility). It would then actually never be worth it to entirely zero out one possible utility function for the sake of optimizing others. That proposal likely has its own issues, but I mention it just to make clear that I'm not bemoaning an inevitable fact of decision theory -- there are alternatives.
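A toy comparison of the two aggregation rules (the plans and their scores are hypothetical):

```python
# Two plans scored under two equally weighted utility hypotheses:
plans = {
    "balanced": (0.6, 0.6),  # decent under both hypotheses
    "extreme":  (0.0, 2.0),  # zeroes out u1 to max out u2
}
weights = (0.5, 0.5)

def weighted_sum(us):
    return sum(w * u for w, u in zip(weights, us))

def weighted_product(us):
    prod = 1.0
    for w, u in zip(weights, us):
        prod *= u ** w  # weight used as an exponent
    return prod

best_sum = max(plans, key=lambda p: weighted_sum(plans[p]))
best_prod = max(plans, key=lambda p: weighted_product(plans[p]))
assert best_sum == "extreme"    # the mixture happily sacrifices u1 entirely
assert best_prod == "balanced"  # the product never zeroes out a hypothesis
```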

As I said, I think Goodhart's law is largely about distributional shift. My scheme incentivizes the AI to mostly take "on-distribution" plans: plans it is confident are good, because many different ways of looking at the data all point to them being good.

This is one of the assertions which seems generally true of moderately intelligent systems optimizing under value uncertainty, but doesn't seem to hold up as a lot of optimization pressure is applied. Good plans will tend to be on-distribution, because that's a good way to reap the gains of many different remaining hypotheses which agree for on-distribution things but disagree elsewhere. But why would the very best plans tend to be on-distribution? Why wouldn't they find weird corner cases where many of the hypotheses give extremely high scores not normally achievable?

Part of me wants to say "if the AI has wrung all the information it possibly can from the user, and it is well-calibrated [in the sense I defined the term above], then it should just maximize its subjective expected utility at that point, because maximizing expected utility is just what you do!" Or: "If the overseer isn't capable of evaluating plans anymore because they are too complex, maybe it is time for the AI to help the overseer upgrade their intelligence!" But maybe there's an elegant way to implement a more conservative design.

Yeah, that's the direction I'm thinking in. By the way -- I'm not even trying to say that maximizing subjective expected utility is actually the wrong thing to do (particularly if you've got calibration properties, or knows-what-it-knows properties, or some other learning-theoretic properties which we haven't realized we want yet). I'm just saying that the case is not clear, and it seems like we'd want the case to be clear.

comment by John_Maxwell (John_Maxwell_IV) · 2019-06-04T03:49:44.496Z · score: 2 (1 votes) · LW · GW

Why wouldn't they find weird corner cases where many of the hypotheses give extremely high scores not normally achievable?

Why would a system of more-than-moderate intelligence find such incorrect hypotheses to be the most plausible ones? There would have to be some reason why all the hypotheses which strongly disliked this corner case were ruled out.

I know I'm being a little fuzzy about realizability. Let's consider how humans solve these problems. Suppose you had a pet alien, with alien values, which is capable of limited communication regarding its preferences. The goal of corrigibility is to formalize your good-faith efforts to take care of your alien to the best of your ability into an algorithm that a computer can follow. Suppose you think of some very unusual idea for taking care of your alien which, according to a few hypotheses you've come up with for what it likes, would make it extremely happy. If you were reasonably paranoid, you might address the issue of unrealized hypotheses on the spot, and attempt to craft a new hypothesis which is compatible with most/all of the data you've seen and also has your unusual idea inadvertently killing the alien. (This is a bit like "murphyjitsu" from CFAR.) If you aren't able to generate such a hypothesis, but such a hypothesis does in fact exist, and is the correct hypothesis, and the alien dies after your idea... then you probably aren't super smart.

I'm just saying that the case is not clear, and it seems like we'd want the case to be clear.

You have to start somewhere. Discussions like this can help make things clear :) I'm getting value from it... you've given me some things to think about, and I think the murphyjitsu idea is something I hadn't thought of previously :)

I think it often makes sense to reason at an informal level before proceeding to a formal one.

Edit: related discussion here [LW · GW].

comment by abramdemski · 2019-06-04T07:26:22.775Z · score: 2 (1 votes) · LW · GW
Why would a system of more-than-moderate intelligence find such incorrect hypotheses to be the most plausible ones? There would have to be some reason why all the hypotheses which strongly disliked this corner case were ruled out.

That's not the case I'm considering. I'm imagining there are hypotheses which strongly dislike the corner cases. They just happen to be out-voted.

Think of it like this. There are a bunch of hypotheses. All of them agree fairly closely with high probability on plans which are "on-distribution", ie, similar to what it has been able to get feedback from humans about (however it does that). The variation is much higher for "off-distribution" plans.

There will be some on-distribution plans which achieve somewhat-high values for all hypotheses which have significant probability. However, the AI will look for ways to achieve even higher expected utility if possible. Unless there are on-distribution plans which max out utility, it may look off-distribution. This seems plausible because the space of on-distribution plans is "smaller"; there's room for a lot to happen in the off-distribution space. That's why it reaches weird corner cases.

And, since the variation is higher in off-distribution space, there may be some options that really look quite good, but which achieve very low value under some of the plausible hypotheses. In fact, because the different remaining hypotheses are different, it seems quite plausible that highly optimized plans have to start making trade-offs which compromise one value for another. (I admit it is possible the search finds a way to just make everything better according to every hypothesis. But that is not what the search is told to do, not exactly. We can design systems which do something more like that, instead, if that is what we want.)
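A toy simulation of this dynamic (the score model is hypothetical): hypotheses agree closely on-distribution, while off-distribution plans can only achieve a high mixture score by trading one hypothesis off against the rest.

```python
import random
random.seed(0)

K = 5  # number of remaining utility hypotheses

def scores(plan):
    kind, x = plan
    if kind == "on":
        # On-distribution: every hypothesis agrees closely.
        return [0.5 + random.gauss(0, 0.05) for _ in range(K)]
    # Off-distribution: higher scores are available, but only by trading
    # one hypothesis off against the others.
    return [0.5 + x] * (K - 1) + [0.5 - x]

plans = [("on", 0.0) for _ in range(1000)] + \
        [("off", random.uniform(0, 2)) for _ in range(1000)]
scored = [(p, scores(p)) for p in plans]
(best_kind, _), best_scores = max(scored, key=lambda ps: sum(ps[1]) / K)

assert best_kind == "off"      # the mixture-maximum lands off-distribution...
assert min(best_scores) < 0.0  # ...and one hypothesis is badly compromised
```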

When I put it that way, another problem with going off-distribution is apparent: even if we do find a way to get better scores according to every plausible hypothesis by going off-distribution, we trust those scores less because they're off-distribution. Of course, we could explicitly try to build a system with the goal of remaining on-distribution. Quantilization follows fairly directly from that :)

comment by John_Maxwell (John_Maxwell_IV) · 2019-06-05T01:06:14.008Z · score: 3 (2 votes) · LW · GW

When I put it that way, another problem with going off-distribution is apparent: even if we do find a way to get better scores according to every plausible hypothesis by going off-distribution, we trust those scores less because they're off-distribution.

I realize I'm playing fast and loose with realizability again, but it seems to me that a system which is capable of being "calibrated", in the sense I defined calibration above, should be able to reason for itself that it is less knowledgeable about off-distribution points. It should have some kind of prior belief that the score for any particular off-distribution point is equal to the mean score for the entire (off-distribution?) space, and it should need a fair amount of evidence to shift this prior. I'm not necessarily specifying how concretely to achieve this, just saying that it seems like a desideratum for a "calibrated" ML system in the sense that I'm using the term.

Maybe effects like this could be achieved partially through e.g. having different hypotheses be defined on different subsets of the input space, and always including a baseline hypothesis which is just equal to the mean of the entire space.

If you want a backup system that also attempts to flag & veto any action that looks off-distribution for the sake of redundancy, that's fine by me too. I think some safety-critical software systems for e.g. space shuttles have been known to do this (do a computation in multiple different ways & aggregate them somehow to mitigate errors in any particular subsystem).

Quantilization follows fairly directly from that :)

My current understanding of quantilization is "choose randomly from the top X% of actions". I don't see how this helps very much with staying on-distribution... as you say, the off-distribution space is larger, so the majority of actions in the top X% of actions could still be off-distribution.

In any case, quantilization seems like it shouldn't work due to the fragility of value thesis. If we were to order all of the possible configurations of Earth's atoms from best to worst according to our values, the top 1% of those configurations is still mostly configurations which aren't very valuable.

comment by abramdemski · 2019-06-14T08:42:28.221Z · score: 2 (1 votes) · LW · GW
My current understanding of quantilization is "choose randomly from the top X% of actions". I don't see how this helps very much with staying on-distribution... as you say, the off-distribution space is larger, so the majority of actions in the top X% of actions could still be off-distribution.

The base distribution you take the top X% of is supposed to be related to the "on-distribution" distribution, such that sampling from the base distribution is very likely to keep things on-distribution, at least if the quantilizer's own actions are the main potential source of distributional shift. This could be the case if the quantilizer is the only powerful AGI in existence, and the actions of a powerful AGI are the only thing which would push things into sufficiently "off-distribution" possibilities for there to be a concern. (I'm not saying these are entirely reasonable assumptions; I'm just saying that this is one way of thinking about quantilization.)
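A minimal sketch of a quantilizer under that framing (the base distribution here is a stand-in for "typical" actions, and the proxy utility is hypothetical):

```python
import random
random.seed(0)

def quantilize(actions, utility, q=0.05):
    """Pick uniformly at random from the top q fraction of candidate
    actions, ranked by the (possibly Goodhart-prone) proxy utility."""
    ranked = sorted(actions, key=utility, reverse=True)
    top = ranked[: max(1, int(q * len(ranked)))]
    return random.choice(top)

# Candidate actions sampled from the base ("typical behavior") distribution:
base_actions = [random.gauss(0, 1) for _ in range(10_000)]
choice = quantilize(base_actions, lambda a: a, q=0.05)

# The result is good-but-not-extremal: at or above the 95th percentile of
# the base distribution, but never outside the support of the base samples.
cutoff = sorted(base_actions)[int(0.95 * len(base_actions))]
assert cutoff <= choice <= max(base_actions)
```

The point of the structure is that the quantilizer's output inherits whatever safety the base distribution has, degraded by at most a factor of 1/q.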

In any case, quantilization seems like it shouldn't work due to the fragility of value thesis. If we were to order all of the possible configurations of Earth's atoms from best to worst according to our values, the top 1% of those configurations is still mostly configurations which aren't very valuable.

The base distribution quantilization samples from is about actions, or plans, or policies, or things like that -- not about configurations of atoms.

So, you should imagine a robot sending random motor commands to its actuators, not highly intelligently steering the planet into a random configuration.

comment by abramdemski · 2019-06-03T06:07:35.845Z · score: 2 (1 votes) · LW · GW

If I understand correctly, extremal Goodhart is essentially the same as distributional shift from the Concrete Problems in AI Safety paper.

I think that's right. Perhaps there is a small distinction to be brought out, but basically, extremal Goodhart is distributional shift brought about by the fact that the AI is optimizing hard.

In any case… I’m not exactly sure what you mean by “calibration”, but when I say “calibration”, I refer to “knowing what you know”. For example, when I took this online quiz, it told me that when I said I was extremely confident something was true, I was always right, and when I said I was a little confident something was true, I was only right 66% of the time. I take this as an indicator that I’m reasonably “well-calibrated”; that is, I have a sense of what I do and don’t know.

A calibrated AI system, to me, is one that correctly says “this thing I’m looking at is an unusual thing I’ve never encountered before, therefore my 95% credible intervals related to it are very wide, and the value of clarifying information from my overseer is very high”.

Here's what I mean by calibration: there's a function from the probability you give to the frequency observed (or from the expected value you give to the average value observed), and the function approaches a straight x=y line as you learn. That's basically what you describe in the example of the online test. However, in ML, there's a difference between knows-what-it-knows learning [LW · GW] (KWIK learning) and calibrated learning. KWIK learning is more like what you describe in the second paragraph above. Calibrated learning is focused on the idea that a system should learn when it is systematically over/under confident, correcting such predictable biases. KWIK learning is more focused on not making claims when you have insufficient evidence to pinpoint the right answer.
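To illustrate the calibrated-learning side, here is a toy recalibration procedure (the overconfident learner and its particular distortion are hypothetical):

```python
import random
random.seed(0)

# Hypothetical overconfident learner: when it reports probability p, the
# true frequency is p shrunk halfway toward 0.5.
def true_freq(reported_p):
    return 0.5 + 0.5 * (reported_p - 0.5)

# Observe (reported probability, outcome) pairs:
data = []
for _ in range(20_000):
    p = random.random()
    outcome = random.random() < true_freq(p)
    data.append((p, outcome))

# Calibrated learning: estimate the reported->observed frequency map by
# binning, then use it to correct future predictions toward the x=y line.
bins = [[] for _ in range(10)]
for p, y in data:
    bins[min(9, int(p * 10))].append(y)
correction = [sum(b) / len(b) for b in bins]

def recalibrate(p):
    return correction[min(9, int(p * 10))]

# An overconfident report of 0.95 is corrected toward the observed ~0.725:
assert abs(recalibrate(0.95) - true_freq(0.95)) < 0.05
```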

Your complaints about Bayesian machine learning seem correct. My view is that addressing these complaints & making some sort of calibrated learning method competitive with deep learning is the best way to achieve FAI. I haven’t yet seen an FAI problem which seems like it can’t somehow be reduced to calibrated learning.

I don't think the inner alignment problem, or the unintended optimization problem [LW · GW], reduce to calibrated learning (or KWIK learning). However, my reasons are somewhat complex. I think it is reasonable to try to make those reductions, so long as you grapple with the real issues.

I’m not super hung up on statistical guarantees, as I haven’t yet seen a way to make them in general which doesn’t require making some sort of unreasonable or impractical assumption about the world (and I’m skeptical such a method exists). The way I see it, if your system is capable of self-improving in the right way, it should be able to overcome deficiencies in its world-modeling capabilities for itself. In my view, the goal is to build a system which gets safer as it self-improves & becomes better at reasoning.

Statistical guarantees are just a way to be able to say something with confidence. I agree that they're often impractical, and therefore only a toy model of how things can work (at best). However, I'm not very sympathetic to attempts to solve the problem without actually-quite-strong arguments for alignment-relevant properties being made somehow. The question is, how?

comment by Zack_M_Davis · 2019-06-03T03:40:05.320Z · score: 5 (3 votes) · LW · GW

(Spelling note: it's apparently [LW · GW] supposed to be "Goodhart" with no e.)

comment by abramdemski · 2019-06-03T04:06:10.223Z · score: 2 (1 votes) · LW · GW

D: Wow, I'm surprised I made such a mistake so repeatedly. Thanks!

comment by Davidmanheim · 2019-06-03T06:44:01.816Z · score: 16 (4 votes) · LW · GW

Also, I keep feeling bad that we're perpetuating giving Goodhart credit, rather than Campbell, since Campbell was clearly first - https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2018.01205.x - and Goodhart explicitly said he was joking in a recent interview.

comment by TurnTrout · 2019-06-03T20:23:22.498Z · score: 4 (2 votes) · LW · GW

If optimizing an arbitrary somewhat-but-not-perfectly-right utility function gives rise to serious Goodhart-related concerns

One thing I’ve been thinking about recently is: why does this happen? Could we have predicted the general phenomenon in advance, without imagining individual scenarios? What aspect of the structure of optimal goal pursuit in an environment reliably produces this result?