Posts

Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI 2024-01-26T07:22:06.370Z
Thomas Kwa's MIRI research experience 2023-10-02T16:42:37.886Z
AISC team report: Soft-optimization, Bayes and Goodhart 2023-06-27T06:05:35.494Z
Soft optimization makes the value target bigger 2023-01-02T16:06:50.229Z
Jeremy Gillen's Shortform 2022-10-19T16:14:43.693Z
Neural Tangent Kernel Distillation 2022-10-05T18:11:54.687Z
Inner Alignment via Superpowers 2022-08-30T20:01:52.129Z
Finding Goals in the World Model 2022-08-22T18:06:48.213Z
The Core of the Alignment Problem is... 2022-08-17T20:07:35.157Z
Project proposal: Testing the IBP definition of agent 2022-08-09T01:09:37.687Z
Broad Basins and Data Compression 2022-08-08T20:33:16.846Z
Translating between Latent Spaces 2022-07-30T03:25:06.935Z
Explaining inner alignment to myself 2022-05-24T23:10:56.240Z
Goodhart's Law Causal Diagrams 2022-04-11T13:52:33.575Z

Comments

Comment by Jeremy Gillen (jeremy-gillen) on EJT's Shortform · 2024-04-03T02:27:19.231Z · LW · GW

I sometimes name your work in conversation as an example of good recent agent foundations work, based on having read some of it and skimmed the rest, and talked to you a little about it at EAG. It's on my todo list to work through it properly, and I expect to actually do it because it's the blocker on me rewriting and posting my "why the shutdown problem is hard" draft, which I really want to post.

The reasons I'm a priori not extremely excited are that it seems intuitively very difficult to avoid either of these issues:

  • I'd be surprised if an agent with (very) incomplete preferences was real-world competent. I think it's easy to miss ways that a toy model of an incomplete-preference-agent might be really incompetent.
  • It's easy to shuffle around the difficulty of the shutdown problem, e.g. by putting all the hardness into an assumed-adversarially-robust button-manipulation-detector or self-modification-detector etc.

It's plausible you've avoided these problems but I haven't read deeply enough to know yet. I think it's easy for issues like this to be hidden (accidentally), so it'll take a lot of effort for me to read properly (but I will, hopefully in about a week).

The part where it works for a prosaic setup seems wrong (because of inner alignment issues (although I see you cited my post in a footnote about this, thanks!)), but this isn't what the shutdown problem is about so it isn't an issue if it doesn't apply directly to prosaic setups.

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-02-21T07:41:49.873Z · LW · GW

I would be excited to read this / help with a draft. 

We can meet in person one afternoon and work out some cruxes and write them up?

Is the claim here that the AI performs well on ~all the human-checkable tasks and then reverts to being absolutely useless or sabotaging on the hard stuff?

Yes, approximately, as I believe you and I are capable of doing. [...PhD student crux]

The analogy is strained due to not being able to gradient update my brain with arbitrary training data. It's pretty unclear to me if I could pull off deception like this, it seems pretty hard imo.

Are you saying you buy my side of the PhD student crux, but that it isn't closely analogous because you think we have more fine-grained control over the thoughts or goals of the AI? And in your second sentence my guess is that you are saying a second reason you disagree is that it looks difficult for a research AI to know when to perform well and when to perform less well? I think we might be miscommunicating about this because it doesn't look like the sort of situation where the "deception" is a relatively difficult task.

I think if you took a human research team, stuck them on an island, and had them get up to speed in some subfield they're not part of, they would be unable to distinguish between research that had just been done on the mainland or is just past the frontier. This just seems pretty hard.

I agree with this. But the research we need isn't just past the frontier. At some point not that far past the frontier, you get to a situation where the research team can say "given our guesses about mainland researchers and their time constraints, we'd put <1% on them having made discovery X {or explored idea Y, or done experiment Z}. Let's {delay giving it to them, not work on that direction, insert a subtle false assumption, instead share our 8th best idea, just say we're stuck, communicate it poorly}".

I think this example is closer to what we'll get with AIs rather than the bar being at "human level" it's more like "what has been accomplished by the humans+AIs in the outside world, for which I've received untrustworthy information". 

Sure, kinda, but the important part here is that a trapped agentic AI can selectively mislead you about the most important {knowledge, tasks, strategies} that you aren't capable of doing with just humans. If you're capable of doing it with humans, or if you're close to being able to do it with humans, then why not just do that? Seems safer.

And every time the AIs do useful work we can add this to our supervised training corpus. 

Only if you spend the human resources to verify it. Are you aiming for 30x research acceleration? Because this by itself might be enough to limit you below that level. (And it also directly provides a way for the AI to tell "what time it is" by sending messages to future iterations, and plausibly also provides a bunch of information about human research capability and capacity.)

One version of this is: you have GPT-5 (potentially untrustworthy) do some research, you have humans carefully check this research using the best available tools (e.g., AI debate / other 'scalable oversight'), and then you use this research as supervised / labeled data for training GPT-6 to do good work. 

Mostly my objection to this is above. On scalable oversight, I buy that it might be possible to get 20% improvements in the accuracy of your evaluations of (adversarially generated!) research, compared to a team of humans trying really hard to evaluate research, given a human-level research AI. This is so far from being enough. Are you predicting much better scalable oversight that scales much further than I am predicting it will? (I'm assuming you're using "GPT-5/6" to represent "some powerful future consequentialist AI that can do research", because otherwise we should be having a different argument).

While it's probably non-trivial, I expect you could use this internal activation to train models which do a better job paying attention to the tokens you want them to.

It'd be cool if there were a demonstration of manipulating internal activations during training, in a way that fixes non-trivial unwanted OOD behavior. If such techniques existed, and also worked on consequentialist agents (I expect would be harder), I would consider them very useful tools that would help with building an aligned agent.
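For concreteness, here is a minimal sketch (not the demonstration I'm asking for, just the shape of the setup) of what "manipulating internal activations during training" could look like, assuming a PyTorch model. The layer choice, the unwanted_direction vector, and the penalty weight are all hypothetical placeholders.

```python
# Sketch: during training, read a chosen hidden layer's activations and penalize
# their component along a hypothetical "unwanted" direction, adding that penalty
# to the task loss. Assumes the hooked module returns a plain tensor.
import torch

def add_activation_penalty(layer, unwanted_direction, weight=0.1):
    unwanted_direction = unwanted_direction / unwanted_direction.norm()
    cache = {}

    def hook(module, inputs, output):
        cache["acts"] = output          # stash activations; forward pass unchanged
        return output

    handle = layer.register_forward_hook(hook)

    def loss_with_penalty(task_loss):
        acts = cache["acts"]                          # [batch, hidden]
        component = acts @ unwanted_direction         # projection onto the direction
        return task_loss + weight * component.pow(2).mean()

    return loss_with_penalty, handle

# Hypothetical usage inside a training step:
# loss_fn, handle = add_activation_penalty(model.layers[6].mlp, direction)
# out = model(batch); loss = loss_fn(task_loss_fn(out, targets)); loss.backward()
```

Whether anything like this actually fixes unwanted OOD behavior in a non-trivial case is exactly the open question.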

While it's probably non-trivial, I expect you could use this internal activation to train models which do a better job paying attention to the tokens you want them to.

This is very plausible, but I think you're stretching the analogy between LMs and future research AIs way too far. "Paying attention to the tokens we want them to" doesn't add much/any control over the properties we want control over (compared to e.g. changing the input string to remove information we don't want it paying attention to).

Similarly, I think it's probable (but far from guaranteed) that deceptively aligned models are representing training/deployment in their activations, or that sandbagging models are representing human-level / super-human-level difficulty for tasks. If so, it is possible in principle to do all kinds of things with these activations which may help us, e.g., get the model to solve tasks it otherwise would have intentionally failed at, or give us hints about the accuracy of a research result. I expect this is difficult implementation-wise, but still a reason for hope. IMO activation engineering is off to a good start.

I think you might be centrally thinking about an LLM completing a few paragraphs, whereas I'm thinking of an AI doing a large-scale scientific project? I buy that for many concepts X, you can work out the "neural correlates of X" in your AI. But that kind of thing is noisy (and plausibly can be optimized against by the AI) unless you have a deep understanding of what you are measuring. And optimizing against such imperfect metrics obviously wouldn't do much beyond destroying the metric. I do think research in this direction has a chance of being useful, but mainly by being upstream of much better understanding.

By leaning more on generalization, I mean leaning more on the data efficiency thing

Sorry for misinterpreting you, but this doesn't clarify what you meant. 

also weak-to-strong generalization ideas.

I think I don't buy the analogy in that paper, and I don't find the results surprising or relevant (by my current understanding, after skimming it). My understanding of the result is "if you have a great prior, you can use it to overcome some label noise and maybe also label bias". But I don't think this is very relevant to extracting useful work from a misaligned agent (which is what we are talking about here), and based on the assumptions they describe, I think they agree? (I just saw appendix G, I'm a fan of it, it's really valuable that they explained their alignment plan concisely and listed their assumptions).

I could imagine starting with a deceptively aligned AI whose goal is "Make paperclips unless being supervised which is defined as X, Y, and Z, in which case look good to humans". And if we could change this AI to have the goal "Make paperclips unless being supervised which is defined as X, Y, and Q, in which case look good to humans", that might be highly desirable. In particular, it seems like adversarial training here allows us to expand the definition of 'supervision', thus making it easier to elicit good work from AIs (ideally not just 'looks good').

If we can tell we have such an AI, and we can tell that our random modifications are affecting the goal, and also that the change is roughly one that helps us rather than changing many things that might or might not be helpful, this would be a nice situation to be in.

I don't feel like I'm talking about AIs which have "taking-over-the-universe in their easily-within-reach options". I think this is not within reach of the current employees of AGI labs, and the AIs I'm thinking of are similar to those employees in terms of capabilities, but perhaps a bit smarter, much faster, and under some really weird/strict constraints (control schemes). 

Section 6 assumes we have failed to control the AI, so it is free of weird/strict constraints, and free to scale itself up, improve itself, etc. So my comment is about an AI that no longer can be assumed to have human-ish capabilities.

Comment by Jeremy Gillen (jeremy-gillen) on PIBBSS Speaker events comings up in February · 2024-02-20T23:05:45.594Z · LW · GW

Do you have recordings? I'd be keen to watch a couple of the ones I missed.

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-02-20T01:08:48.782Z · LW · GW

I feel like you’re proposing two different types of AI and I want to disambiguate them. The first one, exemplified in your response to Peter (and maybe referenced in your first sentence above), is a kind of research assistant that proposes theories (after having looked at data that a scientist is gathering?), but doesn’t propose experiments and doesn’t think about the usefulness of its suggestions/theories. Like a Solomonoff inductor that just computes the simplest explanation for some data? And maybe some automated approach to interpreting theories?

The second one, exemplified by the chess analogy and last paragraph above, is a bit like a consequentialist agent that is a little detached from reality (can’t learn anything, has a world model that we designed such that it can’t consider new obstacles).

Do you agree with this characterization?

What I'm saying is "simpler" is that, given a problem that doesn't need to depend on the actual effects of the outputs on the future of the real world […], it is simpler for the AI to solve that problem without taking into consideration the effects of the output on the future of the real world than it is to take into account the effects of the output on the future of the real world anyway.

I accept chess and formal theorem-proving as examples of problems where we can define the solution without using facts about the real-world future (because we can easily write down a formal definition of what the solution looks like).

For a more useful problem (e.g. curing a type of cancer) we (the designers) only know how to define a solution in terms of real-world future states (patient is alive, healthy, non-traumatized, etc.). I’m not saying there doesn’t exist a definition of success that doesn’t involve referencing real-world future states. But the AI designers don’t know it (and I expect it would be relatively complicated).

My understanding of your simplicity argument is that it is saying that it is computationally cheaper for a trained AI to discover during training a non-consequence definition of the task, despite a consequentialist definition being the criterion used to train it? If so, I disagree that computation cost is very relevant here; generalization (to novel obstacles) is the dominant factor determining how useful this AI is.

Comment by Jeremy Gillen (jeremy-gillen) on The Pointer Resolution Problem · 2024-02-20T00:12:43.472Z · LW · GW

Geometric rationality ftw!

(In normal planning problems there are exponentially many plans to evaluate (in the number of actions). So that doesn't seem to be a major obstacle if your agent is already capable of planning.)

Comment by Jeremy Gillen (jeremy-gillen) on The Pointer Resolution Problem · 2024-02-16T22:56:12.350Z · LW · GW

Might be much harder to implement, but could we maximin "all possible reinterpretations of alignment target X"?

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-02-12T06:54:44.890Z · LW · GW

In my view, in order to be dangerous in a particularly direct way (instead of just misuse risk etc.), an AI's decision to give output X depends on the fact that output X has some specific effects in the future.

Agreed.

Whereas, if you train it on a problem where solutions don't need to depend on the effects of the outputs on the future, I think it much more likely to learn to find the solution without routing that through the future, because that's simpler.

The "problem where solutions don't need to depend on effects" is where we disagree. I agree such problems exist (e.g. formal proof search), but those aren't the kind of useful tasks we're talking about in the post. For actual concrete scientific problems, like outputting designs for a fusion rocket, the "simplest" approach is to be considering the consequences of those outputs on the world. Otherwise, how would it internally define "good fusion rocket design that works when built"? How would it know not to use a design that fails because of weaknesses in the metal that will be manufactured into a particular shape for your rocket? A solution to building a rocket is defined by its effects on the future (not all of its effects, just some of them, i.e. it doesn't explode, among many others).

I think there's a (kind of) loophole here, where we use an "abstract hypothetical" model of a hypothetical future, and optimize for the consequences of our actions in that hypothetical. Is this what you mean by "understood in abstract terms"? So the AI has defined "good fusion rocket design" as "fusion rocket that is built by not-real hypothetical humans based on my design and functions in a not-real hypothetical universe and has properties and consequences XYZ" (but the hypothetical universe isn't the actual future; it's just similar enough to define this one task, but dissimilar enough that misaligned goals in this hypothetical world don't lead to coherent misaligned real-world actions). Is this what you mean? Rereading your comment, I think this matches what you're saying, especially the chess game part.

The part I don't understand is why you're saying that this is "simpler"? It seems equally complex in both Kolmogorov complexity and computational complexity.

Comment by Jeremy Gillen (jeremy-gillen) on What are the known difficulties with this alignment approach? · 2024-02-12T06:23:46.761Z · LW · GW

I think the overall goal in this proposal is to get a corrigible agent capable of bounded tasks (that maybe shuts down after task completion), rather than a sovereign?

One remaining problem (ontology identification) is making sure your goal specification stays the same for a world-model that changes/learns.

Then the next remaining problem is the inner alignment problem of making sure that the planning algorithm/optimizer (whatever it is that generates actions given a goal, whether or not it's separable from other components) is actually pointed at the goal you've specified and doesn't have any other goals mixed into it. (see Context Disaster for more detail on some of this, optimization daemons, and actual effectiveness). Part of this problem is making sure the system is stable under reflection.

Then you've got the outer alignment problem of making sure that your fusion power plant goal is safe to optimize (e.g. it won't kill people who get in the way, doesn't have any extreme effects if the world model doesn't exactly match reality, or if you've forgotten some detail). (See Goodness estimate bias, unforeseen maximum). 

Ideally here you build in some form of corrigibility and other fail-safe mechanisms, so that you can iterate on the details.

That's all the main ones imo. Conditional on solving the above, and actively trying to foresee other difficult-to-iterate problems, I think it'd be relatively easy to foresee and fix remaining issues.

Comment by Jeremy Gillen (jeremy-gillen) on Updatelessness doesn't solve most problems · 2024-02-11T06:39:31.365Z · LW · GW

A first problem with this is that there is no sharp distinction between purely computational (analytic) information/observations and purely empirical (synthetic) information/observations.

I don't see the fuzziness here, even after reading the Two Dogmas Wikipedia page (but not really understanding it; it's hidden behind a wall of jargon). If we have some prior over universes, and some observation channel, we can define an agent that is updateless with respect to that prior, and updateful with respect to any calculations it performs internally. Is there a section of Radical Probabilism that is particularly relevant? It's been a while.
It's not clear to me why all superintelligences having the same classification matters. They can communicate about edge cases and differences in their reasoning. Do you have an example here?

A second and more worrying problem is that, even given such convergence, it's not clear all other agents will decide to forego the possible apparent benefits of logical exploitation. It's a kind of Nash equilibrium selection problem: If I was very sure all other agents forego them (and have robust cooperation mechanisms that deter exploitation), then I would just do like them.

I think I don't understand why this is a problem. So what if there are some agents running around being updateless about logic? What's the situation that we are talking about a Nash equilibrium for? 

As mentioned in the post, Counterfactual Mugging as presented won't be common, but equivalent situations in multi-agentic bargaining might, due to (the naive application of) some priors leading to commitment races.

Can you point me to an example in bargaining that motivates the usefulness of logical updatelessness? My impression of that section wasn't "here is a realistic scenario that motivates the need for some amount of logical updatelessness", it felt more like "logical bargaining is a situation where logical updatelessness plausibly leads to terrible and unwanted decisions".

It's not looking like something as simple as that will solve, because of reasoning as in this paragraph:

Unfortunately, it’s not that easy, and the problem recurs at a higher level: your procedure to decide which information to use will depend on all the information, and so you will already lose strategicness. Or, if it doesn’t depend, then you are just being updateless, not using the information in any way.

Or in other words, you need to decide on the precommitment ex ante, when you still haven't thought much about anything, so your precommitment might be bad.

Yeah, I wasn't thinking that was a "solution"; I'm biting the bullet of losing some potential value and having a decision theory that doesn't satisfy all the desiderata. I was just saying that in some situations, such an agent can patch the problem using other mechanisms, just as an EDT agent can try to implement some external commitment mechanism if it lives in a world full of transparent Newcomb problems.

Comment by Jeremy Gillen (jeremy-gillen) on Updatelessness doesn't solve most problems · 2024-02-11T03:00:48.908Z · LW · GW

To me it feels like the natural place to draw the line is update-on-computations but updateless-on-observations. Because 1) It never disincentivizes thinking clearly, so commitment races bottom out in a reasonable way, and 2) it allows cooperation on common-in-the-real-world newcomblike problems.

It doesn't do well in worlds with a lot of logical counterfactual mugging, but I think I'm okay with this? I can't see why this situation would be very common, and if it comes up it seems that an agent that updates on computations can use some precommitment mechanism to take advantage of it (e.g. making another agent).

Am I missing something about why logical counterfactual muggings are likely to be common?

Looking through your PIBBS report (which is amazing, very helpful), I intuitively feel the pull of Desiderata 4 (No existential regret), and also the intuition of wanting to treat logical uncertainty and empirical uncertainty in a similar way. But ultimately I'm so horrified by the mess that comes from being updateless-on-logic that being completely updateful on logic is looking pretty good to me.

(Great post, thanks)

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-02-11T01:43:53.509Z · LW · GW

Thanks for reading it, it's good to know exactly where you think the argument is weakest and I appreciate the effort of going through and noting differences.

On section 4:

I definitely don't feel confident that any of the mentioned problems will arise in practice. I don't see why I should believe in an inner/outer shell breakdown of constraints — this section seemed quite speculative. 

This surprises me actually; I thought this section was solid conditional on the previous assumptions. I think you shouldn't think of them as problems that might arise in practice; instead, they should be thought of as reasons why behavioral training under-specifies some parts of the AI algorithm.
I'll give some more quick examples of "outer-shell constraints", because examples are very common and it seems odd for this to be called speculative: unendorsed habits or addictions in humans; the instinct to breathe in when you're trying to hold your breath; appending a checker algorithm C to any algorithm A, to double-check the output of A; any biases in a search ordering (as in the AlphaGo example).
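As a toy illustration of the checker example (illustrative names only), note that the constraint lives in a wrapper outside A, so A's own computation never needs to represent it:

```python
# Toy "outer-shell constraint": a checker C appended to an arbitrary algorithm A.
# The constraint is enforced outside A's own computation, which is what makes it
# an outer-shell constraint rather than part of A's goal.
def with_checker(A, C, fallback=None):
    def constrained(*args, **kwargs):
        output = A(*args, **kwargs)
        return output if C(output) else fallback   # double-check before releasing
    return constrained

# e.g. safe_div = with_checker(lambda a, b: a / b, lambda y: abs(y) < 1e6)
```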

On section 5:
It's on my todo list to write a comment responding to some of the specifics of Redwood's control post. This post wasn't intended to engage with their detailed arguments, just to give the top-level reasons why we believe control is difficult.

Is the claim here that the AI performs well on ~all the human-checkable tasks and then reverts to being absolutely useless or sabotaging on the hard stuff?

Yes, approximately, as I believe you and I are capable of doing. The crux here might be "if we had 20 PhD students, and incentivized them both to look like they are doing checkable research and also not to make any real useful progress on solving a particular difficult problem, would this be very useful without costing a lot of expert supervision?". If I remember correctly, this was a crux between myself and Ryan during an in-person discussion. It seems likely to me that the PhD students, trying not to be useful while looking good in checkable ways, will succeed at both. More intelligent agents are even more likely to succeed at both. It does depend on the type of research; I agree that formal theorem-proving would be much easier to incentivize properly. I'm talking about the least-easily-checkable 20% of useful research.

Predicting where that line is seems pretty difficult, especially given the AI cannot trust sensory inputs, does not know what time it is

Seems easy enough to predict given roughly human-scientist-team level of capabilities. And it seems hard to get useful work out of a human-scientist-team without them also working out ways to (maybe imperfectly) distinguish "faked sensory inputs" from real ones, and without leaking any information about what time it is. E.g. we need the AI to build on previous work.

"There needs to be sufficient human-generated research in the domain where we are attempting to elicit the AI’s capabilities." Well, no. You either need sufficient labeled data in the domain we're trying to elicit capabilities, or good generalization from other domains. 

Nice, this is my other crux with Ryan, which we phrased roughly as "how data efficient can we make training". My argument against very high data efficiency is that current long-horizon RL isn't very data-efficient, and intentional exploration hacking seems plausible in this situation. (A bad analogy, but for the sake of describing exploration hacking: someone is training you to solve Rubik's cubes; you can avoid succeeding at solving a Rubik's cube in the first place, and use other strategies to mess with their reward signal. This works as long as the trainers don't know the specific steps required to solve the Rubik's cube, but even then I think there are some strategies that could work to reduce generalization.)

The data need not be human generated, it could be generated by other AIs, as long as we trust it. 

?? This seems to be assuming a solution to the problem.

I think there are still numerous directions of hope though, e.g., leaning more on generalization, latent adversarial training. 

I agree there are numerous directions of hope, but disagree that either of your examples counts, if I understand them correctly. "leaning more on generalization" ≈ "leaning more on agency", which doesn't seem good to me. Latent adversarial training seems like it could make an agent robust to a few extra categories of internal perturbation, but doesn't seem like it would help with any problem that we were focused on in this post.
 

I agree that we don't have strong reasons to think one thing or another here, but I think goal modification is reasonably likely: humans can iterate a bunch and maybe have okay interpretability tools (assuming decent but not fundamental advances). Also, as discussed, goals might be changing a bunch during training — that's not totally asymmetric, it also gives us hope about our ability to modify AI goals.

If we are using okay interpretability tools to understand whether the AI has the goal we intended, and to guide training, then I would consider that a fundamental advance over current standard training techniques.
I agree that goals would very likely be hit by some modifications during training, in combination with other changes to other parts of the algorithm. The problem is shaping the goal to be a specific thing, not changing it in unpredictable or not-fully-understood ways.

Many of the issues in this section are things that, if we're not being totally idiots, it seems we'll get substantial warning about. e.g., AIs colluding with their AI monitors. That's definitely a positive, though far from conclusive.

I think that there is a lot of room for the evidence to be ambiguous and controversial, and for the obvious problems to look patchable. For this reason I've only got a little hope that people will panic at the last minute due to finally seeing the problems and start trying to solve exactly the right problems. On top of this, there's the pressure of needing to "extract useful work to solve alignment" before someone less cautious builds an unaligned super-intelligence, which could easily lead to people seeing substantial warnings and pressing onward anyway.

Section 6:

I think a couple of the arguments here continue to be legitimate, such as "Unclear that many goals realistically incentivise taking over the universe", but I'm overall fine accepting this section. 

That argument isn't really what it says on the tin, it's saying something closer to "maybe taking over the universe is hard/unlikely and other strategies are better for achieving most goals under realistic conditions". I buy this for many environments and levels of power, but it's obviously wrong for AIs that have taking-over-the-universe in their easily-within-reach options. And that's the sort of AI we get if it can undergo self-improvement.

Overall I think your comment is somewhat representative of what I see as the dominant cluster of views currently in the alignment community. (Which seems like a very reasonable set of beliefs and I don't think you're unreasonable for having them).

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-31T10:05:04.788Z · LW · GW

I agree that it'd be extremely misleading if we defined "catastrophe" in a way that includes futures where everyone is better off than they currently are in every way (without being very clear about it). This is not what we mean by catastrophe.

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-31T08:33:14.558Z · LW · GW

Trying to find the crux of the disagreement (which I don't think lies in takeoff speed):

If we assume a multipolar, slow-takeoff, misaligned-AI world, where there are many AIs that slowly take over the economy and generally obey laws to the extent that they are enforced (by other AIs). And they don't particularly care about humans, in a similar manner to the way humans don't particularly care about flies.

In this situation, humans eventually have approximately zero leverage, and approximately zero value to trade. There would be much more value in e.g. mining cities for raw materials than in human labor.

I don't know much history, but my impression is that in similar scenarios between human groups, with a large power differential and with valuable resources at stake, it didn't go well for the less powerful group, even if the more powerful group was politically fragmented or even partially allied with the less powerful group.

Which part of this do you think isn't analogous?
My guesses are either that you are expecting some kind of partial alignment of the AIs, or that the humans can set up very robust laws/institutions of the AI world such that they remain in place and protect humans, even though no subset of the agents is perfectly happy with this and there exist alternative laws/institutions that they would all prefer.

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-31T00:30:55.507Z · LW · GW

(2) is the problem that the initial ontology of the AI is insufficient to fully capture human values

I see, thanks! I agree these are both really important problems.

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-30T05:05:07.319Z · LW · GW

Yeah specifying goals in a learned ontology does seem better to me, and in my opinion is a much better approach than behavioral training.
But there's a couple of major roadblocks that come to mind:

  • You need really insanely good interpretability on the learned ontology.
  • You need to be so good at specifying goals in that ontology that they are robust to adversarial optimization.

Work on these problems is great. I particularly like John's work on natural latent variables which seems like the sort of thing that might be useful for the first two of these.

Keep in mind though there are other major problems that this approach doesn't help much with, e.g.:

  • Standard problems arising from the ontology changing over time or being optimized against.
  • The problem of ensuring that no subpart of your agent is pursuing different goals (or applying optimization in a way that may break the overall system at some point).

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-30T02:55:52.773Z · LW · GW

We aren't implicitly assuming (1) in this post. (Although I agree there will be economic pressure to expand the use of powerful AI, and this adds to the overall risk).

I don't understand what you mean by (2). I don't think I'm assuming it, but can't be sure.

One hypothesis: That AI training might (implicitly? Through human algorithm iteration?) involve a pressure toward compute-efficient algorithms? Maybe you think that this is a reason we expect consequentialism? I'm not sure how that would relate to the training being domain-specific though.
 

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-30T02:33:15.759Z · LW · GW

I think you and Peter might be talking past each other a little, so I want to make sure I properly understand what you are saying. I’ve read your comments here and on Nate’s post, and I want to start a new thread to clarify things.

I’m not sure exactly what analogy you are making between chess AI and science AI. Which properties of a chess AI do you think are analogous to a scientific-research-AI?

- The constraints are very easy to specify (because legal moves can be easily locally evaluated). In other words, the set of paths considered by the AI is easy to define, and optimization can be constrained to only search this space.
- The task of playing chess doesn’t at all require or benefit from modelling any other part of the world except for the simple board state.

I think these are the main two reasons why current chess AIs are safe.

Separately, I’m not sure exactly what you mean when you’re saying “scientific value”. To me, the value of knowledge seems to depend on the possible uses of that knowledge. So if an AI is evaluating “scientific value”, it must be considering the uses of the knowledge? But you seem to be referring to some more specific and restricted version of this evaluation, which doesn’t make reference at all to the possible uses of the knowledge? In that case, can you say more about how this might work?
Or maybe you’re saying that evaluating hypothetical uses of knowledge can be safe? I.e. there’s a kind of goal that wants to create “hypothetically useful” fusion-rocket-designs, but doesn’t want this knowledge to have any particular effect on the real future.

You might be reading us as saying that “AI science systems are necessarily dangerous” in the sense that it’s logically impossible to have an AI science system that isn’t also dangerous? We aren’t saying this. We agree that in principle such a system could be built.

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-29T23:44:00.902Z · LW · GW

Yep ontological crises are a good example of another way that goals can be unstable.
I'm not sure I understood how 2 is different from 1.

I'm also not sure that rebinding to the new ontology is the right approach (although I don't have any specific good approach). When I try to think about this kind of problem I get stuck on not understanding the details of how an ontology/worldmodel can or should work. So I'm pretty enthusiastic about work that clarifies my understanding here (where infrabayes, natural latents and finite factored sets all seem like the sort of thing that might lead to a clearer picture).

Comment by Jeremy Gillen (jeremy-gillen) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-26T20:36:07.112Z · LW · GW

Thanks! 
I think that our argument doesn't depend on all possible goals being describable this way. It depends on useful tasks (that AI designers are trying to achieve) being driven in large part by pursuing outcomes. For a counterexample, behavior that is defined entirely by local constraints (e.g. a calculator, or the "hand-on-wall" maze algorithm) isn't the kind of algorithm that is a source of AI risk (and also isn't as useful in some ways).


Your example of a pointer to a goal is a good edge case for our way of defining/categorizing goals. Our definitions don't capture this edge case properly. But we can extend the definitions to include it, e.g. if the goal that ends up eventually being pursued is an outcome, then we could define the observing agent as knowing that outcome in advance. Or alternatively, we could wait until the agent has uncovered its consequentialist goal, but hasn't yet completed it. In both these cases we can treat it as consequentialist. Either way it still has the property that leads to danger, which is the capacity to overcome large classes of obstacles and still get to its destination.

I'm not sure what you mean by "goal objects robust to capabilities not present early in training". If you mean "goal objects that specify shutdownable behavior while also specifying useful outcomes, and are robust to capability increases", then I agree that such objects exist in principle. But I could argue that this isn't very natural, if this is a crux and I'm understanding what you mean correctly?
 

Comment by Jeremy Gillen (jeremy-gillen) on A Shutdown Problem Proposal · 2024-01-22T02:09:57.388Z · LW · GW

I think you're right that the central problems remaining are in the ontological cluster, as well as the theory-practice gap of making an agent that doesn't override its hard-coded false beliefs.

But less centrally, I think one issue with the proposal is that the sub-agents need to continue operating in worlds where they believe in a logical contradiction. How does this work? (I think this is something I'm confused about for all agents and this proposal just brings it to the surface more than usual).

Also, agent1 and agent2 combine into some kind of machine. This machine isn't VNM-rational. I want to be able to describe this machine properly. Pattern-matching, my guess is that it violates independence in the same way as here. [Edit: It definitely violates independence, because the combined machine should prefer a lottery over <button-pressed> to certainty of either outcome. I suspect that it doesn't have to violate any other axioms.]
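Spelling out the independence violation: write A for the button-pressed outcome, B for the not-pressed outcome, and L for the 50/50 lottery between them, and suppose the combined machine strictly prefers L to both A and B. WLOG A ⪰ B. Independence would then give ½A + ½A ⪰ ½A + ½B, i.e. A ⪰ L, contradicting L ≻ A. So independence must fail (though this argument doesn't rule out the other axioms holding).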

Comment by Jeremy Gillen (jeremy-gillen) on TurnTrout's shortform feed · 2024-01-01T23:43:53.311Z · LW · GW

I think the term is very reasonable and basically accurate, even more so with regard to most RL methods. It's a good way of describing a training process without implying that the evolving system will head toward optimality deliberately. I don't know a better way to communicate this succinctly, especially while not being specific about what local search algorithm is being used.

Also, evolutionary algorithms can be used to approximate gradient descent (with noisier gradient estimates), so it's not unreasonable to use similar language about both.
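For concreteness, here is a minimal sketch of the standard evolution-strategies estimator (all constants are illustrative): the perturbation-weighted average of fitness values estimates the gradient of a Gaussian-smoothed version of the objective, which is why gradient-descent-flavored language transfers reasonably well.

```python
# Minimal evolution-strategies sketch: the update below is (noisy) gradient
# ascent on a Gaussian-smoothed version of the objective f.
import numpy as np

def es_step(f, theta, sigma=0.1, lr=0.01, pop=100, rng=None):
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((pop, theta.size))                   # perturbation directions (theta: 1-D array)
    fitness = np.array([f(theta + sigma * e) for e in eps])
    fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)  # variance reduction
    grad_est = (fitness[:, None] * eps).mean(axis=0) / sigma       # score-function gradient estimate
    return theta + lr * grad_est

# e.g. repeatedly applying es_step to f(x) = -np.sum(x**2) climbs toward the optimum.
```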

I'm not a huge fan of the way you imply that it was chosen for rhetorical purposes.

Comment by Jeremy Gillen (jeremy-gillen) on Some Rules for an Algebra of Bayes Nets · 2023-12-19T01:04:48.709Z · LW · GW

This is one of my favorite posts because it gives me tools that I expect to use.

A little while ago, John described his natural latent result to me. It seemed cool, but I didn't really understand how to use it and didn't take the time to work through it properly. I played around with similar math in the following weeks though; I was after a similar goal, which was better ways to think about abstract variables.

More recently, John worked through the natural latent proof on a whiteboard at a conference. At this point I felt like I got it, including the motivation. A couple of weeks later I tried to prove it as an exercise for myself (with the challenge being that I had to do it from memory, rigorously, and including approximation). This took me two or three days, and the version I ended up with used a slightly different version of the same assumptions, and got weaker approximation results. I used the graphoid axioms, which are the standard (but slow and difficult) way of formally manipulating independence relationships (and I didn't have previous experience using them).

This experience caused me to particularly appreciate this post. It turns lots of work into relatively little work.

Comment by Jeremy Gillen (jeremy-gillen) on Evolution provides no evidence for the sharp left turn · 2023-12-13T23:39:31.864Z · LW · GW

My understanding of the first part of your argument: The rapid (in evolutionary timescales) increase in human capabilities (that led to condoms and ice cream) is mostly explained by human cultural accumulation (i.e. humans developed better techniques for passing on information to the next generation).


My model is different. In my model, there are two things that were needed for the rapid increase in human capabilities. The first was the capacity to invent/create useful knowledge, and the second was the capacity to pass it on.
To me it looks like the human rapid capability gains depended heavily on both.

Comment by Jeremy Gillen (jeremy-gillen) on Jeremy Gillen's Shortform · 2023-11-14T21:07:09.011Z · LW · GW

Ah I see; I was referring to less complete abstractions. The "accurately predict all behavior" definition is fine, but this comes with a scale of how accurate the prediction is. "Directions and simple functions on these directions" probably misses some tiny details like floating-point errors, and if you wanted a human to understand it you'd have to use approximations that lose way more accuracy. I'm happy to lose accuracy in exchange for better predictions about behavior in previously-unobserved situations. In particular, it's important to be able to work out what sort of previously-unobserved situation might lead to danger. We can do this with humans and animals, etc.; we can't do it with "directions and simple functions on these directions".

Comment by Jeremy Gillen (jeremy-gillen) on Jeremy Gillen's Shortform · 2023-11-14T19:33:50.720Z · LW · GW

There aren't really any non-extremely-leaky abstractions in big NNs on top of something like a "directions and simple functions on these directions" layer. (I originally heard this take from Buck)

Of course this depends on what it's trained to do? And it's false for humans and animals and corporations and markets: we have pretty good abstractions that allow us to predict and sometimes modify the behavior of these entities.

I'd be pretty shocked if this statement was true for AGI.

Comment by Jeremy Gillen (jeremy-gillen) on Jeremy Gillen's Shortform · 2023-11-08T20:30:18.190Z · LW · GW

Yeah I think I agree. It also applies to most research about inductive biases of neural networks (and all of statistical learning theory). Not saying it won't be useful, just that there's a large mysterious gap between great learning theories and alignment solutions and inside that gap is (probably, usually) something like the levels-of-abstraction mistake.

Comment by Jeremy Gillen (jeremy-gillen) on Deconfusing “ontology” in AI alignment · 2023-11-08T20:11:53.854Z · LW · GW

its notion of regulators generally does not line up with neural networks.

When alignment researchers talk about ontologies and world models and agents, we're (often) talking about potential future AIs that we think will be dangerous. We aren't necessarily talking about all current neural networks.

A common-ish belief is that future powerful AIs will be more naturally thought of as being agentic and having a world model. The extent to which this will be true is heavily debated, and the gooder regulator theorem is kinda part of that debate.

Biphasic cognition might already be an incomplete theory of mind for humans

Nothing wrong with an incomplete or approximate theory, as long as you keep an eye on the things that it's missing and whether they are relevant to whatever prediction you're trying to make.

Comment by Jeremy Gillen (jeremy-gillen) on Jeremy Gillen's Shortform · 2023-11-08T19:43:48.239Z · LW · GW

Here's a mistake some people might be making with mechanistic interpretability theories of impact (and some other things, e.g. how much neuroscience is useful for understanding AI or humans).

When there are multiple layers of abstraction that build up to a computation, understanding the low level doesn't help much with understanding the high level. 


Examples:
1. Understanding semiconductors and transistors doesn't tell you much about programs running on the computer. The transistors can be reconfigured into a completely different computer, and you'll still be able to run the same programs. To understand a program, you don't need to be thinking about transistors or logic gates. Often you don't even need to be thinking about the bit level representation of data.

2. The computation happening in single neurons in an artificial neural network doesn't have much relation to the computation happening at a high level. What I mean is that you can switch out activation functions, randomly connect neurons to other neurons, randomly share weights, or replace small chunks of network with some other differentiable parameterized function. And assuming the thing is still trainable, the overall system will still learn to execute a function that is on a high level pretty similar to whatever high level function you started with.[1]

3. Understanding how neurons work doesn't tell you much about how the brain works. Neuroscientists understand a lot about how neurons work. There are models that make good predictions about the behavior of individual neurons or synapses. I bet that the high level algorithms that are running in the brain are most naturally understood without any details about neurons at all. Neurons probably aren't even a useful abstraction for that purpose. 

 

Probably directions in activation space are also usually a bad abstraction for understanding how humans work, kinda analogous to how bit-vectors of memory are a bad abstraction for understanding how a program works.

Of course John has said this better.

  1. ^

    You can mess with inductive biases of the training process this way, which might change the function that gets learned, but (my impression is) usually not that much if you're just messing with activation functions.

Comment by Jeremy Gillen (jeremy-gillen) on Related Discussion from Thomas Kwa's MIRI Research Experience · 2023-10-02T21:30:32.869Z · LW · GW

"they should clearly communicate their non-respectful/-kind alternative communication protocols beforehand, and they should help the other person maintain their boundaries;"

Nate did this.

By my somewhat idiosyncratic views on respectful communication, Nate was roughly as respectful as Thomas Kwa. 

I do seem to be unusually emotionally compatible with Nate's style of communication though.

Comment by Jeremy Gillen (jeremy-gillen) on Instrumental Convergence? [Draft] · 2023-07-23T09:51:27.966Z · LW · GW

Section 4 then showed how those initial results extend to the case of sequential decision making.

[...]

If she's a resolute chooser, then sequential decisions reduce to a single non-sequential decisions.

Ah thanks, this clears up most of my confusion, I had misunderstood the intended argument here. I think I can explain my point better now:

I claim that proposition 3, when extended to sequential decisions with a resolute decision theory, shouldn't be interpreted the way you interpret it. The meaning changes when you make A and B into sequences of actions.

Let's say action A is a list of 1000000 particular actions (e.g. 1000000 small-edits) and B is a list of 1000000 particular actions (e.g. 1 improve-technology, then 999999 amplified-edits).[1]

Proposition 3 says that A is equally likely to be chosen as B (for randomly sampled desires). This is correct. Intuitively this is because A and B are achieving particular outcomes and desires are equally likely to favor "opposite" outcomes.

However this isn't the question we care about. We want to know whether action-sequences that contain "improve-technology" are more likely to be optimal than action-sequences that don't contain "improve-technology", given a random desire function. This is a very different question to the one proposition 3 gives us an answer to.

Almost all optimal action-sequences could contain "improve-technology" at the beginning, while any two particular action sequences are equally likely to be preferred to the other on average across desires. These two facts don't contradict each other. The first fact is true in many environments (e.g. the one I described[2]) and this is what we mean by instrumental convergence. The second fact is unrelated to instrumental convergence.


I think the error might be coming from this definition of instrumental convergence: 

could we nonetheless say that she's got a better than 1/N probability of choosing a particular act from a menu of N acts?

When the act in question is a sequence of actions, this definition makes less sense. It'd be better to define it as something like "from a menu of N initial actions, she has a better than 1/N probability of choosing a particular initial action".

 

 

I'm not entirely sure what you mean by "model", but from your use in the penultimate paragraph, I believe you're talking about a particular decision scenario Sia could find herself in.

Yep, I was using "model" to mean "a simplified representation of a complex real world scenario".

  1. ^

For simplicity, we can make this scenario a deterministic known environment, and make sure the number of actions available doesn't change if "improve-technology" is chosen as an action. This way neither of your biases applies.

  2. ^

E.g. we could define a "small-edit" as a small fixed-size change to any location in the state vector, and an "amplified-edit" as a much larger change to any location. This preserves the number of actions, and makes the advantage of "amplified-edit" clear. I can go into more detail if you like; this does depend a little on how we set up the distribution over desires.

Comment by Jeremy Gillen (jeremy-gillen) on Instrumental Convergence? [Draft] · 2023-07-14T19:53:54.895Z · LW · GW

I read about half of this post when it came out. I didn't want to comment without reading the whole thing, and reading the whole thing didn't seem worth it at the time. I've come back and read it because Dan seemed to reference it in a presentation the other day.

The core interesting claim is this:

My conclusion will be that most of the items on Bostrom's laundry list are not 'convergent' instrumental means, even in this weak sense. If Sia's desires are randomly selected, we should not give better than even odds to her making choices which promote her own survival, her own cognitive enhancement, technological innovation, or resource acquisition.

This conclusion doesn't follow from your arguments. None of your models even include actions that are analogous to the convergent actions on that list. 

The non-sequential theoretical model is irrelevant to instrumental convergence, because instrumental convergence is about putting yourself in a better position to pursue your goals later on. The main conclusion seems to come from proposition 3, but the model there is so simple it doesn’t include any possibility of Sia putting itself in a better position for later.

Section 4 deals with sequential decisions, but for some reason mainly gets distracted by a Newcomb-like problem, which seems irrelevant to instrumental convergence. I don't see why you didn't just remove Newcomb-like situations from the model? Instrumental convergence will show up regardless of the exact decision theory used by the agent.

Here's my suggestion for a more realistic model that would exhibit instrumental convergence, while still being fairly simple and having "random" goals across trajectories. Make an environment with 1,000,000 timesteps. Have the world state described by a vector of 1000 real numbers. Have a utility function that is randomly sampled from some Gaussian process (or any other high entropy distribution over functions) on the space of world-state vectors. Assume there exist standard actions which directly make small edits to the world-state vector. Assume that there exist actions analogous to cognitive enhancement, making technology and gaining resources. Intelligence can be used in the future to more precisely predict the consequences of actions on the future world state (you’d need to model a bounded agent for this). Technology can be used to increase the amount or change the type of effect your actions have on the world state. Resources can be spent in the future for more control over the world state. It seems clear to me that for the vast majority of the random utility functions, it's very valuable to have more control over the future world state. So most sampled agents will take the instrumentally convergent actions early in the game and use the additional power later on.

The assumptions I made about the environment are inspired by the real world environment, and the assumptions I've made about the desires are similar to yours, maximally uninformative over trajectories.
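For what it's worth, here is a heavily scaled-down sketch of the kind of simulation I have in mind. Everything in it (the random-feature stand-in for a GP-sampled utility, collapsing the convergent actions into a single "invest" action that amplifies later edits, all the constants) is an illustrative assumption rather than part of the argument above.

```python
# Scaled-down sketch: 2-D state instead of 1000-D, 20 steps instead of 1e6, a
# random-feature function standing in for a GP sample. "Investing" early
# (analogue of gaining technology/resources) multiplies how far each later edit
# can move the state, which enlarges the reachable set; the best reachable
# utility then never goes down, and for a random utility it almost always goes up.
import numpy as np

rng = np.random.default_rng(0)
DIM, STEPS, FEATURES = 2, 20, 64
W = 0.1 * rng.standard_normal((FEATURES, DIM))        # longish lengthscale
b = rng.uniform(0, 2 * np.pi, FEATURES)
alpha = rng.standard_normal(FEATURES)

grid = np.arange(-100, 100.5, 1.0)
lattice = np.array([[x, y] for x in grid for y in grid])
vals = np.cos(lattice @ W.T + b) @ alpha              # random utility on the lattice

def best_reachable(invest_steps):
    power = 1.3 ** invest_steps                       # each later edit moves further
    reach = (STEPS - invest_steps) * power            # max per-coordinate distance reachable
    return vals[np.abs(lattice).max(axis=1) <= reach].max()

print(best_reachable(0), best_reachable(8))           # investing never does worse here
```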

Comment by Jeremy Gillen (jeremy-gillen) on What money-pumps exist, if any, for deontologists? · 2023-06-29T06:19:40.205Z · LW · GW

I'm not sure how to implement the rule "don't pay people to kill people". Say we implement it as a utility function over world-trajectories, where any trajectory in which a killing happens causally downstream of your actions gets MIN_UTILITY. This makes probabilistic tradeoffs, so it's probably not what we want. If we use negative infinity instead, then the agent can't ever take actions in a large or uncertain world. We need to add the patch that the agent must have been aware, at the time of taking its actions, that the actions had some chance of causing murder. I think these are vulnerable to blackmail, because you could threaten to cause murders that are causally downstream from its actions.

Maybe I'm confused and you mean "actions that pattern match to actually paying money directly for murder", in which case it will just use a longer causal chain, or opaque companies that may-or-may-not-cause-murders will appear and trade with it.

If the ultimate patch is "don't take any action that allows unprincipled agents to exploit you for having your principles", then maybe there aren't any edge cases. I'm confused about how to define "exploit" though.

Comment by Jeremy Gillen (jeremy-gillen) on What money-pumps exist, if any, for deontologists? · 2023-06-28T20:33:27.809Z · LW · GW

You leave money on the table in all the problems where the most efficient-in-money solution involves violating your constraint. So there's some selection pressure against you if selection is based on money.
We can (kinda) turn this into a money-pump by charging the agent a fee to violate the constraint on its behalf. Whenever it encounters such a situation, it pays you a fee and you do the killing.
Whether or not this counts as a money pump, I think it satisfies the reasons I actually care about money pumps, which are something like "adversarial agents can cheaply construct situations where I pay them money, but the world isn't actually different".

Comment by Jeremy Gillen (jeremy-gillen) on When is correlation transitive? · 2023-06-24T00:00:56.970Z · LW · GW

With my linear algebra being terrible, I was confused by this:

Until I realized that  and  are basis vectors and  are coordinates on a unit circle, because  and  all have length 1.

Comment by Jeremy Gillen (jeremy-gillen) on Infrafunctions and Robust Optimization · 2023-06-19T21:15:03.958Z · LW · GW

Good point on CDT, I forgot about this. I was using a more specific version of reflective stability.

> - wait.. that doesn't seem right..?

Yeah this is also my reaction. Assuming that bound seems wrong.

I think there is a problem with thinking of the base policy as a known-to-be-acceptably-safe agent, because how can you get this information in the first place, without running that agent in the world? To construct a useful estimate of the expected value of the "safe" agent, you'd have to run it lots of times, necessarily sampling from its most dangerous behaviours.

Unless there is some other non-empirical way of knowing an agent is safe?

Yeah, I was thinking of the base distribution having large support. If you just rule in behaviours, this seems like it'd restrict capabilities too much.

Comment by Jeremy Gillen (jeremy-gillen) on Why don't quantilizers also cut off the upper end of the distribution? · 2023-05-15T07:28:33.897Z · LW · GW

Quantilizing can be thought of as maximizing a lower bound on the expected true utility, where you know that your true utility $U$ is close to your proxy utility function $V$ in some region, such that the (expected) error between them is bounded there. If we shape this closeness assumption a bit differently, such that the approximation gets worse faster, then sometimes it can be optimal to cut off the top of the distribution (as I did here; see some of the diagrams for quantilizers with the top cut off, one of which I'll paste below).

[Image from the linked post: an example action distribution with the top of the distribution cut off.]
The reason normal quantilizers don't do that is that they are minimizing the distance between the base distribution and the action distribution, by a particular measure that falls out of the proof (see above link), which allows the lower bound to be as high as possible. Essentially it's minimizing distribution shift, which allows a better generalization bound.
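For concreteness, here's a minimal sketch of a standard quantilizer over a discrete action set (my own illustration, not code from the linked post; argument names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantilize(proxy_utility, base_probs, q=0.1):
    """Sample an action index from the top-q fraction (by base probability mass)
    of actions ranked by proxy utility, keeping the base distribution's relative
    weights within that slice."""
    order = np.argsort(proxy_utility)[::-1]        # best proxy value first
    cum = np.cumsum(base_probs[order])
    top = order[cum <= q]
    if len(top) == 0:                              # q smaller than the best action's base mass
        top = order[:1]
    p = base_probs[top] / base_probs[top].sum()
    # Within the top slice, each action's probability is at most about 1/q times
    # its base probability, which is what bounds how much upward proxy error
    # can be selected for.
    return rng.choice(top, p=p)
```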

I think this distribution shift perspective is one way of explaining why we need randomization at all. A delta function is a bigger distribution shift than a distribution that matches the shape of the base distribution.
But the next question is: why are we even in a situation where we need to deal with the worst case across possible true utility functions? One story is that we are dealing with an optimizer that is maximizing (true utility + error), and one way to simplify that is to model it as a max-min of (true utility - error), where the min only controls the error function within the restrictions of the known bound.
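Spelled out, I mean something of roughly this shape (my notation, not necessarily matching the post's exact setup): the agent picks an action distribution $\pi$, and the error function $e \ge 0$ is chosen adversarially subject to the known bound under the base distribution $\gamma$,

$$\max_{\pi}\ \min_{e \ge 0 :\ \mathbb{E}_{a\sim\gamma}[e(a)]\le\epsilon}\ \mathbb{E}_{a\sim\pi}\left[V(a)-e(a)\right],$$

where $V$ is the proxy utility and $V-e$ plays the role of the true utility.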

I'm not currently super happy with that story and I'm keen for people to look for alternatives, or variations of soft optimization with different types of knowledge about the relationship between the proxy and true utility. Because intuitively it does seem like taking the 99%ile action should be fine under slightly different assumptions.

One example of this: if we know that $V = U + \epsilon$, where $\epsilon$ is some heavy-tailed noise, and we know the distribution of $\epsilon$ (and of $U$), then we can calculate the actual optimal percentile action to take, and we should deterministically take that action. But this is sometimes quite sensitive to small errors in our knowledge about the distribution of $U$ and particularly of $\epsilon$. My AISC team has been testing scenarios like this as part of their research.
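As a toy version of this kind of calculation (my own sketch; the normal/Student-t distribution choices are arbitrary and not taken from the AISC project's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 100_000

U = rng.normal(size=n_actions)                  # true utility of each action
eps = rng.standard_t(df=2, size=n_actions)      # heavy-tailed proxy error
V = U + eps                                     # proxy utility we actually observe

# Sort actions by proxy value and estimate expected true utility per proxy-percentile bin.
order = np.argsort(V)
bins = np.array_split(U[order], 100)            # bin 0 = worst proxy percentile, bin 99 = best
expected_true = [b.mean() for b in bins]

best_percentile = int(np.argmax(expected_true))
print(f"best proxy percentile to target: ~{best_percentile}")
# With heavy-tailed error, the top proxy percentile is often dominated by large
# upward errors, so the best bin can sit below the 99th percentile.
```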

Comment by Jeremy Gillen (jeremy-gillen) on Infrafunctions and Robust Optimization · 2023-05-15T06:25:13.288Z · LW · GW

I really like infrafunctions as a way of describing the goals of mild optimizers. But I don't think you've described the correct reasons why infrafunctions help with reflective stability. The main reason is that you've hidden most of the difficulty of reflective stability in the closeness bound between the proxy and the true utility.

My core argument is that a normal quantilizer is reflectively stable[1] if you have such a bound. In the single-action setting, where it chooses a policy once at the beginning and then follows that policy, it must be reflectively stable because if the chosen policy constructs another optimizer that leads to low true utility, then that policy must have very low base probability (or the bound can't have been true). In a multiple-action setting, we can sample each action conditional on the previous actions, according to the quantilizer distribution, and this will be reflectively stable in the same way (given the bound).

Adding in observations doesn't change anything here if we treat U and V as being expectations over environments.

The way you've described reflective stability in the dynamic consistency section is an incentive to keep the same utility infrafunction no matter what observations are made. I don't see how this is necessary or even strongly related to reflective stability. Can't we have a reflectively stable CDT agent?

Two core difficulties of reflective stability 

I think the two core difficulties of reflective stability are 1) getting the closeness bound (or similar) and 2) describing an algorithm that lazily does a ~minimal amount of computation for choosing the next few actions. I expect realistic agents need 2 for efficiency. I think utility infrafunctions do help with both of these, to some extent.

The key difficulty of getting a tight closeness bound with normal quantilizers is that simple priors over policies don't clearly distinguish policies that create optimizers. So there's always a region at the top where "create an optimizer" makes up most of the mass. My best guess for a workaround for this is to draw simple conservative OOD boundaries in state-space and policy-space (the base distribution is usually just over policy space, and is predefined). When a boundary is crossed, it lowers the lower bound on the utility (gives Murphy more power). These boundaries need to be simple so that they can be learned from relatively few (mostly in-distribution) examples, or maybe from abstract descriptions. Being simple and conservative makes them more robust to adversarial pressure.

Your utility infrafunction is a nice way to represent lots of simple out-of-distribution boundaries in policy-space and state-space. This is much nicer than storing this information in the base distribution of a quantilizer, and it also allows us to modulate how much optimization pressure can be applied to different regions of state or policy-space.
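To make this concrete, here's a toy sketch of what I mean by boundaries lowering the lower bound (everything here is hypothetical and my own framing, not the formalism from the post):

```python
import numpy as np

# The pessimistic value is "proxy utility minus slack that the adversary
# (Murphy) may claim"; crossing a simple OOD boundary in state- or
# policy-space grants the adversary more slack.

def proxy_utility(traj):
    return sum(step["reward"] for step in traj)

# Hand-written OOD boundaries: (predicate over trajectories, extra slack granted).
BOUNDARIES = [
    (lambda traj: max(np.abs(step["state"]).max() for step in traj) > 10.0, 5.0),
    (lambda traj: any(step["action"] == "build_optimizer" for step in traj), 50.0),
]

def pessimistic_value(traj, base_slack=1.0):
    """Lower bound on true utility: proxy utility minus the adversary's budget."""
    slack = base_slack
    for boundary, extra in BOUNDARIES:
        if boundary(traj):
            slack += extra
    return proxy_utility(traj) - slack

# A planner ranking candidate plans by pessimistic_value will only cross a
# boundary when the proxy gain outweighs the extra pessimism it buys.
```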

Regarding 2, an infrafunction allows calculating on the fly that the consequences of creating a particular optimizer are bad. It can do this as long as the infrafunction treats the agent's own actions and the actions of child-agents as similar, or if it mostly relies on OOD states as the signal that the infrafunction should be uncertain (have lots of low spikes), or some combination of these. Since the max-min calculation is the motivation for randomizing in the first place, an agent that uses this will create other agents that randomize in the same way. If the utility infrafunction is only defined over policies, then it doesn't really give us an efficiency advantage because we already had to calculate the consequences of most policies when we proved the bound.

One disadvantage, which I think can't be avoided, is that an infrafunction over histories is incentivized to stop humans from doing actions that lead to out-of-distribution worlds, whereas an infrafunction over policies is not (to the extent that stopping humans doesn't itself cross boundaries). This seems necessary because it needs to consider the consequences of the actions of optimizers it creates, and this generalizes easily to all consequences since it needs to be robust.
 

  1. ^

    Where I'm defining reflective stability as: If you have an anti-Goodhart modification in your decision process (e.g. randomization), ~never follow a plan that indirectly avoids the anti-Goodhart modification (e.g. making a non-randomized optimizer). 

    The key difficulty here being that the default pathway for achieving a difficult task involves creating new optimization procedures, and by default these won't have the same anti-Goodhart properties as the original.

Comment by Jeremy Gillen (jeremy-gillen) on Soft optimization makes the value target bigger · 2023-04-11T17:16:38.899Z · LW · GW

Thanks! 

  1. I think it's more accurate to say it's incomplete. And the standard generalization bound math doesn't make that prediction as far as I'm aware, it's just the intuitive version of the theory that does. I've been excited by the small amount of singular learning theory stuff I've read. I'll read more, thanks for making that page.
  2. Fantastic!
Comment by Jeremy Gillen (jeremy-gillen) on Goodhart's Law Causal Diagrams · 2023-03-31T04:50:14.776Z · LW · GW

No. Justin knows roughly the content for the intended future posts, but after getting started writing I didn't feel like I understood it well enough to distill it properly and I lost motivation, and since then I've become too busy.
I'll send you the notes that we had after Justin explained his ideas to me.

Comment by Jeremy Gillen (jeremy-gillen) on What's wrong with the paperclips scenario? · 2023-01-07T18:48:41.613Z · LW · GW

The paperclip metaphor is not very useful if interpreted as "humans tell the AI to make paperclips, and it does that, and the danger comes from doing exactly what we said because we said a dumb goal".

There is a similar-ish interpretation, which is good and useful: "if the AI is going to do exactly what you say, you have to be insanely precise when you tell it what to do, otherwise it will Goodhart the goal." The danger comes from Goodharting, rather than from humans telling it a dumb goal. The paperclip example can be used to illustrate this, and I think this is why it's commonly used.

And in the first tweet he is referencing (with inner alignment) the fact that we will have very imprecise (think evolution-like) methods of communicating a goal to an AI-in-training.

So apparently he intended the metaphor to communicate that the AI-builders weren't trying to set "make paperclips" as the goal; they were aiming for a more useful goal, and "make paperclips" happened to be the goal that it latched on to. Tiny molecular squiggles is a better example here because it's a more realistic optimum of an imperfectly learned goal representation.

Comment by Jeremy Gillen (jeremy-gillen) on Soft optimization makes the value target bigger · 2023-01-05T15:32:32.998Z · LW · GW
  • On it always being a rescaled subset: Nice! This explains the results of my empirical experiments. Jessica made a similar argument for why quantilizers are optimal, but I hadn't gotten around to trying to adapt it to this slightly different situation. It makes sense now that the maximin distribution is like quantilizing against the value lower bound, except that the value lower bound changes if you change the minimax distribution. This explains why some of the distributions are exactly quantilizers but some are not; it depends on whether that value lower bound drops lower than the start of the policy distribution.
     
  • On planning: Yeah it might be hard to factorize the final policy distribution. But I think it will be easy to approximately factorize the prior in lots of different ways. And I'm hopeful that we can prove that some approximate factorizations maintain the same q value, or maybe only have a small impact on the q value. Haven't done any work on this yet.
    • If it turns out we need near-exact factorizations, we might still be able to use sampling techniques like rejection sampling to correct an approximate sampling distribution, because we have easy access to the correct density of the samples that we have generated (just prior/q); we just need an approximate distribution to use for getting high-value samples more often, which seems straightforward (rough sketch below).
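A rough sketch of the rejection-sampling correction I have in mind (generic; the function arguments are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_corrected(propose, proposal_density, prior_density, in_top_q, q, M):
    """Rejection-sample from the target density prior(x)/q on the top-q set
    (and 0 elsewhere), using a cheaper proposal that over-samples high-value
    plans. M must upper-bound (prior(x)/q) / proposal_density(x) on the top-q set."""
    while True:
        x = propose()
        if not in_top_q(x):
            continue                      # target density is 0 here
        target = prior_density(x) / q
        if rng.uniform() < target / (M * proposal_density(x)):
            return x
```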
Comment by Jeremy Gillen (jeremy-gillen) on Soft optimization makes the value target bigger · 2023-01-03T22:03:52.221Z · LW · GW

Thanks for clarifying, I misunderstood your post and must have forgotten about the scope, sorry about that. I'll remove that paragraph. Thanks for the links, I hadn't read those, and I appreciate the pseudocode.

I think most likely I still don't understand what you mean by grader-optimizer, but it's probably better to discuss on your post after I've spent more time going over your posts and comments.

My current guess in my own words is: A grader-optimizer is something that approximates argmax (has high optimization power)?
And option (1) acts a bit like a soft optimizer, but with more specific structure related to shards, and how it works out whether to continue optimizing?

Comment by Jeremy Gillen (jeremy-gillen) on Soft optimization makes the value target bigger · 2023-01-03T15:05:00.837Z · LW · GW

Why does the infinite limit of value learning matter if we're doing soft optimization against a fixed utility distribution?

Comment by Jeremy Gillen (jeremy-gillen) on Soft optimization makes the value target bigger · 2023-01-03T14:59:52.471Z · LW · GW

> I also think that it's probably worth considering soft optimization to the old Impact Measures work from this community -- in particular, I think it'd be interesting to cast soft optimization methods as robust optimization, and then see how the critiques raised against impact measures (e.g. in this comment or this question) apply to soft optimization methods like RL-KL or the minimax objective you outline here.

Thanks for linking these, I hadn't read most of them. As far as I can tell, most of the critiques don't really apply to soft optimization. The main one that does is Paul's "drift off the rails" thing. I expect we need to use the first AGI (with soft opt) to help solve alignment in a more permanent and robust way, then use that to make a more powerful AGI that helps avoid "drifting off the rails".

In my understanding, impact measures are an important part of the utility function that we don't want to get wrong, but not much more than that. Whereas soft optimization directly removes Goodharting of the utility function. It feels like the correct formalism for attacking the root of that problem. Whereas impact measures just take care of a (particularly bad) symptom.

Abram Demski has a good answer to the question you linked that contrasts mild optimization with impact measures, and it's clear that mild optimization is preferred. And Abram actually says:

> An improvement on this situation would be something which looked more like a theoretical solution to Goodhart's law, giving an (in-some-sense) optimal setting of a slider to maximize a trade-off between alignment and capabilities ("this is how you get the most of what you want"), allowing ML researchers to develop algorithms orienting toward this.

This is exactly what I've got.

Comment by Jeremy Gillen (jeremy-gillen) on Soft optimization makes the value target bigger · 2023-01-03T14:38:32.107Z · LW · GW

> I agree that it's good to try to answer the question, under what sort of reliability guarantee is my model optimal, and it's worth making the optimization power vs robustness trade off explicit via toy models like the one you use above.

> That being said, re: the overall approach. Almost every non-degenerate regularization method can be thought of as "optimal" wrt some robust optimization problem (in the same way that non-degenerate optimization can be trivially cast as Bayesian optimization) -- e.g. the RL-KL objective with respect to some reference policy is optimal for the following minimax problem:

> for some coefficient. So the question is not so much "do we cap the optimization power of the agent" (which is a pretty common claim!) but "which way of regularizing agent policies more naturally captures the robust optimization problems we want solved in practice".

Yep, agreed. Except I don't understand how you got that equation from RL with KL penalties, can you explain that further? 

I think the most novel part of this post is showing that this robust optimization problem (maximizing average utility while avoiding selection for upward errors in the proxy) is the one we want to solve, and that it can be done with a bound that is intuitively meaningful and can be determined without just guessing a number.

> (It's also worth noting that an important form of implicit regularization is the underlying capacity/capability of the model we're using to represent the policy.)

Yeah I wouldn't want to rely on this without a better formal understanding of it though. KL regularization I feel like I understand.

Comment by Jeremy Gillen (jeremy-gillen) on Soft optimization makes the value target bigger · 2023-01-03T14:10:20.392Z · LW · GW

I've probably misunderstood your comment, but I think this post already does most of what you are suggesting (except for the very last bit about including human feedback)? It doesn't assume the human's utility function is some real thing that it will update toward; it has a fixed distribution over utility throughout deployment. There's no mechanism for updating that distribution, so it can't become arbitrarily certain about the utility function.

And that distribution over utility functions isn't treated like epistemic uncertainty; it's used to find a worst-case lower bound on utility?

Comment by Jeremy Gillen (jeremy-gillen) on Soft optimization makes the value target bigger · 2023-01-03T10:20:23.036Z · LW · GW

Good point, policies that have upward errors will still be preferentially selected for (a little). However, with this approach, the amount of Goodharting should be constant as the proxy quality (and hence optimization power) scales up.

I agree with your second point, although I think there's a slight benefit over original quantilizers because the quantile parameter $q$ is set theoretically, rather than arbitrarily by hand. Hopefully this makes it less tempting to mess with it.

Comment by Jeremy Gillen (jeremy-gillen) on Neural Tangent Kernel Distillation · 2022-10-25T13:55:20.077Z · LW · GW

Thanks, you are right on both. I don't know how I missed the simplification, I remember wanting to make the analytical form as simple as possible.

I really should have added the reference for this, since I just copied it from a paper, so I did that in a footnote. I just followed up the derivation a bit further and the parts I checked seem solid, but annoying that it's spread out over three papers.

Comment by Jeremy Gillen (jeremy-gillen) on Neural Tangent Kernel Distillation · 2022-10-21T12:03:54.706Z · LW · GW

Yeah good point, I should have put more detail here.

My understanding is that, for most common initialization distributions and architectures, both of those terms vanish in the infinite width limit. This is because they both end up being expectations of random variables that are symmetrically distributed around 0.

However, in the finite width regime if we want to be precise, we can simply add those terms back onto the kernel regression.

So really, with finite width:
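(As a sketch of the kind of expression I mean: the standard linearized-NTK kernel regression, with the initialization-output terms kept and a ridge term $\lambda$; this is my notation and may not exactly match the terms referred to above.)

$$f(x) \approx f_0(x) + \Theta(x, X)\big(\Theta(X, X) + \lambda I\big)^{-1}\big(y - f_0(X)\big),$$

where $\Theta$ is the empirical NTK and $f_0$ is the network's output at initialization.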

 

There are a few other very non-rigorous parts of our explanation. Another big one is that the learned function is underspecified by the data in the infinite width limit, so it could fit the data in lots of ways. Stuff about ridge-regularized regression and bringing in details about gradient descent fixes this, I believe, but I'm not totally sure whether it changes anything at finite width.

Comment by Jeremy Gillen (jeremy-gillen) on Jeremy Gillen's Shortform · 2022-10-19T16:14:43.913Z · LW · GW

Technical alignment game tree draft

I've found it useful to organize my thoughts on technical alignment strategies approximately by the AGI architecture (and AGI design related assumptions). The target audience of this text is mostly myself; when I have it better worded I might put it in a post. Mostly ignoring corrigibility-style approaches here.

  • End-to-end trained RL: Optimizing too hard on any plausible training objective leads to gaming that objective
    • Scalable oversight: Adjust the objective to make it harder to game, as the agent gets more capable of gaming it.
      • Problem: Transparency is required to detect gaming/deception, and this problem gets harder as you scale capabilities. Additionally, you need to be careful to scale capabilities slowly.
    • Shard theory: Agents that can control their data distribution will hit a phase transition where they start controlling their data distribution to avoid goal changes. Plausibly we could make this phase transition happen before deception/gaming, and train it to value honesty + other standard human values.
      • Problem: Needs to be robust to “improving cognition for efficiency & generality”, i.e. goal directed part of mind overriding heuristic morality part of mind.
  • Transfer learning
    • Natural abstractions: Most of the work of identifying human values comes from predictive world modeling. This knowledge transfers, such that training the model to pursue human goals is relatively data-efficient.
      • Failure mode: Can't require much optimization to get high capabilities, otherwise the capabilities optimization will probably dominate the goal learning, and re-learn most of the goals.
  • We have an inner aligned scalable optimizer, which will optimize a given precisely defined objective (given a WM & action space).
    • Vanessa!IRL gives us an approach to detecting and finding the goals of agents modeled inside an incomprehensible ontology.
      • Unclear whether the way Vanessa!IRL draws a line between irrationalities and goal-quirks is equivalent to the way a human would want this line to be drawn on reflection.
  • We have human-level imitators
    • Do some version of HCH
      • But imitators probably won’t be well aligned off distribution, for normal goal misgeneralization reasons. HCH moves them off distribution, at least a bit.
  • End-to-end RL + we have unrealistically good interpretability
    • Re-target the search
      • Interpretability probably won’t ever be this good, and if it was, we might not need to learn the search algorithm (we could build it from scratch, probably with better inner alignment guarantees).