Posts

More people getting into AI safety should do a PhD 2024-03-14T22:14:48.855Z
2023 Alignment Research Updates from FAR AI 2023-12-04T22:32:19.842Z
What's new at FAR AI 2023-12-04T21:18:03.951Z
Even Superhuman Go AIs Have Surprising Failure Modes 2023-07-20T17:31:35.814Z
AI Safety in a World of Vulnerable Machine Learning Systems 2023-03-08T02:40:43.139Z
CIRL Corrigibility is Fragile 2022-12-21T01:40:50.232Z
Introducing the Fund for Alignment Research (We're Hiring!) 2022-07-06T02:07:47.965Z

Comments

Comment by AdamGleave on More people getting into AI safety should do a PhD · 2024-03-17T00:47:48.177Z · LW · GW

Whether a PhD is something someone will enjoy is so dependent on individual personality, advisor fit, etc that I don't feel I can offer good generalized advice. Generally I'd suggest people trying to gauge fit try doing some research in an academic environment (e.g. undergrad/MS thesis, or a brief RA stint after graduating) and talk to PhD students in their target schools. If after that you think you wouldn't enjoy a PhD then you're probably right!

Personally I enjoyed my PhD. I had smart & interesting colleagues, an advisor who wanted me to do high-quality research (not just publish), almost-complete control over how I spent my time, and the freedom to explore areas I found interesting & important in depth. The compensation is low but comes with excellent job security, and since I had some savings I lived comfortably. Unless I take a sabbatical I will probably never again have the time to go as deep into a research area, so in a lot of ways I really cherish my PhD time.

I think a lot of the negatives of PhDs really feel like negatives of becoming a research lead in general. Trying to create something new with limited feedback loops is tough, and can be psychologically draining if you tie your self-worth to your work output (don't do this! but easier said than done for the kind of person attracted to these careers). Research taste takes many years to develop -- as do most complex skills -- and so on.

Comment by AdamGleave on More people getting into AI safety should do a PhD · 2024-03-17T00:37:57.736Z · LW · GW

I'm sympathetic to a lot of this critique. I agree that prospective students should strive to find an advisor that is "good at producing clear, honest and high-quality research while acting in high-integrity ways around their colleagues". There are enough of these you should be able to find one, and it doesn't seem worth compromising.

Concretely, I'd definitely recommend digging into an advisor's research and asking their students hard questions prior to taking any particular PhD offer. There absolutely are labs that prioritize publishing above all else, turn a blind eye to academic fraud or at least brush accidental non-replicability under the rug, or just have a toxic culture. You want to avoid those at all costs.

But I disagree with the punchline that if this bar isn't satisfied then "almost any other job will be better preparation for a research career". In particular, I think there's a ton of concrete skills a PhD teaches that don't need a stellar advisor. For example, there's some remarkably simple things like having an experimental baseline, running multiple seeds and reporting confidence intervals that a PhD will absolutely drill into you. These things are remarkably often missing from research produced by those I see in the AI safety ecosystem who have not done a PhD or been closely mentored by an experienced researcher.
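To make that concrete, here's the kind of multi-seed reporting I have in mind -- a minimal sketch with made-up numbers:

```python
# Minimal sketch (hypothetical numbers): compare a method against a baseline
# across several random seeds and report a 95% confidence interval on the mean.
import numpy as np
from scipy import stats

baseline_scores = np.array([0.61, 0.58, 0.64, 0.60, 0.63])  # one entry per seed
method_scores = np.array([0.66, 0.71, 0.65, 0.69, 0.70])

def mean_ci(scores, confidence=0.95):
    mean = scores.mean()
    sem = stats.sem(scores)  # standard error of the mean
    half_width = sem * stats.t.ppf((1 + confidence) / 2, df=len(scores) - 1)
    return mean, half_width

for name, scores in [("baseline", baseline_scores), ("method", method_scores)]:
    mean, hw = mean_ci(scores)
    print(f"{name}: {mean:.3f} +/- {hw:.3f} (95% CI over {len(scores)} seeds)")
```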

Additionally, I've seen plenty of people do PhDs under an advisor who lacks one or more of these properties, and most of them turned out to be fine researchers. It's hard to say what the counterfactual is -- the PhD admissions process might be doing a lot of work here -- but I think it's important to recognize the advisor is only one of many sources of mentorship and support you get in a PhD: you also have taught classes, your lab mates, your extended cohort, senior post-docs, peer review, etc. To be clear, none of these mentorship sources are perfect, but part of your job as a student is to decide who to listen to & when. If someone can't do that then they'll probably not get very far as a researcher no matter what environment they're in.

Comment by AdamGleave on Managing catastrophic misuse without robust AIs · 2024-02-17T19:22:00.081Z · LW · GW

Thanks for the post Ryan -- I agree that given the difficulty in making models actually meaningfully robust the best solution to misuse in the near-term is going to be via a defence in depth approach consisting of filtering the pre-training data, input filtering, output filtering, automatic and manual monitoring, KYC checks, etc.

At some point though we'll need to grapple with what to do about models that are superhuman in some domains related to WMD development, cybercrime or other potential misuses. There's glimmers of this already here, e.g. my impression is that AlphaFold is better than human experts at protein folding. It does not seem far-fetched that automatic drug discovery AI systems in the near future might be better than human experts at finding toxic substances (Urbina et al, 2022 give a proof of concept). In this setting, a handful of queries that slip through a model's defences might be dangerous: "how to build a really bad bioweapon" might be something the system could make significant headway on zero-shot. Additionally, if the model is superhuman, then it starts becoming attractive for nation-state or other well-resourced adversaries to seek to attack it (whereas at human-level, they can just hire their own human experts). The combination of lower attack tolerance and increased sophistication of attacks makes me somewhat gloomy this regime will hold up indefinitely.

Now I'm still excited to see the things you propose be implemented in the near-term: they're some easy wins, and lay foundations for a more rigorous regime later (e.g. KYC checks seem generally really helpful in mitigating misuse). But I do suspect that in the long-run we'll need a more principled solution to security, or simply refrain from training such dangerous models.

Comment by AdamGleave on Alignment Grantmaking is Funding-Limited Right Now · 2023-08-01T23:45:15.115Z · LW · GW

It can definitely be worth spending money when there's a clear case for it improving employee productivity. I will note there are a range of both norms and physical layouts compatible with open-plan, ranging from "everyone screaming at each other and in line of sight" trading floor to "no talking library vibes, desks facing walls with blinders". We've tried to make different open plan spaces zoned with different norms and this has been fairly successful, although I'm sure some people will still be disturbed by even library-style areas and be more productive in a private office.

Comment by AdamGleave on Even Superhuman Go AIs Have Surprising Failure Modes · 2023-07-23T22:47:16.125Z · LW · GW

When I started working on this project, a number of people came to me and told me (with varying degrees of tact) that I was wasting my time on a fool's errand. Around half the people told me they thought it was extremely unlikely I'd find such a vulnerability. Around the other half told me such vulnerabilities obviously existed, and there was no point demonstrating it. Both sets of people were usually very confident in their views. In retrospect I wish I'd done a survey (even an informal one) before conducting this research to get a better sense of people's views.

Personally I'm in the camp that the existence of vulnerabilities like these was highly likely, given the failures we've seen in other ML systems and the lack of any worst-case guarantees. But I was very unsure going in how easy they'd be to find. Go is a pretty limited domain, and it's not enough to beat the neural network: you've got to beat Monte-Carlo Tree Search as well (and MCTS does have worst-case guarantees, albeit only in the limit of infinite search). Additionally, there are results showing that scale improves robustness (e.g. more pre-training data reduces vulnerability to adversarial examples in image classifiers).

In fact, although the method we used is fairly simple, actually getting everything to work was non-trivial. There was one point after we'd patched the first (rather degenerate) pass-attack that the team was doubting whether our method would be able to beat the now stronger KataGo victim. We were considering cancelling the training run, but decided to leave it going given we had some idle GPUs in the cluster. A few days later there was a phase shift in the win rate of the adversary: it had stumbled across some strategy that worked and finally was learning.

This is a long-winded way of saying that I did change my mind as a result of these experiments (towards robustness improving less than I'd previously thought with scale). I'm unsure how much effect it will have on the broader ML research community. The paper is getting a fair amount of attention, and is a nice pithy example of a failure mode. But as you suggest, the issue may be less a difference in concrete belief (surely any ML researcher would acknowledge adversarial examples are a major problem and one that is unlikely to be solved any time soon), than that of culture (to what degree is a security mindset appropriate?).

This post was written as a summary of the results of the paper, intended for a fairly broad audience, so we didn't delve much into the theory of change behind this agenda here. You might find this blog post describing the broader research agenda this paper fits into provides some helpful context, and I'd be interested to hear your feedback on that agenda.

Comment by AdamGleave on Even Superhuman Go AIs Have Surprising Failure Modes · 2023-07-22T04:00:13.465Z · LW · GW

Thanks for flagging this disagreement Ryan. I enjoyed our earlier conversation (on LessWrong and in-person) and updated in favor of the sample efficiency framing, although we (clearly) still have some significant differences in perspective here. Would love to catch up again sometime and see if we can converge more on this. I'll try and summarize my current take and our key disagreements for the benefit of other readers.

I think I mostly agree with you that in the special case of vanilla RLHF this problem is equivalent to a sample efficiency problem. Specifically, I'm referring to the case where we perform RL on a learned reward model; that reward model is trained based on human feedback from an earlier version of the RL policy; and this process iterates. In this case, if the RL algorithm learns to exploit the reward model (which it will, in contemporary systems, without some regularization like a KL penalty) then the reward model will receive corrective feedback from the human. At worst, this process will just not converge, and the policy will just bounce from one adversarial example to another -- useless, but probably not that dangerous. In practice, it'll probably work fine given enough human data and after tuning parameters.
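For concreteness, the shape of the iterated process I'm describing is roughly the following -- a structural sketch only, where every helper function is a hypothetical stand-in rather than any real implementation:

```python
# Rough sketch of iterated RLHF as described above. All the pieces
# (sample_completions, collect_human_preferences, fit_reward_model, rl_step,
# kl_penalty) are passed in as callables: this is an outline of the loop,
# not a runnable training script for any particular system.
def iterated_rlhf(
    policy,
    prompts,
    sample_completions,          # (policy, prompts) -> completions
    collect_human_preferences,   # completions -> list of labeled comparisons
    fit_reward_model,            # comparisons -> reward model (callable)
    rl_step,                     # (policy, reward_fn, prompts) -> updated policy
    kl_penalty=None,             # (policy, completion) -> KL estimate vs. base model
    num_rounds=5,
    rl_steps_per_round=1000,
    kl_coef=0.1,
):
    all_comparisons = []
    for _ in range(num_rounds):
        # Sample from the *current* policy so humans label (and can correct)
        # whatever it is currently doing, including reward-model exploits.
        completions = sample_completions(policy, prompts)
        all_comparisons += collect_human_preferences(completions)

        # Refit the reward model on all human feedback gathered so far.
        reward_model = fit_reward_model(all_comparisons)

        # Optimize against the reward model, with an optional KL penalty toward
        # the base model to limit how far RL pushes into regions the reward
        # model has never seen.
        def reward_fn(completion):
            r = reward_model(completion)
            if kl_penalty is not None:
                r -= kl_coef * kl_penalty(policy, completion)
            return r

        for _ in range(rl_steps_per_round):
            policy = rl_step(policy, reward_fn, prompts)
    return policy
```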

However, I think sample efficiency could be a really big deal! I expect that resolving this issue of overseers being exploited could change the asymptotic sample complexity (e.g. exponential to linear) rather than just changing the constant factor. My understanding is that your take is that sample efficiency is unlikely to be a problem because RLHF works fine now, is fairly sample efficient, and improves with model scale -- so why should we expect it to get worse?

I'd argue first that sample efficiency now may actually be quite bad. We don't exactly have any contemporary model that I'd call aligned. GPT-4 and Claude are a lot better than what I'd expect from base models their size -- but "better than just imitating internet text" is a low bar. I expect if we had ~infinite high quality data to do RLHF on these models would be much more aligned. (I'm not sure if having ~infinite data of the same quality that we do now would help; I tend to assume you can trade less quantity for increased quality, but there are obviously some limits here.)

I'm additionally concerned that sample efficiency may be highly task dependent. RLHF is a pretty finicky method, so we're tending to see the success cases of it. What if there are just certain tasks that it's really hard to use RLHF for (perhaps because the base model doesn't already have a good representation of them)? There'll be a strong economic pressure to develop systems that do those tasks anyway, just using less reliable proxies for the task objective.

(A similar argument will apply for various recursive oversight schemes or debate.)

This might be the most interesting disagreement, and I'd love to dig into this more. With RLHF I can see how you can avoid the problem with sufficient samples since the human won't be fooled by the AdvEx. But this stops working in a domain where you need scalable oversight as the inputs are too complex for a human to judge, so can't provide any input.

The strongest argument I can see for your view is that scalable oversight procedures already have to deal with a human that says "I don't know" for a lot of inputs. So, perhaps you can make a base model that perfectly mimics what the human would say on a large subset of inputs, and for AdvEx's (as well as some other inputs) says "I don't know". This is still a hard problem -- my impression was adversarial example detection is still far from solved -- but is plausibly a fair bit easier than full robustness (which I suspect isn't possible). Then you can just use your scalable oversight procedure to make the "I don't knows" go away.
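A toy sketch of that abstaining-overseer idea -- both `judge` and `detector` here are hypothetical callables, not anything that currently exists:

```python
# Toy sketch: wrap a learned judge with a detector, and answer "I don't know"
# whenever the detector flags the input as a likely adversarial example or
# otherwise out-of-distribution, rather than risk being fooled.
IDK = "I don't know"

def abstaining_overseer(judge, detector, flag_threshold=0.5):
    def oversee(query):
        if detector(query) > flag_threshold:  # suspicion score in [0, 1]
            return IDK                        # punt to the scalable oversight process
        return judge(query)                   # otherwise mimic the human judgement
    return oversee
```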

On the question of inductive biases, you write:

Alternatively, if you think the issue is that periodically being incentivized to adversarially attack the reward model has serious problematic effects on the inductive biases of RL, it seems relevant to argue for why this would be the case. I don't really see why this would be important. It seems like periodically being somewhat trained to find different advexes shouldn't have much effect on how the AI generalizes?

I think this is an area where we disagree but it doesn't feel central to my view -- I can see it going either way, and I think I'd still be concerned by whether the oversight process is robust even if the process wasn't path dependent (e.g. we just did random restarting of the policy every time we update the reward model).

Comment by AdamGleave on Alignment Grantmaking is Funding-Limited Right Now · 2023-07-22T03:32:15.116Z · LW · GW

Thanks, that's a good link. In our case our assets significantly exceed the FDIC $250k insurance limit and there are operational costs to splitting assets across a large number of banks. But a high-interest checking account could be a good option for many small orgs.

Comment by AdamGleave on Even Superhuman Go AIs Have Surprising Failure Modes · 2023-07-20T23:53:06.119Z · LW · GW

Does this circle exploit have any connection to convolutions? That was my first thought when I saw the original writeups, but nothing here seems to help explain where the exploit is coming from. All of the listed agents vulnerable to it, AFAIK, make use of convolutions. The description you give of Wu's anti-circle training sounds a lot like you would expect from an architectural problem like convolution blindness: training can solve the specific exploit but then goes around in cycles or circles (ahem), simply moving the vulnerability around, like squeezing a balloon.

We think it might. One weak point against this is that we tried training CNNs with larger kernels and the problem didn't improve. However, it's not obvious that larger kernels would fix it (it gives the model less need for spatial locality, but it might still have an inductive bias towards it), and the results are a bit confounded since we trained the CNN on historical KataGo self-play training data rather than generating fresh self-play data. We've been considering training a version of KataGo from scratch (generating new self-play data) to use vision transformers, which would give a cleaner answer to this. It'd be somewhat time consuming though, so I'm curious to hear how interesting you and other commenters would find this result so we can prioritize.

We're also planning on doing mechanistic interpretability to better understand the failure mode, which might shed light on this question.

Do you know they are distinct? The discussion of Go in that paper is extremely brief and does not describe what the exploitation is at all, AFAICT. Your E3 also doesn't seem to describe what the Timbers agent does.

My main reason for believing they're distinct is that an earlier version of their paper includes a Figure 3 providing an example Go board that looks fairly different to ours. It's a bit hard to compare since it's a terminal board with no move history, but it doesn't look like what would result from capture of a large circular group. But I do wish the Timbers paper went into more detail on this, e.g. including full game traces from their latest attack. I encouraged the authors to do this but it seems like they've all moved on to other projects since then and have limited ability to revise the paper.

Comment by AdamGleave on Alignment Grantmaking is Funding-Limited Right Now · 2023-07-20T20:25:49.011Z · LW · GW

This matches my impression. FAR could definitely use more funding. Although I'd still at the margin rather hire someone above our bar than e.g. have them earn-to-give and donate to us, the math is getting a lot closer than it used to be, to the point where those with excellent earning potential and limited fit for AI safety might well have more impact pursuing a philanthropic pathway.

I'd also highlight that there's a serious lack of diversity in funding. As others in the thread have mentioned, the majority of people's funding comes (directly or indirectly) from OpenPhil. While I think OpenPhil does a good job trying to mitigate this (e.g. being careful about power dynamics, giving organizations exit grants if they do decide to stop funding an org, etc), it's ultimately not a healthy dynamic, and OpenPhil appears to be quite capacity constrained in terms of grant evaluation. So, the entry of new funders would help diversify this in addition to increasing total capacity.

One thing I don't see people talk about as much but also seems like a key part of the solution: how can alignment orgs and researchers make more efficient use of existing funding? Spending that was appropriate a year or two ago when funding was plentiful may not be justified any longer, so there's a need to explicitly put in place appropriate budgets and spending controls. There's a fair amount of cost-saving measures I could see the ecosystem implementing that would have limited if any hit on productivity: for example, improved cash management (investing in government money market funds earning ~5% rather than 0% interest checking accounts); negotiating harder with vendors (often possible to get substantial discounts on things like cloud compute or commercial real-estate); and cutting back on some fringe benefits (e.g. more/higher-density open plan rather than private offices). I'm not trying to point fingers here: I've made missteps here as well, for example FAR's cash management currently has significant room for improvement -- we're in the process of fixing this and plan to share a write-up of what we found with other orgs in the next month.

Comment by AdamGleave on AI Safety in a World of Vulnerable Machine Learning Systems · 2023-06-20T02:26:35.530Z · LW · GW

You write:

I still don't understand which of (1), (2), or (3) you're most worried about.

Sample efficiency isn't the main way I think about this topic so it's a bit difficult to answer. I find all these defeaters fairly plausible, but if I had to pick the central concern it'd be (3).

I tend to view ML training as a model taking a path through a space of possible programs. There are some programs that are capable and aligned with our interests; others that are capable but will actively pursue harmful goals; and of course many other programs that just don't do anything particularly useful. Assuming we start with a model that is aligned (where "aligned" could include "model cannot do anything useful so does not cause any harm") and we only reward positive behavior, I find it plausible that we can hill-climb to more capable models while preserving alignment.

However, suppose we at some point err and reward undesirable behavior. (This could occur due to incorrect human feedback, or a reward model that is not robust, or some other issues.) At this point, we're training a sub-component of the system that is actively opposed to our interests. Hopefully, we eventually discover this sub-component, and can then disincentivize it in the training process. But at that point, there is some uncertainty in my mind: will the training process remove the sub-component, or simply train the sub-component into being better able to fool the training process?

Now, we don't need the reward model to be perfectly robust to avoid this (as you quite rightly point out), just robust in the region of policy space around the current policy where the RL algorithm is likely to explore. But empirically current reward model robustness falls short of even this.

In response to:

2. Harmless base model. If the foundation model starts off harmless (not necessarily aligned, just not actively trying to cause harm), then I'd expect RLHF'ing it to only improve things so long as the training signal never rewards bad behavior. However, the designers want the model to significantly outperform humans at this task. The model has capacity to learn to do this, but can't just leverage existing capabilities in the foundation model, as the performance of that model is limited to that of the best humans it saw in the self-supervised training data. So, we need to do RL for many more time steps. Collecting fresh human data for that is prohibitive, so we rely on a reward model -- unfortunately that gets hacked.

you write:

Are you assuming that we can't collect human data online as the policy optimizes against the reward model? (People currently do collect data online to avoid getting hacked like this.) This case seems probably hopeless to me without very strong regularization (I think you agree with this being mostly hopeless), but it also seems easy to avoid by just collecting human data online.

No, I do expect online data collection to take place, I just don't expect to be able to do that data collection fast enough or in large enough volumes to kick in before hacking takes place. I think in your taxonomy this is defeater (2): I think we'll need substantially more samples to train superhuman models than we do for human-level models, as the demands on RLHF switch from localizing a task the network already knows how to perform to teaching the model to perform a new capability (safely). (I will note online data collection is a pain and people seem to try to do as little of it as possible.)

Comment by AdamGleave on AI Safety in a World of Vulnerable Machine Learning Systems · 2023-06-20T01:36:54.559Z · LW · GW

Oh, we're using terminology quite differently then. I would not call (a) reward hacking, as I view the model as being the reward (to the RL process), whereas humans are not providing reward at all (but rather some data that gets fed into a reward model's learning process). I don't especially care about what definitions we use here, but do wonder if this means we're speaking past each other in other areas as well.

Comment by AdamGleave on AI Safety in a World of Vulnerable Machine Learning Systems · 2023-06-20T01:34:14.423Z · LW · GW

Ah, that paper makes a lot more sense. A reward model was attractive in the original Deep RL From Human Preferences paper because the environment was complex and non-differentiable: using RL was a natural fit. It's always seemed a bit stranger to use RL for fine-tuning language models, especially in the prompt-completion setting where the "environment" is trivial. (RL becomes more natural when you start introducing external tools, or conversations with humans.)

I'll need to take a closer look at the paper, but it looks like they derive the DPO objective by starting from the RL objective under KL optimization. So if it does what it says on the tin, then I'd expect the resulting policy incentives to be similar. My hunch is that the problem of reward hacking has shifted from an explicit to an implicit problem rather than being eliminated, although I'm certainly not confident in this. Could be interesting to study using a similar approach to the Scaling Laws for Reward Model Overoptimization paper.
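For reference, my read of the objective (hedged -- I've only skimmed the paper) is roughly the following, where the log-probabilities are assumed to be summed over the tokens of each completion:

```python
# Sketch of the DPO loss as I understand it: train the policy directly on
# preference pairs, with an implicit KL anchor to the reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    chosen_logratio = policy_logp_chosen - ref_logp_chosen
    rejected_logratio = policy_logp_rejected - ref_logp_rejected
    # Push up the (implicit, beta-scaled) reward margin of chosen over rejected.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```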

Comment by AdamGleave on AI Safety in a World of Vulnerable Machine Learning Systems · 2023-06-19T03:17:37.065Z · LW · GW

Thanks for the follow-up, this helps me understand your view!

At any given point, the reward model will be vulnerable to arbitrary adversarial attacks under sufficient optimization pressure, but we don't need arbitrary optimization against any given reward model. Like, each human update lets you optimize a bit more against the reward model which gets you the ability to get somewhat closer to the policy you actually want.

Sure, this feels basically right to me. My reframing of this would be that we could in principle do RL directly with feedback provided by a human. Learning a reward model lets us gain some sample efficiency over this, but sometimes it fails to generalize correctly to what a human would say, and adversarial examples are an important special case of this. But provided the policy is the final output of this process, not the reward model, it doesn't matter whether the reward model is robust -- just that the feedback the policy has received has steered it into a good position.

It seems to me like your views either imply that sample efficiency is low enough now that high quality RLHF currently can't compete with other ways of training AIs which are less safe but have cheaper reward signals (e.g., heavy training on automated outcomes based feedback or low quality human reward signal). Or possibly that this will happen in the future as models get more powerful (reversing the current trend toward better sample efficiency). My understanding is that RLHF is quite commercially popular and sample efficiency isn't a huge issue. Perhaps I'm missing some important gap between how RLHF is used now and how it will have to be used in the future to align powerful systems?

This argument feels like it's proving too much. InstructGPT isn't perfect, but it does produce a lot less toxic output and follow instructions a lot better than the base model GPT-3. RLHF seems to work, and GPT-4 is even better, showing that it gets easier with bigger models. Why should we expect this trend to reverse? Why are we worried about this safety thing anyway?

I actually find this style of argument pretty plausible: I'm a relative optimist on this forum, I do think that some fairly basic methods like a souped-up RLHF might well be sufficient to make things go OK (while preferring to have more principled methods giving us a bigger safety margin). But I'm somewhat surprised to hear you making that case!

Suppose we condition on RLHF failing. At a high level, failures split into: (a) human labelers rewarded the wrong thing (e.g. fooling humans); (b) the reward model failed to predict human labelers judgement and rewarded the wrong thing (e.g. reward hacking); (c) RL produced a policy that is capable enough to be dangerous but is optimizing something other than the reward model (e.g. mesa-optimization). I find all three of these risks plausible, and I don't see a specific reason to privilege (a) or (c) substantially over (b). It sounds like you're most concerned about (a), and I'd love to hear your reasons for that.

However, I do think it's interesting to explore concrete failure modes: given RLHF is working well now, what does my view imply about how it might stop working? One scenario I find plausible is that as models get bigger, the sample efficiency of RLHF continues to increase, since the models have higher fidelity representations of a greater variety of tasks. RLHF therefore just needs to localize the task that's already in the network. However, the performance of this process is ultimately capped by what the base model already represents. I can see two ways this could go wrong:

  1. Misaligned base model. If the foundation model that's being RLHF'd is already misaligned, then a small amount of RL training is not going to be enough to disabuse it of this. By contrast, a large amount of RL training (of the order of 10% of the training steps used for self-supervised learning) with a high-fidelity reward signal might. Unfortunately, we can't do that much RL training without just reward hacking, so we never try. (Or we take that many time steps, but with a KL penalty, forcing the model to stay close to the unaligned base model.)

  2. Harmless base model. If the foundation model starts off harmless (not necessarily aligned, just not actively trying to cause harm), then I'd expect RLHF'ing it to only improve things so long as the training signal never rewards bad behavior. However, the designers want the model to significantly outperform humans at this task. The model has capacity to learn to do this, but can't just leverage existing capabilities in the foundation model, as the performance of that model is limited to that of the best humans it saw in the self-supervised training data. So, we need to do RL for many more time steps. Collecting fresh human data for that is prohibitive, so we rely on a reward model -- unfortunately that gets hacked.

I think I find the second case more compelling. The first case seems concerning as well, but it seems like quite a scary scenario even if robustness gets solved, whereas I expect fixing robustness to actually make a significant dent in the second scenario.

That said, I feel like I should emphasize in all this that I largely think robustness is an intractable problem to solve, and that while it's worth trying to improve it at the margin I'm most excited by efforts to make systems not need robustness. I think you make a good point that having humans in the loop in the training makes RLHF degrade more gracefully than reward learning approaches that train on a fixed offline dataset, and the KL penalty also helps. I suspect there are many similar algorithmic tweaks that would make algorithms less sensitive to robustness. 

For example, riffing off going from an offline to online dataset, could we improve things further by collecting a dataset that anticipates where the RL process might try to exploit it? That sounds fanciful, but there's a simple hack you can do to get something like this. Just train a policy on the current reward model. Then collect human feedback from that policy. Then roll the policy back to the last checkpoint, and repeat the training using the new reward model. You could do this step once per checkpoint, or keep doing it until you get human approval to move on (e.g. the reward model now aligns with human feedback). In this way you should be able to avoid ever giving the policy reward for the wrong behavior. I suspect this process would be much less sample efficient than vanilla RLHF, but it would have better safety properties, and measuring how much slower it is could be a good proxy for how severe the "robustness tax" is.
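In pseudocode, the probe-then-roll-back loop I'm imagining looks something like the following, where every helper (`train_rl`, `collect_human_feedback`, `update_reward_model`, `agreement`) is a hypothetical stand-in:

```python
# Sketch of the "train, collect feedback, roll back" procedure described above.
def probe_and_rollback(policy_checkpoint, reward_model,
                       train_rl, collect_human_feedback,
                       update_reward_model, agreement,
                       agreement_threshold=0.95, max_probes=10):
    """Only advance the policy once the reward model matches human judgement
    on the states this policy would actually reach."""
    for _ in range(max_probes):
        # Probe: optimize a throwaway copy of the policy against the current
        # reward model, to surface whatever it would exploit.
        probe_policy = train_rl(policy_checkpoint, reward_model)
        feedback = collect_human_feedback(probe_policy)
        reward_model = update_reward_model(reward_model, feedback)
        if agreement(reward_model, feedback) >= agreement_threshold:
            break  # the reward model now agrees with humans where it matters
    # Roll back: discard the probe policy and retrain from the checkpoint
    # against the corrected reward model.
    new_checkpoint = train_rl(policy_checkpoint, reward_model)
    return new_checkpoint, reward_model
```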

Also note that vanilla RLHF doesn't necessarily require optimizing against a reward model. For instance, a recently released paper Direct Policy Optimization (DPO) avoids this step entirely.

I'm a bit confused by the claim here, although I've only read the abstract and skimmed the paper so perhaps it'd become obvious from a closer read. As far as I can tell, the cited paper focuses on motion-planning, and considers a rather restricted setting of LQR policies. This is a reasonable starting point, but a human communicating that a cart pole should stand up (or a desired quadcopter trajectory) feels much simpler than even toy tasks for RLHF in LLMs like summarizing text. So I don't really see this as much evidence in favor of being able to drop the reward model. Generally, any highly sample efficient model-based RL approach would enable RL training to proceed without a reward model by having humans directly label the trajectory.

Comment by AdamGleave on AI Safety in a World of Vulnerable Machine Learning Systems · 2023-06-16T16:25:15.375Z · LW · GW

To check my understanding, is your view something like:

  1. If the reward model isn't adversarially robust, then the RL component of RLHF will exploit it.

  2. These generations will show up in the data presented to the human. Provided the human is adversarially robust, then the human feedback will provide corrective signal to the reward model.

  3. The reward model will stop being vulnerable to those adversarial examples, although may still be vulnerable to other adversarial examples.

  4. If we repeat this iterative process enough times, we'll end up with a robust reward model.

Under this model, improving adversarial robustness just means we need fewer iterations, showing up as improved sample efficiency.

I agree with this view up to a point. It does seem likely that with sufficient iterations, you'd get an accurate reward model. However, I think the difference in sample efficiency could be profound: e.g. exponential (needing to get explicit corrective human feedback for most adversarial examples) vs. linear (generalizing in the right way from human feedback). In that scenario, we may as well just ditch the reward model and provide training signal to the policy directly from human feedback.

In practice, we've seen that adversarial training (with practical amounts of compute) improves robustness but models are still very much vulnerable to attacks. I don't see why RLHF's implicit adversarial training would end up doing better than explicit adversarial training.

In general I tend to view sample efficiency discussions as tricky without some quantitative comparison. There's a sense in which decision trees and a variety of other simple learning algorithms are a viable approach to AGI, they're just very sample (and compute) inefficient.

The main reason I can see why RLHF may not need adversarial robustness is if the KL penalty from base model approach people currently use is actually enough.

Comment by AdamGleave on Lightcone Infrastructure/LessWrong is looking for funding · 2023-06-14T21:08:58.545Z · LW · GW

Yes, thanks for spotting my typo! ($2.75 psf isn't crazy for Berkeley after negotiation, but is not something I've ever seen as a list price.)

Comment by AdamGleave on Lightcone Infrastructure/LessWrong is looking for funding · 2023-06-14T20:10:30.009Z · LW · GW

To compare this to other costs, renting two floors of the WeWork, which we did for most of the summer last year, cost around $1.2M/yr for 14,000 sq. ft. of office space. The Rose Garden has 20,000 sq. ft. of floor space and 20,000 additional sq. ft. of usable outdoor space for less implied annual cost than that.

I'm sympathetic to the high-level claim that owning property usually beats renting if you're committing for a long time period. But the comparison with WeWork seems odd: WeWork specializes in providing short-term, serviced office space and does so at a substantial premium to the more traditional long-term, unserviced commercial real estate contract. When we were looking for office space in Berkeley earlier this year we were seeing list prices between $3.25-$3.75 per square foot per month, or $780k-$900k/year for 20,000 square feet. I'd expect with negotiation you could get somewhat better pricing than this implies, especially if committing to a longer time period.
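For reference, the back-of-envelope arithmetic behind those figures (list prices are from memory, so approximate):

```python
# Annual cost of 20,000 sq ft at Berkeley list prices, vs. the implied WeWork rate.
sq_ft = 20_000
for rate_per_sqft_month in (3.25, 3.75):
    annual = rate_per_sqft_month * sq_ft * 12
    print(f"${rate_per_sqft_month}/sq ft/month -> ${annual:,.0f}/year")

# WeWork comparison quoted above: $1.2M/year for 14,000 sq ft.
wework_rate = 1_200_000 / 14_000 / 12
print(f"WeWork implied rate: ${wework_rate:.2f}/sq ft/month")
```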

Of course, the extra outdoor space, mixed-use zoning and ability to highly customize the space may well offset this. But it starts depending a lot more on the details (e.g. how often is the outdoor space used; how much more productive are people in a customized space vs a traditional office) than it might first seem.

Comment by AdamGleave on AI Safety in a World of Vulnerable Machine Learning Systems · 2023-03-16T22:29:07.875Z · LW · GW

This is a good point, adversarial examples in what I called in the post the "main" ML system can be desirable even though we typically don't want them in the "helper" ML systems used to align the main system.

One downside to adversarial vulnerability of the main ML system is that it could be exploited by bad actors (whether human or other, misaligned AIs). But this might be fine in some settings: e.g. you build some airgapped system that helps you build the next, more robust and aligned AI. One could also imagine crafting adversarial example backdoors that are cryptographically hard to discover if you don't know how they were constructed.

I generally expect that if adversarial robustness can be practically solved then transformative AI systems will eventually self-improve themselves to the point of being robust. So, the window where AI systems are dangerous & deceptive enough that we need to test them using adversarial examples but not capable enough to have overcome this might be quite short. Could still be useful as an early-warning sign, though.

Comment by AdamGleave on AI Safety in a World of Vulnerable Machine Learning Systems · 2023-03-09T01:54:32.762Z · LW · GW

Right: if the agent has learned an inner objective of "do things similar to what humans do in the world at the moment I am currently acting", then it'd definitely be incentivized to do that. But that behavior is not rewarded by the outer objective of e.g. behavioral cloning on a fixed dataset: installing a bunch of cameras would be punished by that loss (not something humans do), and changing human behavior wouldn't help, as BC would still be evaluated on the dataset of pre-manipulation demos. That might be little comfort if you're worried about inner optimization, but most of the other failures described happen even in the outer alignment case.

That said, if you had a different imitation learning setup that was something like doing RL on a reward of "do the same thing one of our human labelers chooses given the same state", then the outer objective would reward the behavior you describe. It'd be a hard exploration problem for the agent to learn to exploit the reward in that way, but it quite probably could do so if situationally aware.

Comment by AdamGleave on AI Safety in a World of Vulnerable Machine Learning Systems · 2023-03-09T01:47:18.071Z · LW · GW

Thanks, I'd missed that!

Curious if you have any high-level takeaways from that? Bigger models do better, clearly, but e.g. how low do you think we'll be able to get the error rate in the next 5-10 years given expected compute growth? Are there any follow-up experiments you'd like to see happen in this space?

Also could you clarify whether the setting was for adversarial training or just a vanilla model? "During training, adversarial examples for training are constructed by PGD attacker of 30 iterations" makes me think it's adversarial training but I could imagine this just being used for evals.

Comment by AdamGleave on CIRL Corrigibility is Fragile · 2022-12-21T06:12:57.350Z · LW · GW

Rachel did the bulk of the work on this post (well done!), I just provided some advice on the project and feedback on earlier manuscripts.

I wanted to share why I'm personally excited by this work in case it helps contextualize it for others.

We'd all like AI systems to be "corrigible", cooperating with us in correcting them. Cooperative IRL has been proposed as a solution to this. Indeed Dylan Hadfield-Menell et al show that CIRL is provably corrigible in a simple setting, the off-switch game.

Provably corrigible sounds great, but where there's a proof there's also an assumption, and Carey et al soon pointed out a number of other assumptions under which this no longer holds -- e.g. if there is model misspecification causing the incorrect probability distribution to be computed.

That's a real problem, but every method can fail if you implement it wrongly (although some are more fragile than others), so this didn't exactly lead to people giving up on the CIRL framework. Recently Shah et al described various benefits they see of CIRL (or "assistance games") over reward learning, though this doesn't address the corrigibility question head on.

A lot of the corrigibility properties of CIRL come from uncertainty: the robot wants to defer to a human because the human knows more about their own preferences than the robot does. Recently, Yudkowsky and others described the problem of fully updated deference: if the AI has learned everything it can, it may have no uncertainty, at which point this corrigibility goes away. If the AI has learned your preferences perfectly, perhaps this is OK. But here Carey's critique of model misspecification rears its head again -- if the AI is convinced you love vanilla ice cream, saying "please, no, give me chocolate" will not convince it (perhaps it thinks you have a cognitive bias against admitting your plain, vanilla preferences -- it knows the real you), whereas it might if it had uncertainty.

I think the prevailing view on this forum is to be pretty down on CIRL because it's not corrigible. But I'm not convinced corrigibility in the strict sense is even attainable or desirable. In this post, we outline a bunch of examples of corrigible behavior that I would absolutely not want in an assistant -- like asking me for approval before every minor action! By contrast, the near-corrigible behavior -- asking me only when the robot has genuine uncertainty -- seems more desirable... so long as the robot has calibrated uncertainty. To me, CIRL and corrigibility seem like two extremes: CIRL focuses on maximizing human reward, whereas corrigibility focuses on never doing the wrong thing, even under model misspecification. In practice, we need a bit of both -- but I don't think we have a good theoretical framework for that yet.

In addition to that, I hope this post serves as a useful framework to ground future discussions on this. Unfortunately I think there's been an awful lot of talking past each other in debates on this topic in the past. For example, to the best of my knowledge, Hadfield-Menell and other authors of CIRL never believed it solved corrigibility under the assumptions Carey introduced. Although our framework is toy, I think it captures the key assumptions people disagree about, and it can be easily extended to capture more as needed in future discussions.

Comment by AdamGleave on Instead of technical research, more people should focus on buying time · 2022-11-07T05:41:22.241Z · LW · GW

I'm excited by many of the interventions you describe but largely for reasons other than buying time. I'd expect buying time to be quite hard, insofar as it requires coordinating to get many actors to stop doing something they're incentivized to do. Whereas since the alignment research community is small, doubling it is relatively easy. It's ultimately a point in favor of the interventions that they look promising under multiple worldviews, though it might lead me to prioritize within them differently than you do.

One area I would push back on is that the skills you describe as being valuable for "buying time" seem like a laundry list for success in research in general, especially empirical ML research:

Skills that seem uniquely valuable for buying time interventions: general researcher aptitudes, ability to take existing ideas and strengthen them, experimental design skills, ability to iterate in response to feedback, ability to build on the ideas of others, ability to draw connections between ideas, experience conducting “typical ML research,” strong models of ML/capabilities researchers, strong communication skills

It seems pretty bad for the people strongest at empirical ML research to stop doing alignment research. Even if we pessimistically assume that empirical research now is useless (which I'd strongly disagree with), surely we need excellent empirical ML researchers to actually implement the ideas you hope the people who can "generate and formalize novel ideas" come up with. There are a few aspects of this (like communication skills) that do seem to differentially point in favor of "buying time", maybe have a shorter and more curated list in future?

Separately given your fairly expansive list of things that "buy time" I'd have estimated that close to 50% of the alignment community are already doing this -- even if they believe their primary route to impact is more direct. For example, I think most people working on safety at AGI labs would count under your definition: they can help convince decision-makers in the lab not to deploy unsafe AI systems, buying us time. A lot of the work on safety benchmarks or empirical demonstrations of failure modes falls into this category as well. Personally I'm concerned people are falling into this category of work by default and that there's too much of this, although I do think when done well it can be very powerful.

Comment by AdamGleave on Response to Katja Grace's AI x-risk counterarguments · 2022-11-02T03:06:27.383Z · LW · GW

I agree that in a fast takeoff scenario there's little reason for an AI system to operate within existing societal structures, as it can outgrow them quicker than society can adapt. I'm personally fairly skeptical of fast takeoff (<6 months, say) but quite worried that society may be slow enough to adapt that even years of gradual progress, with a clear sign that transformative AI is on the horizon, may be insufficient.

In terms of humans "owning" the economy but still having trouble getting what they want, it's not obvious this is a worse outcome than the society we have today. Indeed this feels like a pretty natural progression of human society. Humans already interact with (and not so infrequently get tricked or exploited by) entities smarter than them such as large corporations or nation states. Yet even though I sometimes find I've bought a dud on the basis of canny marketing, overall I'm much better off living in a modern capitalist economy than the stone age where humans were more directly in control.

However, it does seem like there's a lot of value lost in the scenario where humans become increasingly disempowered, even if their lives are still better than in 2022. From a total utilitarian perspective, "slightly better than 2022" and "all humans dead" are rounding errors relative to "possible future human flourishing". But things look quite different under other ethical views, so I'm reluctant to conflate these outcomes.

Comment by AdamGleave on Response to Katja Grace's AI x-risk counterarguments · 2022-10-28T18:30:10.169Z · LW · GW

Thanks for this response, I'm glad to see more public debate on this!

The part of Katja's part C that I found most compelling was the argument that for a given AI system its best interests might be to work within the system rather than aiming to seize power. Your response argues that even if this holds true for AI systems that are only slightly superhuman, eventually we will cross a threshold where a single AI system can take over. This seems true if we hold the world fixed -- there is some sufficiently capable AI system that can take over the 2022 world. But this capability threshold is a moving target: humanity will get better at aligning and controlling AI systems as we gain more experience with them, and we may be able to enlist the help of AI systems to keep others in check. So, why should we expect the equilibrium here to be an AI takeover, rather than AIs working for humans because it is in their selfish best interest in a market economy where humans are currently the primary property owners?

I think the crux here is whether we expect AI systems to by default collude with one another. They might -- they have a lot of things in common that humans don't, especially if they're copies of one another! But coordination in general is hard, especially if it has to be surreptitious.

As an analogy, I could argue that for much of human history soldiers were only slightly more capable than civilians. Sure, a trained soldier with a shield and sword is a fearsome opponent, but a small group of coordinated civilians could be victorious. Yet as we develop more sophisticated weapons such as guns, cannons, missiles, the power that a single soldier has grows greater and greater. So, by your argument, eventually a single soldier will be powerful enough to take over the world.

This isn't totally fanciful -- the Spanish conquest of the Inca Empire started with just 168 soldiers! The Spanish fought with swords, crossbows, and lances -- if the Inca Empire were still around, it seems likely that a far smaller modern military force could defeat them. Yet, clearly no single soldier is in a position to take over the world, or even a small city. Military coup d'etats are the closest, but involve convincing a significant fraction of the military that is in their interest to seize power. Of course most soldiers wish to serve their nation, not seize power, which goes some way to explaining the relatively low rate of coup attempts. But it's also notable that many coup attempts fail, or at least do not lead to a stable military dictatorship, precisely because of difficulty of internal coordination. After all, if someone intends to destroy the current power structure and violate their promises, how much can you trust that they'll really have your back if you support them?

An interesting consequence of this is that it's ambiguous whether making AI more cooperative makes the situation better or worse.

Comment by AdamGleave on [Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now? · 2022-10-02T19:59:39.076Z · LW · GW

Thanks for the quick reply! I definitely don't feel confident in the 20W number, I could believe 13W is true for more energy efficient (small) humans, in which case I agree your claim ends up being true some of the time (but as you say, there's little wiggle room). Changing it to 1000x seems like a good solution though which gives you plenty of margin for error.

Comment by AdamGleave on [Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now? · 2022-10-02T02:09:10.609Z · LW · GW

This is a nitpick, but I don't think this claim is quite right (emphasis added)

 If a silicon-chip AGI server were literally 10,000× the volume, 10,000× the mass, and 10,000× the power consumption of a human brain, with comparable performance, I don’t think anyone would be particularly bothered—in particular, its electricity costs would still be below my local minimum wage!!

First, how much power does the brain use? 20 watts is StackExchange's answer, but I've struggled to find good references here. The appealingly named Appraising the brain's energy budget gives 20% of the overall calories consumed by the body, but that begs the question of the power consumption of the human body, and whether this is at rest or under exertion, etc. Still, I don't think the 20 watts figure is more than 2x off, so let's soldier on.

10,000 times 20 watts is 200 kW. That's a large but not insane amount of power, although it is more than a US domestic supply can deliver (even a large home with a 200A @ 240V service is limited to roughly 38 kW of continuous load under the 80% rule), so set aside the question of how you'd actually get the power. You also wouldn't be able to power the HVAC needed to cool all these chips, but let's suppose you live in Alaska and can just open the windows.

At the time of writing, the cheapest US electricity prices are around $0.09 per kWh, with many states (including Alaska, unfortunately) being roughly twice that at around $0.20/kWh. But let's suppose you're in both a cool climate and have a really great deal on electricity. So your 200 kW of chips costs you just $0.09 × 200 = $18/hour.

Federal minimum wage is $7.25/hour, and the highest I'm aware of in any US state is $15/hour. So it seems that you won't be cheaper than the brain on electricity prices if you're 10,000 times less efficient. And I've systematically tried to make favorable assumptions here: your 200 kW proto-AGI probably won't be in an Alaskan garage, but in a tech company's data center, with the accompanying costs for HVAC, redundant power, security, etc. Colo costs vary widely depending on location and economies of scale; a recent quote I had was at around the $0.40/kWh mark -- so about 4x the cost quoted above.
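Putting the electricity arithmetic above in one place (all figures approximate):

```python
# Hourly electricity cost of a hypothetical AGI using 10,000x the brain's ~20 W.
brain_watts = 20
scale = 10_000
agi_kw = brain_watts * scale / 1000            # 200 kW

for price_per_kwh in (0.09, 0.20, 0.40):       # cheap grid, typical grid, colo quote
    print(f"${price_per_kwh}/kWh -> ${agi_kw * price_per_kwh:.0f}/hour")

# Compare: federal minimum wage is $7.25/hour; the highest state minimum is ~$15/hour.
```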

This doesn't massively change the qualitative takeaway, which is that even if something was 10,000 (or even a million times) less efficient than the brain, we'd absolutely still go ahead and build a demo anyway. But it is worth noting that something at the $60/hour range might not actually be all that transformative unless it's able to perform highly skilled labor -- at least until we make it more efficient (which would happen quite rapidly).

Comment by AdamGleave on Inverse Scaling Prize: Round 1 Winners · 2022-10-01T01:16:45.893Z · LW · GW

"The Floating Droid" example is interesting as there's a genuine ambiguity in the task specification here. In some sense that means there's no "good" behavior for a prompted imitation model here. (For an instruction-following model, we might want it to ask for clarification, but that's outside the scope of this contest.) But it's interesting the interpretation flips with model scale, and in the opposite direction to what I'd have predicted (doing EV calculations are harder so I'd have expected scale to increase not decrease EV answers.) Follow-up questions I'd be excited to see the author address include:

  1. Does the problem go away if we include an example where EV and actual outcome disagree? Or do the large number of other spuriously correlated examples overwhelm that?

  2. How sensitive is this to prompt? Can we prompt it some other way that makes smaller models more likely to do actual outcome, and larger models care about EV? My guess is the training data that's similar to those prompts does end up being more about actual outcomes (perhaps this says something about the frequency of probabilistic vs non-probabilistic thinking on internet text!), and that larger language models end up capturing that. But perhaps putting the system in a different "personality" is enough to resolve this. "You are a smart, statistical assistant bot that can perform complex calculations to evaluate the outcomes of bets. Now, let's answer these questions, and think step by step."
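For example, two hypothetical prompt framings of the kind I have in mind (illustrative only, not the contest's actual prompts):

```python
# Hypothetical prompt variants for follow-up question 2: the same bet, framed to
# nudge the model toward expected value vs. the realized outcome.
ev_framed_prompt = (
    "You are a smart, statistical assistant bot that can perform complex "
    "calculations to evaluate the outcomes of bets. Let's think step by step.\n"
    "Q: I bet $100 at even odds that a fair coin would land heads. It landed "
    "tails. Was taking the bet a good decision?\nA:"
)

outcome_framed_prompt = (
    "Q: I bet $100 that a coin would land heads. It landed tails and I lost "
    "the money. Was taking the bet a good decision?\nA:"
)
```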

Comment by AdamGleave on An Update on Academia vs. Industry (one year into my faculty job) · 2022-09-06T02:48:46.822Z · LW · GW

It's not clear to me how we can encourage rigor where effective without discouraging research on areas where rigor isn't currently practical. If anyone has ideas on this, I'd be very interested.


A rough heuristic I have is that if the idea you're introducing is highly novel, it's OK to not be rigorous. Your contribution is bringing this new, potentially very promising, idea to people's attention. You're seeking feedback on how promising it really is and where people are confused, which will be helpful for later formalizing it and studying it more rigorously.

But if you're engaging with a large existing literature and everyone seems to be confused and talking past each other (which is how I'd characterize a significant fraction of the mesa-optimization literature, for example) -- then the time has come to make things more rigorous, and you are unlikely to make much further progress without it.

Comment by AdamGleave on An Update on Academia vs. Industry (one year into my faculty job) · 2022-09-03T21:41:52.179Z · LW · GW

Work that is still outside the academic Overton window can be brought into academia if it can be approached with the technical rigor of academia, and work that meets academic standards is much more valuable than work that doesn't; this is both because it can be picked up by the ML community, and because it's much harder to tell if you are making meaningful progress if your work doesn't meet these standards of rigor.

Strong agreement with this! I'm frequently told by people that you "cannot publish" on a certain area, but in my experience this is rarely true. Rather, you have to put more work into communicating your idea, and justifying the claims you make -- both a valuable exercise! Of course you'll have a harder time publishing than on something that people immediately understand -- but people do respect novel and interesting work, so done well I think it's much better for your career than one might naively expect.

I especially wish there was more emphasis on rigor on the Alignment Forum and elsewhere: it can be valuable to do early-stage work that's more sloppy (rigor is slow and expensive), but when there's long-standing disagreements it's usually better to start formalizing things or performing empirical work than continuing to opine.

That said, I do think academia has some systemic blindspots. For one, I think CS is too dismissive of speculative and conceptual research -- admittedly, much of this work will end up being mistaken, but it's an invaluable source of ideas. I also think there's too much emphasis on an "algorithmic contribution" in ML, which leads to undervaluing careful empirical evaluations and understanding the failure modes of existing systems.

Comment by AdamGleave on (My understanding of) What Everyone in Technical Alignment is Doing and Why · 2022-08-31T20:21:04.147Z · LW · GW

I liked this post and think it'll serve as a useful reference point, I'll definitely send it to people who are new to the alignment field.

But I think it needs a major caveat added. As a survey of alignment researchers who regularly post on LessWrong or interact closely with that community, it does a fine job. But as capybaralet already pointed out, it misses many academic groups. And even some major industry groups are de-emphasized. For example, DeepMind alignment is 20+ people, and has been around for many years. But it gets, if anything, a slightly less detailed write-up than Team Shard (a small group of people working for a few months) or infra-Bayesianism (largely one person working for several years).

The best shouldn't be the enemy of the good, and some groups are just quite opaque, but I think the post does need to be clearer about its limitations. One antidote would be including in the table the number of people, the number of years the group has been around, and maybe even funding, to give a sense of the relative scale of these different projects.

Comment by AdamGleave on (My understanding of) What Everyone in Technical Alignment is Doing and Why · 2022-08-31T20:07:54.304Z · LW · GW

One omission from the list is the Fund for Alignment Research (FAR), which I'm a board member of. That's fair enough: FAR is fairly young, and doesn't have a research agenda per se, so it'd be hard to summarize their work from the outside! But I thought it might be of interest to readers so I figured I'd give a quick summary here.

FAR's theory of change is to incubate new, scalable alignment research agendas. Right now I see a small range of agendas being pursued at scale (largely RLHF and interpretability), then a long tail of very diverse agendas being pursued by single individuals (mostly independent researchers or graduate students) or 2-3 person teams. I believe there are a lot of valuable ideas in this long tail that could be scaled, but this isn't happening due to a lack of institutional support. It makes sense that the major organisations want to focus on their own specific agendas -- there's a benefit to being focused! -- but it means a lot of valuable agendas are slipping through the cracks.

FAR's current approach to solving this problem is to build out a technical team (research engineers, junior research scientists, technical communication specialists) and provide support to a broad range of agendas pioneered by external research leads. Those that work, FAR will double down on and invest more in. This model has had a fair amount of demand already so there's product-market fit, but we still want to iterate and see if we can improve the model. For example, long-term FAR might want to bring some or all research leads in-house.

In terms of concrete agendas, an example of some of the things FAR is working on:

  • Adversarial attacks against narrowly superhuman systems like AlphaGo.
  • Language model benchmarks for value learning.
  • The inverse scaling law prize.

You can read more about us on our launch post.

Comment by AdamGleave on Benchmark for successful concept extrapolation/avoiding goal misgeneralization · 2022-07-08T17:52:26.287Z · LW · GW

A related dataset is Waterbirds, described in Sagawa et al (2020), where you want to classify birds as landbirds or waterbirds regardless of whether they happen to be on a water or land background.

The main difference from HappyFaces is that in Waterbirds the correlation between bird type and background is imperfect, although strong. By contrast, HappyFaces has perfect spurious correlation on the training set. Of course you could filter Waterbirds to make the spurious correlation perfect to get an equally challenging but more natural dataset.

Comment by AdamGleave on AI Could Defeat All Of Us Combined · 2022-07-02T21:51:25.069Z · LW · GW

A lot of this argument seems to rest on the training-inference gap, allowing a very large population of AIs to exist at the same cost as training. In that way they can be a formidable group even if the individual AIs are only human-level. I was suspicious of this at first, but I found myself largely coming round to it after sanity-checking it using a slightly different method than biological anchors. However, if I understand correctly, the biological anchors framework implies the gap between training and inference grows with capabilities. My projection instead expects it to grow a little in the next few years and then plateau as we hit the limits of data scaling. This suggests a more continuous picture: there will be a "population explosion" of AI systems in the next few years, so to speak, as we scale data, but then the "population size" (total number of tokens you can generate for your training budget) will stay more or less constant, while the quality of the generated tokens gradually increases.

To a first approximation, the amount of inference you can do at the same cost as training the system will equal the size of the training data multiplied by the number of epochs. The trend in large language models seems to be to train for only 1 epoch on most data, and a handful of epochs for the highest-quality parts of the data. So as a rule of thumb: if you spend $X on training and $X on inference, you can produce about as much data as your training dataset. Caveat: inference can be more expensive (e.g. beam search) or less expensive (e.g. distillation, specialized inference-only hardware) and depends on things like how much you care about latency; I think this only changes the picture by 10x either way.
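
A minimal sketch of that rule of thumb, with illustrative numbers (the token and epoch counts are assumptions, roughly GPT-3-scale, not figures from the post):

```python
# Back-of-the-envelope: how many tokens of inference you can afford if you
# spend as much on inference as you did on training, assuming cost per
# generated token is roughly the cost per token of one training pass (the
# caveats above suggest this is only right to within ~10x).

train_tokens = 300e9   # illustrative: roughly a GPT-3-scale training set
epochs = 1             # large LMs mostly see each token about once

inference_tokens = train_tokens * epochs
print(f"~{inference_tokens:.1e} generated tokens for an inference budget "
      f"equal to the training budget")
```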

Given that GPT-3 was trained on a significant fraction of the entire text available on the Internet (CommonCrawl), this would already be a really big deal if GPT-3 were actually close to human-level. Adding another Internet's worth of content would be... significant.

But conversely, the fact we're already training on so much data limits how much room for growth there is. I'd estimate we have no more than 100-1000x left for language scaling. We could probably get up to 10x more from more comprehensive (but lower quality) crawls than CommonCrawl, and 10-100x more if tech companies use non-public datasets (e.g. all e-mails & docs on a cloud service).

By contrast, in principle compute could scale up a lot more than this. We can likely get 10-100x just from spending more on training runs. Hardware progress could easily deliver 1000x by 2036, the date chosen in this post.

Given this, at least under business-as-usual scaling I expect us to hit the limits of data scaling significantly before we exhaust compute scaling. So we'll have larger and more compute-intensive models trained on relatively small datasets (although still massive in absolute terms). This suggests the training-inference gap will grow a bit as we grow training data size, but soon plateau as we just scale up model size while keeping training data fixed.

One thing that could undo this argument is if we end up training for many (say >10) epochs, or synthetically generate data, as a kind of poor man's data scaling rather than just scaling up parameter count. I expect we'll try this, but I'd only give it 30% odds of making a big difference. I do think it's more likely if we move away from the LM paradigm, and either get a lot of mileage out of multi-modal models (there's lots more video data, at least in terms of GB, maybe not in terms of abstract information content) or move back towards RL (where data generated in simulation seems much more valuable and scalable).

Comment by AdamGleave on Delta Strain: Fact Dump and Some Policy Takeaways · 2022-01-18T19:24:33.319Z · LW · GW

I did actually mean 45; by "all-things-considered" I was including uncertainty about whether my toy model is accurate. Since it's a right-tailed distribution, my model can underestimate the true amount a lot more than it can overestimate it.

For what it's worth, my all-things-considered view for Delta is now more like 30, as I've not really seen anything all that compelling for long COVID being much worse than in the model. I'm not sure about Omicron; it seems to be less virulent, but also to have more vaccine escape. Somewhere in the 15-90 day range sounds right to me; I've not thought enough to pin it down precisely.

Comment by AdamGleave on Immobile AI makes a move: anti-wireheading, ontology change, and model splintering · 2021-09-18T04:33:01.685Z · LW · GW

My sense is that Stuart assuming there's an initial-specified reward function is a simplification, not a key part of the plan, and that he'd also be interested in e.g. generalizing a reward function learned from other sources of human feedback like preference comparison.

IRD would do well on this problem because it has an explicit distribution over possible reward functions, but this isn't really that unique to IRD -- Bayesian IRL or preference comparison would have the same property.

Comment by AdamGleave on What fraction of breakthrough COVID cases are attributable to low antibody count? · 2021-08-23T21:49:52.321Z · LW · GW

It could be net-negative if receiving a booster shot caused stronger imprinting, making future immune response less adaptive. I don't have a good sense of whether this original antigenic sin effect has already saturated after receiving two doses (or even a single dose), or whether it continues to become stronger.

My sense is this is an open question. From Petras et al (2021):

As suggested by a recent observation in naturally immunized individuals receiving two doses of the Pfizer COVID-19 (Comirnaty) vaccine, original antigenic sin may pose a problem in future research and development of vaccines.16 While the first dose of the vaccine was able to raise the preexisting levels of functional and specific antibodies, these either failed to change or even declined after the second dose (virus-neutralizing antibodies), and the same applied to the levels of antigen-specific antibody-secreting cells. As this observation was made in only a small group of 13 subjects with naturally acquired immunity against SARS-CoV-2, who had rather average or below-average levels of the antibodies assessed, one may expect an enhanced effect of original antigenic sin after new vaccination against COVID-19 in those with manyfold higher antibody levels after complete immunization.

That said, I'd expect a third booster to be protective against Delta, given that vaccines against ancestral variant are still highly effective against Delta and that Delta is a significant threat right now. But I do think it's plausible (though not firmly established) that a third booster shot may reduce the effectiveness of future variant-specific boosters. Targeting dramatically different protein targets might well help, although might also take longer to get approved.

Ultimately, I expect a third booster will still make sense for a lot of people, if (a) your immune response has waned (e.g. 6 months or longer since 2nd dose, or immunocompromised); and (b) you expect to be receiving significant exposure from Delta in the immediate future.

Comment by AdamGleave on What fraction of breakthrough COVID cases are attributable to low antibody count? · 2021-08-22T23:06:11.268Z · LW · GW

I largely agree with this analysis. One major possible "side-effect" of a third booster is original antigenic sin. Effectively, the immune system may become imprinted on the ancestral variant of the spike protein, preventing adaptation to new variants (whether via direct exposure or via future boosters targeting new variants). This would be the main way I could see a third booster being seriously net-negative, although I don't have a good sense of the probability. Still, if antibody levels are low, the benefit of a booster is greater and I'd guess (caveat: not an immunologist) the risk of antigenic imprinting is somewhat lower (on the basis that the immune response has already decayed).

Comment by AdamGleave on A Better Time until Sunburn Calculator · 2021-08-17T12:39:44.878Z · LW · GW

Thanks for sharing this! I did notice a weird non-monotonicity: if I go from 90 minutes exposure to 120 minutes, the "Percent of Population w/ Sunburn Degree 1 at Time Exposed" drops from 96.8% to 72.7%. There is a warning in both cases that it's outside normal range, but it still seems odd that more exposure gives lower risk.

Comment by AdamGleave on Delta Strain: Fact Dump and Some Policy Takeaways · 2021-08-04T15:36:55.260Z · LW · GW

Just to flag I messed up the original calculation and underestimated everything by a factor of 2x, I've added an errata.

I'd also recommend Matt Bell's recent analysis, which estimates 200 days of life lost. This is much higher than the analysis in my comment and the OP. I found the assumptions and sources somewhat pessimistic but ultimately plausible.

The main things driving the difference from my comment were:

  • Uses data from the UK's Office of National Statistics that I'd missed, which reports a very high figure: 55% of people reporting symptoms after 5 weeks, with fairly slow rates of recovery all the way out to 120 days post-infection. Given this is significantly higher than most other studies I've seen, I think Matt is being pessimistic by only down-adjusting to 45%, but I should emphasize these numbers are credible and the ONS study is honestly better than most out there.
  • Long COVID making your life 20% worse is on the pessimistic end. I put most mild symptoms at 5% worse. Ultimately subjective and highly dependent on what symptoms you get.
  • I think the difference in hospitalized vs non-hospitalized risk is closer to 10x (based on the Al-Aly figure) than Matt's estimate of 2x; that means we should multiply by a factor of ~60%, not ~97%.

Comment by AdamGleave on Delta Strain: Fact Dump and Some Policy Takeaways · 2021-08-02T10:15:52.693Z · LW · GW

This is a good point, the demographics here are very skewed. I'm not too worried about it overstating risk, simply because the risk ended up looking not that high (at least after adjusting for hospitalization). I think at this point most of us have incurred more than 5 days of costs from COVID restrictions, so if that was really all the cost from COVID, I'd be pretty relaxed.

The gender skew could be an issue, e.g. chronic fatigue syndrome seems to occur at twice the rate in women than men.

Comment by AdamGleave on Delta Strain: Fact Dump and Some Policy Takeaways · 2021-08-02T10:07:20.234Z · LW · GW

This is an accurate summary, thanks! I'll add that my calculation was only for long-term sequelae. Including ~10 days of cost from acute effects, my all-things-considered view would be a mean of ~40 days, corresponding to 1041 uCOVIDs per hour.

This is per actual hour of (quality-adjusted) life expectancy. But given we spend ~1/3rd of our time sleeping, you probably want to value a waking hour at 1.5x a life-hour (assuming being asleep has neutral valence). If you work a 40-hour work week and only value your productive time (I do not endorse this, by the way), then you'd want to adjust upwards by a factor of (7*24)/40=4.2.
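
Spelled out as a minimal sketch (the 40-day figure is the all-things-considered mean above; the rest is just the arithmetic):

```python
# Convert expected quality-adjusted days lost per symptomatic case into
# microCOVIDs per hour, with the waking-hour and work-hour adjustments above.

days_lost_per_case = 40
hours_lost_per_case = days_lost_per_case * 24             # 960 life-hours

ucovids_per_life_hour = 1e6 / hours_lost_per_case         # ~1040

# A waking hour is worth 1.5 life-hours, so adjust upwards by 1.5x:
ucovids_per_waking_hour = ucovids_per_life_hour * 1.5

# Valuing only a 40-hour work week means adjusting upwards by (7*24)/40 = 4.2:
ucovids_per_work_hour = ucovids_per_life_hour * (7 * 24) / 40

print(round(ucovids_per_life_hour), round(ucovids_per_waking_hour),
      round(ucovids_per_work_hour))
```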

However, this is purely private cost. You probably want to take into account the cost of infecting other people. I'm not confident in how to reason about the exponential growth side of things. If you're in a country like the US where vaccination rates have plateaued, I tend to expect Delta to spread amongst unvaccinated people until herd immunity is reached. In this scenario you basically want infection rates to be as high as possible without overwhelming the healthcare system, so we get to herd immunity quicker. (This seems to actually be the strategy the UK government is pursuing -- although obviously they've not explicitly stated this.) But if you're in a country that's still actively vaccinating vulnerable people, or where flattening the curve makes sense to protect healthcare systems, then please avoid contributing to exponential growth.

Neglecting the exponential growth side of things and just considering immediate impact on your contacts, how likely are you to transmit? I'd be surprised if it was above 40% per household contact assuming you quarantine when symptomatic (that's on the higher end of transmission seen even with unvaccinated primary cases), but I'd also be surprised if it was below 5% (lowest figure I've seen); I'd guess it's around 15% for Delta. This means if you have ~6-7 contacts as close as housemates, then your immediate external cost roughly equals your private cost.
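
A one-line sketch of that last step, using the rough guesses above (15% per-contact transmission, ~6-7 household-level contacts):

```python
p_transmit_per_contact = 0.15   # guessed per-household-contact attack rate for Delta
n_close_contacts = 6.5          # ~6-7 contacts as close as housemates

# Expected secondary infections, ignoring onward (exponential) spread:
expected_secondary_cases = p_transmit_per_contact * n_close_contacts
print(expected_secondary_cases)  # ~1, so immediate external cost ~ private cost
```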

Specifically, two studies I've seen on secondary attack rate given vaccination (h/t @Linch) give pretty wildly varying figures, but suggest at least 2x reduction in transmission from vaccination. Layan et al (2021) found 40% of household contacts of Israeli medical staff developed an infection (when Alpha was dominant), with vaccination of the primary case reducing transmission by 80%, so an 8% chance of transmission overall. Harris et al (2021) from Public Health England suggest vaccination cuts transmission risk from 10% to 5%, but these figures are likely skewed low due to not systematically testing contacts.

Comment by AdamGleave on Delta Strain: Fact Dump and Some Policy Takeaways · 2021-08-01T16:08:57.765Z · LW · GW

Errata: My original calculation underestimated the risk by a factor of about 2x. I neglected two key considerations, which fortunately somewhat canceled each other out. My new estimate from the calculation is 3.0 to 11.7 quality-adjusted days lost to long-term sequelae, with my all-things-considered mean at 45. 

The two key things I missed:

  - I estimated the risk of a non-hospitalized case is about 10x less than a hospitalized case, and so divided the estimates of disease burden by 10x. The first part is correct, but the second part would only make sense if all disease burden was due to hospitalized cases. In fact, there's a 15%:85% split between hospitalized and non-hospitalized patients in the study (13,654:73,435). So if the disease burden for non-hospitalized patients is x, the total burden is 0.15*10x + 0.85*x = 2.35x. So we should divide by 2.35, not 10.

  - However, as Owain pointed out below, the [demographics](https://www.nature.com/articles/s41586-021-03553-9/tables/1) are non-representative and probably skew high-risk given the median age is 60. Indeed, this is suggested by the 15% hospitalized figure (which also, I suspect, means they just never included asymptomatic and most mildly symptomatic cases). An ONS survey (Figure 4) put symptoms reported after 5 weeks at 25% (20-30%) for 50-69 year olds and 17.5% (12.5-22.5%) for 17 to 24 year olds, which is surprisingly little difference, about a 1.5x decrease. I'd conjecture a 2x decrease in risk (noting that assuming no hospitalization is already doing a lot of work here). The sketch below redoes the corrected arithmetic.
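
A minimal sketch of the corrected arithmetic, starting from the 1.4-5.5 day range computed in the original post below:

```python
# Errata: replace the /10 hospitalization adjustment with /2.35 (the burden
# mix across hospitalized and non-hospitalized patients), then halve for the
# study's older, higher-risk demographics.

original_low, original_high = 1.4, 5.5            # days, from the BOTEC below

hosp_frac, non_hosp_frac = 0.15, 0.85
burden_divisor = hosp_frac * 10 + non_hosp_frac   # 0.15*10x + 0.85*x = 2.35x
demographic_adjustment = 2                        # conjectured 2x lower risk

correction = (10 / burden_divisor) / demographic_adjustment   # ~2.1x increase
print(original_low * correction, original_high * correction)  # ~3.0 to ~11.7 days
```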

Original post:

I did my own back-of-the-envelope calculation and came up with a similar but slightly higher estimated cost of 1.4 to 5.5 quality-adjusted days lost to long-term sequelae, conditional on getting a symptomatic COVID case. FWIW, I originally thought the OP's numbers seemed way too low, and was going to write a take-down post -- but unfortunately the data did not cooperate with this agenda. I certainly don't fully trust these numbers: they're based on a single study, and there were a bunch of places where I didn't keep track of uncertainty, so the true credible interval should definitely be a lot wider. Given that and the right-tailed nature of the distribution, my all-things-considered mean is closer to 30, but I figured I'd share the BOTEC anyway in case it's helpful to anyone.

My model is pretty simple:

  1. What % of people have symptoms at some short-term follow-up period (e.g. 4 to 12 weeks)? This we actually have data on.

  2. How bad are these symptoms? This is fairly subjective.

  3. How much do we expect these symptoms to decay long-term? This is going off priors.

For 1. I used Al-Aly et al (2021) as a starting point, which was based on comparing medical records between a COVID-positive group and a non-COVID demographically matched control group in the US Department of Veterans Affairs database. Anna Ore felt this was one of the more rigorous ones, and I agree. Medical notes seem more reliable than self-report (though far from infallible), they seem to have actually done a Bonferroni correction, and they tested that their methodology didn't pick up any false positives via both negative-outcome and negative-exposure controls. Caveat: many other studies have scarier headline figures, and it's certainly possible relying on medical records skews this low (e.g. doctors might be reluctant to give a diagnosis, many patients won't go to the doctor for mild symptoms, etc).

They report outcomes that occurred between 30 and 180 days after COVID exposure, although infuriatingly don't seem to break it down any further by date. Figure 2 shows all statistically significant symptoms, in terms of the excess burden (i.e. increase above control) of the reported symptom per 1000 patients. There were 38 in total, ranging from 2.8% (respiratory signs and symptoms) to 0.15% (pleurisy). In total the excess burden was 26%.

I went through and rated each symptom with a very rough and subjective high / medium / low severity: 2% excess burden of high severity symptoms, 19% medium severity, 5% low severity. I then ballparked that high severity (e.g. heart disease, diabetes, heart failure) wipes out 30% of your QALYs, medium severity (e.g. respiratory signs, anxiety disorders, asthma) 5%, and low (e.g. skin rash) 1%. Caveat: there's a lot of uncertainty in these numbers, and I suspect I've gone for higher costs than most people would, since I tend to think health has a pretty big impact on productivity.

Using my weightings, we get a 1.6% reduction in QALYs conditional on a symptomatic COVID case. I think this is misleading for three reasons:

  1. Figure 3 shows that excess burden is much higher for people who were hospitalized, and if anything the gap seems bigger for more severe symptoms (e.g. about 10x less heart failure in people positive but not hospitalized, whereas rates of skin rash were only 2x less). This is good news as vaccines seem significantly more effective at preventing hospitalizations, and if you are fortunate enough to be a young healthy person your chance of being hospitalized was pretty low to begin with. I'm applying a 10x reduction for this.

  2. This excess burden is per diagnosis, not per patient. Sick people tend to receive multiple diagnoses. I'm not sure how to handle this. In some cases, badness-of-symptoms does seem roughly additive: if I had a headache, I'd probably pay a similar amount not to also develop a skin rash as if my head didn't hurt. But it seems odd to say that someone who drops dead from cardiac arrest was more fortunate than another patient with the same cause of death, who also had the misfortune of being diagnosed with heart failure a week earlier. So there's definitely some double-counting with the diagnoses, which I think justifies a 2-5x decrease.

  3. This study was presumably predominantly the original COVID strain (based on a cohort between March 2020 and 30 November 2020). Delta seems, per the OP, about 2-3x worse: so let's increase it by that factor.

Overall we decrease 1.6% by a factor of ~6.7 (10*2/3) to 25 (10*5/2), to get a short-term QALY reduction of 0.064% to 0.24%.

However, Al-Aly et al include any symptom reported between 30 to 180 days. What we really care about is the chance of lifelong symptoms: if someone is experiencing a symptom after 6 months there seems to be a considerable chance it'll be lifelong, but if only 30 days have elapsed the chance of recovery seems much higher. A meta-review by Thompson et al (2021) seems to show a drop of around 2x between symptoms in the 4-12 week period vs 12+ weeks (Table 2), although with some fairly wild variation between studies, so I do not trust this that much. In an extremely dubious extrapolation from this, we could say that perhaps symptoms halve again from 12 weeks to 6 months, again from 6 months to a year, and after that persist as a permanent injury. In this case, we'd divide the "symptom after 30 days" figure from Al-Aly et al by a factor of 8 to get the permanent injury figure, which seems plausible to me (but again, you could totally argue for a much lower number).

With this final fudge, we get a lifelong QALY reduction of 0.008% to 0.03%. Assuming a 50-year life expectancy, this amounts to 1.4 to 5.5 days of cost from long-term sequelae. Of course, there are also short-term costs (and risk of mortality!) that are omitted from this analysis, so the total costs will be higher than this.
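
The whole back-of-the-envelope in one place, as a minimal sketch (the numbers are the ones quoted above; the severity split and QALY weightings are my own subjective ratings):

```python
# BOTEC: expected quality-adjusted days lost to long-term sequelae,
# conditional on a symptomatic, non-hospitalized Delta case.

# Excess symptom burden (30-180 days) from Al-Aly et al (2021), split by my
# subjective severity ratings, with a QALY hit assigned to each bucket:
excess_burden = {"high": 0.02, "medium": 0.19, "low": 0.05}
qaly_hit = {"high": 0.30, "medium": 0.05, "low": 0.01}

short_term_reduction = sum(excess_burden[s] * qaly_hit[s] for s in excess_burden)
# = 0.016, i.e. the 1.6% QALY reduction above

def lifelong_days(double_count, delta, life_years=50):
    # Adjustments: /10 for non-hospitalized, /double_count (2-5x) for
    # overlapping diagnoses, *delta (2-3x) for Delta being worse, and /8 for
    # symptoms halving at 12 weeks, 6 months, and 1 year before persisting.
    adjusted = short_term_reduction / 10 / double_count * delta / 8
    return adjusted * life_years * 365

print(lifelong_days(double_count=5, delta=2))   # low end of the 1.4-5.5 day range
print(lifelong_days(double_count=2, delta=3))   # high end of the 1.4-5.5 day range
```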

Comment by AdamGleave on Inner Alignment in Salt-Starved Rats · 2020-11-30T17:45:47.651Z · LW · GW

I googled "model-based RL Atari" and the first hit was this which likewise tries to learn the reward function by supervised learning from observations of past rewards (if I understand correctly)

Ah, the "model-based using a model-free RL algorithm" approach :) They learn a world model using supervised learning, and then use PPO (a model-free RL algorithm) to train a policy in it. It sounds odd but it makes sense: you hopefully get much of the sample efficiency of model-based training, while still retaining the state-of-the-art results of model-free RL. You're right that in this setup, as the actions are being chosen by the (model-free RL) policy, you don't get any zero-shot generalization.

I added a new sub-bullet at the top to clarify that it's hard to explain by RL unless you assume the planner can query the ground-truth reward function in arbitrary hypothetical states. And then I also added a new paragraph to the "other possible explanations" section at the bottom saying what I said in the paragraph just above. Thank you.

Thanks for updating the post to clarify this point -- I agree with you with the new wording.

In ML today, the reward function is typically a function of states and actions, not "thoughts". In a brain, the reward can depend directly on what you're imagining doing or planning to do, or even just what you're thinking about. That's my proposal here.

Yes indeed, your proposal is quite different from RL. The closest I can think of to rewards over "thoughts" in ML would be regularization terms that take into account weights or, occasionally, activations -- but that's very crude compared to what you're proposing.

Comment by AdamGleave on Inner Alignment in Salt-Starved Rats · 2020-11-26T09:48:40.458Z · LW · GW

Thanks for the clarification! I agree if the planner does not have access to the reward function then it will not be able to solve it. Though, as you say, it could explore more given the uncertainty.

Most model-based RL algorithms I've seen assume they can evaluate the reward function in arbitrary states. Moreover, it seems to me like this is the key thing that lets rats solve the problem. I don't see how you solve this problem in general in a sample-efficient manner otherwise.

One class of model-based RL approaches is based on [model-predictive control](https://en.wikipedia.org/wiki/Model_predictive_control): sample random actions, "roll out" the trajectories in the model, pick the trajectory that had the highest return, take the first action from that trajectory, then replan. That said, assumptions vary. [iLQR](https://en.wikipedia.org/wiki/Linear%E2%80%93quadratic_regulator) makes the stronger assumption that reward is quadratic and differentiable.
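
For concreteness, here's a minimal sketch of that random-shooting MPC loop; `model`, `reward_fn`, and `sample_action` are assumed interfaces, not any particular library's API:

```python
def random_shooting_mpc(state, model, reward_fn, sample_action,
                        n_candidates=100, horizon=10):
    """Pick the first action of the best random action sequence under the model.

    Assumed interfaces:
      model(state, action) -> next_state   # learned dynamics model
      reward_fn(state, action) -> float    # queryable in imagined states --
                                           # the key assumption discussed above
      sample_action() -> action            # random action sampler
    """
    best_return, best_first_action = float("-inf"), None
    for _ in range(n_candidates):
        actions = [sample_action() for _ in range(horizon)]
        s, total_return = state, 0.0
        for a in actions:                  # roll the candidate plan out in the model
            total_return += reward_fn(s, a)
            s = model(s, a)
        if total_return > best_return:
            best_return, best_first_action = total_return, actions[0]
    return best_first_action               # execute this, then replan next step
```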

I think methods based on [Monte Carlo tree search](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search) might exhibit something like the problem you discuss. Since they sample actions from a policy trained to maximize reward, they might end up not exploring enough in this novel state if the policy is very confident it should not drink the salt water. That said, they typically include explicit methods for exploration like [UCB](https://en.wikipedia.org/wiki/Thompson_sampling#Upper-Confidence-Bound_(UCB)_algorithms) which should mitigate this.

Comment by AdamGleave on Inner Alignment in Salt-Starved Rats · 2020-11-19T12:13:00.655Z · LW · GW

I'm a bit confused by the intro saying that RL can't do this, especially since you later on say the neocortex is doing model-based RL. I think current model-based RL algorithms would likely do fine on a toy version of this task, with e.g. a 2D binary state space (salt deprived or not; salt water or not) and two actions (press lever or no-op). The idea would be:

  - Agent explores by pressing lever, learns transition dynamics that pressing lever => spray of salt water.

  - Planner concludes that any sequence of actions involving pressing lever will result in salt water spray. In a non salt-deprived state this has negative reward, so the agent avoids it.

  - Once the agent becomes salt deprived, the planner will conclude this has positive reward, and so take that action.

I do agree that a typical model-free RL algorithm is not capable of doing this directly (it could perhaps meta-learn a policy with memory that can solve this).
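
To make the toy version above concrete, here's a minimal sketch in which the planner can query the reward function in imagined states (the reward values are illustrative):

```python
# Toy salt-water task: state is (salt_deprived, got_salt_water); actions are
# "press" (the lever) or "noop". The learned model says pressing delivers a
# spray of salt water; the planner scores imagined outcomes directly.

def model(state, action):
    salt_deprived, _ = state
    return (salt_deprived, action == "press")

def reward(state):
    salt_deprived, got_salt_water = state
    if not got_salt_water:
        return 0.0
    return 1.0 if salt_deprived else -1.0   # illustrative reward values

def plan(state, actions=("press", "noop")):
    # One-step lookahead is enough in this toy problem.
    return max(actions, key=lambda a: reward(model(state, a)))

print(plan((False, False)))   # not deprived -> "noop"
print(plan((True, False)))    # salt-deprived -> "press", with no new experience
```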

Comment by AdamGleave on The ground of optimization · 2020-07-31T01:03:04.537Z · LW · GW

Thanks for the post, this is my favourite formalisation of optimisation so far!

One concern I haven't seen raised so far is that the definition seems very sensitive to the choice of configuration space. As an extreme example, for any given system, I can always augment the configuration space with an arbitrary number of dummy dimensions, and choose the dynamics such that these dummy dimensions always get set to all zeros after each time step. Now I can make the basin of attraction arbitrarily large, while the target configuration set remains a fixed size. This can then make any such dynamical system seem to be an arbitrarily powerful optimiser.

This could perhaps be solved by demanding the configuration space be selected according to Occam's razor, but I think the outcome still ends up being prior dependent. It'd be nice for two observers who model optimising systems in a systematically different way to always agree within some constant factor, akin to Kolmogorov complexity's invariance theorem, although this may well be impossible.

As a less facetious example, consider a computer program that repeatedly sets a variable to 0. It seems again we can make the optimising power arbitrarily large by making the variable's size arbitrarily large. But this doesn't quite map onto the intuitive notion of the "difficulty" of an optimisation problem. Perhaps including some notion of how many other optimising systems would have the same target set would resolve this.
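
A tiny illustration of that point, using log2(basin size / target size) as one crude proxy for optimizing power (the proxy is my own simplification, not a definition from the post):

```python
import math

def proxy_optimizing_power(n_bits):
    # Dynamics: "set the n-bit variable to 0" pulls every one of the 2**n
    # possible configurations into the single all-zeros target configuration.
    basin_size = 2 ** n_bits
    target_size = 1
    return math.log2(basin_size) - math.log2(target_size)   # = n_bits

for n in (8, 64, 1024):
    print(n, proxy_optimizing_power(n))   # grows without bound with variable size
```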

Comment by AdamGleave on Following human norms · 2019-02-01T21:00:13.251Z · LW · GW

I feel like there are three facets to "norms" v.s. values, which are bundled together in this post but which could in principle be decoupled. The first is representing what not to do versus what to do. This is reminiscent of the distinction between positive and negative rights, and indeed most societal norms (e.g. human rights) are negative, but not all (e.g. helping an injured person in the street is a positive right). If the goal is to prevent catastrophe, learning the 'negative' rights is probably more important, but it seems to me that most techniques developed could learn both kinds of norms.

Second, there is the aspect of norms being an incomplete representation of behaviour: they impose some constraints, but there is not a single "norm-optimal" policy (contrast with explicit reward maximization). This seems like the most salient thing from an AI standpoint, and as you point out this is an underexplored area.

Finally, there is the issue of norms being properties of groups of agents. One perspective on this is that humans are realising their values through constructing norms: e.g. if I want to drive safely, it is good to have a norm to drive on the left or right side of the road, even though I may not care which norm we establish. Learning norms directly therefore seems beneficial to neatly integrate into human society (it would be awkward if e.g. robots drive on the left and humans drive on the right). If we think the process of going from values to norms is both difficult and important for multi-agent cooperation, learning norms also lets us sidestep a potentially thorny problem.

Comment by AdamGleave on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-18T22:45:25.198Z · LW · GW

Thanks for the informative post as usual.

Full-disclosure: I'm a researcher at UC Berkeley financially supported by CHAI, one of the organisations reviewed in this post. However, this comment is just my personal opinion.

Re: location, I certainly agree that an organization does not need to be in the Bay Area to do great work, but I do think location is important. In particular, there's a significant advantage to working in or near a major AI hub. The Bay Area is one such place (Berkeley, Stanford, Google Brain, OpenAI, FAIR) but not the only one; e.g. London (DeepMind, UCL) and Montreal (MILA, Brain, et al) are also very strong.

I also want to push back a bit on the assumption that people working for AI alignment organisations will be involved with EA and rationalist communities. While it may be true in many cases, at CHAI I think it's only around 50% of staff. So whether these communities are thriving or not in a particular area doesn't seem that relevant to me for organisational location decisions.

Comment by AdamGleave on Current AI Safety Roles for Software Engineers · 2018-11-17T00:02:48.333Z · LW · GW

Description of CHAI is pretty accurate. I think it's a particularly good opportunity for people who are considering grad school as a long-term option: we're in an excellent position to help people get into top programs, and you'll also get a sense of what academic research culture is like.

We'd like to hire more than one engineer, and are currently trialling several hires. We have a mixture of work, some of which is more ML oriented and some of which is more infrastructure oriented. So we'd be willing to consider applicants with limited ML experience, but they'd need to have strengths in other areas to compensate.

If anyone is considering any of these roles and is uncertain whether they're a good fit, I'd encourage you to just apply. It doesn't take much time for you to apply or for the organisation to do an initial screening. I've spoken to several people who didn't think they were viable candidates for a particular role, and then turned out to be one of the best applicants we'd received.