Posts

Scaling Laws for Reward Model Overoptimization 2022-10-20T00:20:06.920Z
Frequent arguments about alignment 2021-06-23T00:46:38.568Z

Comments

Comment by John Schulman (john-schulman) on Do anthropic considerations undercut the evolution anchor from the Bio Anchors report? · 2022-10-01T22:19:35.044Z · LW · GW

I think that doing N independent parallel computations and selecting one of them is way less useful than doing an N-times-longer serial computation. This kind of selection only helps you guess something that is impossible to deduce in any other way. So if anthropics is tacitly selecting the Earth out of N other worlds, that doesn't contribute a factor of N to the total computation; it's a much smaller factor.
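
(A rough quantification, added here as a gloss rather than part of the original comment: selecting one of N independent parallel runs supplies at most log2(N) bits of optimization, whereas an N-times-longer serial computation supplies the full factor of N of compute.)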

EDIT: intended to write a comment rather than an answer.

Comment by John Schulman (john-schulman) on A Test for Language Model Consciousness · 2022-08-25T20:50:15.436Z · LW · GW

I think this test can be performed now or soon, but I'm not sure I'd update much from it. Current LMs are already pretty good at answering questions about themselves when prompted with a small amount of relevant information ("You are a transformer language model trained by AICo with data up to 2022/04"). We could also bake in this information through fine-tuning. They won't be able to tell you how many layers they have without being told, but we humans can't determine our brain architecture through introspection either.

I think the answer to "are you phenomenally conscious" will be sensitive to small differences in the training data involving similar conversations. Dialog-prompted models probably fall back on literary depictions of AI for self-oriented questions they don't know how to answer, so the answer might depend on which sci-fi AI the model is role-playing. (It's harder to say what determines the OOD behavior for models trained with more sophisticated methods like RLHF.)

Comment by John Schulman (john-schulman) on AGI Ruin: A List of Lethalities · 2022-06-07T01:04:42.290Z · LW · GW

Re: smooth vs. bumpy capabilities, I agree that capabilities sometimes emerge abruptly and unexpectedly. Still, iterative deployment with gradually increasing stakes is much safer than deploying a model to do something totally unprecedented and high-stakes. There are multiple ways to make deployment more conservative and gradual. (E.g., incrementally increase the amount of work the AI is allowed to do without close supervision, or incrementally increase the allowed KL divergence between the new policy and a known-to-be-safe policy; a sketch of the latter follows below.)
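
Here's what that KL-budget check might look like in code -- a minimal sketch under assumed interfaces (the function names, the nats budget, and where the log-probs come from are made up for illustration):

    import numpy as np

    def estimate_kl(new_logprobs: np.ndarray, ref_logprobs: np.ndarray) -> float:
        """Monte Carlo estimate of KL(new || ref), using per-sample log-probs of
        actions drawn from the new policy, evaluated under both policies."""
        return float(np.mean(new_logprobs - ref_logprobs))

    def within_kl_budget(new_logprobs, ref_logprobs, budget_nats: float) -> bool:
        # Gate the next, more autonomous deployment step on the new policy
        # staying within a KL budget of the known-to-be-safe reference policy.
        return estimate_kl(new_logprobs, ref_logprobs) <= budget_nats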

Re: ontological collapse, there are definitely some tricky issues here, but the problem might not be so bad with the current paradigm, where you start with a pretrained model (which doesn't really have goals and isn't good at long-horizon control), and fine-tune it (which makes it better at goal-directed behavior). In this case, most of the concepts are learned during the pretraining phase, not the fine-tuning phase where it learns goal-directed behavior.

Comment by John Schulman (john-schulman) on AGI Ruin: A List of Lethalities · 2022-06-07T00:51:02.102Z · LW · GW

"To do what, exactly, in this nice iterated fashion, before Facebook AI Research destroys the world six months later? What is the weak pivotal act that you can perform so safely?"

Do alignment & safety research, set up regulatory bodies and monitoring systems.

"When the rater is flawed, cranking up the power to NP levels blows up the P part of the system."

Not sure exactly what this means. I'm claiming that you can make raters less flawed, for example by decomposing the rating task and providing model-generated critiques that help them rate more accurately. Also, as models get more sample-efficient, you can rely more on highly skilled and vetted raters.
 

Comment by John Schulman (john-schulman) on AGI Ruin: A List of Lethalities · 2022-06-06T16:39:48.049Z · LW · GW

Found this to be an interesting list of challenges, but I disagree with a few points. (Not trying to be comprehensive here, just a few thoughts after the first read-through.)

  • Several of the points here are premised on needing to do a pivotal act that is way out of distribution from anything the agent has been trained on. But it's much safer to deploy AI iteratively, increasing the stakes, time horizons, and autonomy a little bit each time. With this iterative approach to deployment, you only need to generalize a little bit out of distribution. Further, you can use Agent N to help you closely supervise Agent N+1 before giving it any power.
  • One claim is that "capabilities generalize further than alignment once capabilities start to generalize far." The argument is that an agent's world model and tactics will be automatically fixed by reasoning and data, but its inner objective won't be changed by these things. I agree with the preceding sentence, but I would draw a different (and more optimistic) conclusion from it: that it might be possible to establish an agent's inner objective while training on easy problems, when the agent isn't very capable, such that this objective remains stable as the agent becomes more powerful.
    Also, there's empirical evidence that alignment generalizes surprisingly well: several thousand instruction-following examples radically improve the aligned behavior on a wide distribution of language tasks (InstructGPT paper), and a prompt with about 20 conversations gives much better behavior on a wide variety of conversational inputs (HHH paper). Making a contemporary language model well-behaved seems to be much easier than teaching it a new cognitive skill.
  • "Human raters make systematic errors - regular, compactly describable, predictable errors..." This is indeed one of the big problems of outer alignment, but there's lots of ongoing research and promising ideas for fixing it, namely using models to help amplify and improve the human feedback signal. Because P != NP, it's easier to verify proofs than to write them. Obviously alignment isn't about writing proofs, but the general principle does apply. You can reduce "behaving well" to "answering questions truthfully" by asking questions like "did the agent follow the instructions in this episode?" and using those answers to define the reward function (see the sketch below). These questions are not formulated in a formal language where verification is easy, but there's reason to believe that verification is also easier than proof-generation for informal arguments.
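
As referenced above, here's a minimal sketch of defining the reward from such questions. This is purely illustrative: the verifier interface (prob_yes) is a stand-in, not a real API.

    def episode_reward(transcript: str, verifier) -> float:
        """Reward for an episode, defined by asking a question-answering model a
        yes/no question about the transcript rather than rating it directly."""
        question = (
            "Here is a transcript of the episode:\n"
            f"{transcript}\n"
            "Did the agent follow the instructions in this episode?"
        )
        # verifier.prob_yes is assumed to return the probability of a "yes"
        # answer from any reasonably calibrated question-answering model.
        return verifier.prob_yes(question)
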
Comment by John Schulman (john-schulman) on Reshaping the AI Industry · 2022-05-30T00:39:24.130Z · LW · GW

IMO prosaic alignment techniques (say, around improving supervision quality through RRM- and debate-type methods) are highly underrated by the ML research community, even if you ignore x-risk and just optimize for near-term usefulness and intellectual interestingness. I think this is due to a combination of: (1) they haven't been marketed well to the ML community, (2) a lack of benchmarks and datasets, (3) the need to use human subjects in experiments, and (4) they take a decent amount of compute, which was, perhaps until recently, out of reach.

Comment by John Schulman (john-schulman) on Is AI Progress Impossible To Predict? · 2022-05-15T19:40:44.943Z · LW · GW

Interesting analysis. Have you tried doing an analysis on quantities other than % improvement? A 10% improvement from low accuracy is different from a 10% improvement at high accuracy. So for example, you could try doing a linear regression from small_to_medium_improvement, medium_accuracy -> large_accuracy and look at the variance explained.

Edit: I tried linear regression on the Chinchilla MMLU data, predicting the large model's accuracy from the 3 smaller models' accuracies, and only got 8% of variance explained, vs. 7% of variance explained by looking only at the second-largest model's accuracy. So that's consistent with the OP's claim of unpredictability.

Edit 2: MMLU performance for the smaller models is at about chance level, so it's not surprising that we can't predict much from it. (The accuracies we're looking at for these models are noise.)
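
For concreteness, that kind of regression can be set up as below -- a sketch with placeholder arrays, not the actual Chinchilla per-task accuracies or the exact code used:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Placeholder data: one row per MMLU task, one column per smaller model.
    # Substitute the actual per-task accuracies from the Chinchilla paper.
    small_model_accs = np.random.rand(57, 3)   # 3 smaller models
    large_model_accs = np.random.rand(57)      # largest model (prediction target)

    reg = LinearRegression().fit(small_model_accs, large_model_accs)
    print("variance explained (R^2):", reg.score(small_model_accs, large_model_accs))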

Comment by John Schulman (john-schulman) on Ideal governance (for companies, countries and more) · 2022-05-14T18:36:04.351Z · LW · GW

This is from his memoir The Singapore Story, from right after he finished studying in the UK. (Don't have a precise reference, just a text file with some notes.)

Comment by John Schulman (john-schulman) on Intuitions about solving hard problems · 2022-04-26T03:14:40.974Z · LW · GW

"Weight-sharing makes deception much harder."

Could you explain or provide a reference for this?

Comment by John Schulman (john-schulman) on Ideal governance (for companies, countries and more) · 2022-04-18T04:59:16.118Z · LW · GW

Lee Kuan Yew wrote about how he went looking for a governance system for his party, the PAP (which now rules Singapore), after the party was nearly captured by the communists in the '50s. He looked to the Catholic Church as an inspiring example of an institution that had survived for a long time, and he eventually settled on a system based on the Church's process for electing cardinals and the Pope.

Comment by John Schulman (john-schulman) on Introducing the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS) Fellowship · 2021-12-19T00:13:51.635Z · LW · GW

I'm especially interested in the analogy between AI alignment and democracy. (I guess this goes under "Social Structures and Institutions".) Democracy is supposed to align a superhuman entity with the will of the people, but there are a lot of failures, closely analogous to well-known AI alignment issues: 

  • politicians optimize for the approval of low-information voters, rather than truly optimizing for the people's wellbeing (deceptive alignment)
  • politicians, PACs, parties, and permanent bureaucrats are agents with their own goals that don't align with those of the populace (mesa-optimizers)

I think it's more likely that insights will transfer from the field of AI alignment to the field of government design than vice versa. Easier to do experiments on the AI side, and clearer thinkers.

Comment by John Schulman (john-schulman) on EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised · 2021-11-19T08:59:22.720Z · LW · GW

Would you say Learning to Summarize is an example of this? https://arxiv.org/abs/2009.01325

It's model-based RL because you're optimizing against the model of the human (i.e., the reward model). And there are some results at the end on test-time search.

Or do you have something else in mind?
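
For concreteness, the "optimizing against the reward model" part at test time can be as simple as best-of-n sampling. This is a generic sketch with placeholder policy/reward-model interfaces, not the paper's code:

    def best_of_n(prompt: str, policy, reward_model, n: int = 16) -> str:
        """Test-time search against a learned reward model: sample n candidates
        from the policy and return the one the reward model scores highest."""
        candidates = [policy.sample(prompt) for _ in range(n)]
        # reward_model.score is a stand-in for any scalar model of the human rater.
        return max(candidates, key=lambda c: reward_model.score(prompt, c))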

Comment by John Schulman (john-schulman) on EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised · 2021-11-14T18:02:42.491Z · LW · GW

Thanks, this is very insightful. BTW, I think your paper is excellent!

Comment by John Schulman (john-schulman) on EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised · 2021-11-12T10:06:04.265Z · LW · GW

I'm still not sure how to reconcile your results with the fact that the participants in the procgen contest ended up winning with modifications of our PPO/PPG baselines, rather than Q-learning and other value-based algorithms, whereas your paper suggests that Q-learning performs much better. The contest used 8M timesteps + 200 levels. I assume that your "QL" baseline is pretty similar to widespread DQN implementations.

https://arxiv.org/pdf/2103.15332.pdf

https://www.aicrowd.com/challenges/neurips-2020-procgen-competition/leaderboards?challenge_leaderboard_extra_id=470&challenge_round_id=662

Are there implementation-level changes that dramatically improve the performance of your QL implementation?

(I'm currently on vacation and read your paper only briefly while traveling, so I may very well have missed something.)

Comment by John Schulman (john-schulman) on EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised · 2021-11-12T09:58:31.993Z · LW · GW

There's no PPO/PPG curve there -- I'd be curious to see that comparison. (Though I agree that QL/MuZero will probably be more sample-efficient.)

Comment by John Schulman (john-schulman) on EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised · 2021-11-07T10:56:15.404Z · LW · GW

Performance is mostly limited here by the fact that there are 500 levels for each game (i.e., level overfitting is the problem), so it's not that meaningful to look at sample efficiency w.r.t. environment interactions. The results would look a lot different on the full distribution of levels. I agree with your statement directionally, though.

Comment by John Schulman (john-schulman) on Frequent arguments about alignment · 2021-06-25T05:21:22.247Z · LW · GW

Agree with what you've written here -- I think you put it very well.

Comment by John Schulman (john-schulman) on Frequent arguments about alignment · 2021-06-25T05:17:23.240Z · LW · GW

In my experience, you need separate teams doing safety research because specialization is useful -- it's easiest to make progress when both individuals and teams specialize a bit and develop taste and mastery of a narrow range of topics.

Comment by John Schulman (john-schulman) on Frequent arguments about alignment · 2021-06-25T04:36:49.511Z · LW · GW

Yeah, that's also a good point, though I don't want to read too much into it, since it might be a historical accident.

Comment by John Schulman (john-schulman) on Frequent arguments about alignment · 2021-06-23T18:59:02.331Z · LW · GW

Yup, added a sentence about it.

Comment by John Schulman (john-schulman) on "Decision Transformer" (Tool AIs are secret Agent AIs) · 2021-06-09T15:46:36.485Z · LW · GW

Basically agree -- I think that a model trained by maximum likelihood on offline data is less goal-directed than one that's trained by an iterative process where you reinforce its own samples (aka online RL), but still somewhat goal-directed. It needs to simulate a goal-directed agent to do a good job at maximum likelihood. OTOH, it's mostly concerned with covering all possibilities, so the goal-directed reasoning isn't emphasized. But with multiple iterations, the model can improve quality (-> more goal-directedness) at the expense of coverage/diversity.

Comment by John Schulman (john-schulman) on The case for aligning narrowly superhuman models · 2021-06-08T22:19:12.699Z · LW · GW

Super clear and actionable -- my new favorite post on AF.

I also agree with it, and it's similar to what we're doing at OpenAI (largely thanks to Paul's influence).

Comment by John Schulman (john-schulman) on Teaching ML to answer questions honestly instead of predicting human answers · 2021-05-31T18:03:31.644Z · LW · GW

D'oh, re: the optimum of the objective, I now see that the solution is nontrivial. Here's my current understanding.

Intuitively, the MAP version of the objective says: find me a simple model theta1 such that there's a more complex theta2 with high likelihood under p(theta2|theta1) (which corresponds to sampling theta2 near theta1 until theta2 satisfies the head-agreement condition) and high data-likelihood p(data|theta2).
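
Writing that reading out as a formula (my paraphrase, so it may not match the post's notation exactly):

    \max_{\theta_1} \Big[ \log p(\theta_1) + \max_{\theta_2} \big( \log p(\theta_2 \mid \theta_1) + \log p(\mathrm{data} \mid \theta_2) \big) \Big]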

And this connects to the previous argument about world models and language as follows: we want theta1 to contain half a world model, and we want theta2 to contain the full world model, have high data-likelihood (for one of the heads), and have the two heads agree. Based on Step 1, the problem is still pretty underconstrained, but maybe that's resolved in Step 2.

Comment by John Schulman (john-schulman) on Teaching ML to answer questions honestly instead of predicting human answers · 2021-05-31T07:02:29.326Z · LW · GW

Isn't the Step 1 objective (the unnormalized posterior log probability of (θ₁, θ₂)) maximized at θ₁ = θ₂ = argmax(L + prior)? Also, I don't see what this objective has to do with learning a world model.

Comment by John Schulman (john-schulman) on The Case for a Journal of AI Alignment · 2021-01-09T20:08:48.120Z · LW · GW

I think this is a good idea. If you go ahead with it, here's a suggestion.

Reviewers often procrastinate for weeks or months. This is partly because doing a review takes an unbounded amount of time, especially for articles that are long or confusing. So instead of sending the reviewers a manuscript with a due date, book a 2-hour calendar event with them. The reviewers join a call or group chat, read the paper, and discuss it. They can also help clear up each other's confusions. They aim to complete the review by the end of the time window.

Comment by John Schulman (john-schulman) on Multi-dimensional rewards for AGI interpretability and control · 2021-01-04T04:38:13.209Z · LW · GW

There's a decent amount of literature on using multiple rewards, though often it's framed as learning about multiple goals. Here are some off the top of my head:

The Horde (classic): http://www.ifaamas.org/Proceedings/aamas2011/papers/A6_R70.pdf
Universal Value Function Approximators: http://proceedings.mlr.press/v37/schaul15.html
Learning to Act by Predicting the Future: https://arxiv.org/abs/1611.01779
Temporal Difference Models: https://arxiv.org/abs/1802.09081
Successor Features: https://papers.nips.cc/paper/2017/hash/350db081a661525235354dd3e19b8c05-Abstract.html
 

Also see the discussion in Appendix D of the OpenAI Five paper (https://cdn.openai.com/dota-2.pdf) about prediction heads, used mostly for interpretability/diagnostics.

Comment by John Schulman (john-schulman) on Why Neural Networks Generalise, and Why They Are (Kind of) Bayesian · 2020-12-30T23:31:02.764Z · LW · GW

The results in "Neural Networks Are Fundamentally Bayesian" are pretty cool -- it's clever how they were able to estimate the densities.

A couple thoughts on the limitations:

  • There are various priors over functions for which we can calculate the exact posterior (e.g., Gaussian processes). However, doing Bayesian inference on these priors doesn't perform as well as neural networks on most datasets. So knowing that SGD is Bayesian is only interesting if we also know that the prior is interesting. I think the ideal theoretical result would be to show that SGD on neural nets is an approximation of Solomonoff Induction (or something like SI), and that the approximation gets better as the NNs get bigger and deeper. But I have yet to see any theory that connects neural nets/SGD to something like short programs.
  • If SGD works because it's Bayesian, then making it more Bayesian should make it work better. But according to https://arxiv.org/abs/2002.02405, that's not the case: lowering the temperature, or taking the MAP (= temperature 0), generalizes better than using the full Bayesian posterior, as calculated by an expensive MCMC procedure. (The tempered posterior they study is written out below.)
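
As referenced above, the tempered posterior studied in that paper (my transcription of their setup) is

    p_T(\theta \mid D) \propto \exp\!\big(-U(\theta)/T\big), \qquad U(\theta) = -\log p(D \mid \theta) - \log p(\theta),

where T = 1 gives the ordinary Bayes posterior and T \to 0 approaches the MAP point; the surprising finding is that T < 1 ("cold posteriors") generalizes better.
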
Comment by John Schulman (john-schulman) on Debate Minus Factored Cognition · 2020-12-30T22:25:11.622Z · LW · GW

I might be missing some context here, but I didn't understand the section "No Indescribable Hellworlds Hypothesis" or what hellworlds have to do with debate.

Comment by John Schulman (john-schulman) on Debate update: Obfuscated arguments problem · 2020-12-24T01:23:14.007Z · LW · GW

OK, I guess I'm a bit unclear on the problem setup and how it involves a training phase and a deployment phase.

Comment by John Schulman (john-schulman) on Debate update: Obfuscated arguments problem · 2020-12-23T18:21:07.775Z · LW · GW

Wonderful writeup! 

I'm sure you've thought about this, but I'm curious why the following approach fails. Suppose we require the debaters to each initially write up a detailed argument in judge-understandable language and read each other's arguments. Then, during the debate, each debater is allowed to quote short passages from their opponent's writeup. Honest will be able to find either a contradiction or an unsupported statement in Dishonest's initial writeup. If Honest quotes a passage and says it's unsupported, then Dishonest has to respond with the supporting sentences.