The Apprentice Experiment

post by johnswentworth · 2021-06-10T03:29:27.257Z · LW · GW · 11 comments

Contents

  Background Models
  The Plan
  Aysajan’s Intro
  Hopes

About two months ago, someone asked me what I would do with more funding. Other than the obvious (i.e. generally improve my own quality-of-life in minor ways), my main answer was: take on an apprentice. I have some models about how best to train people for this sort of work, and an apprentice would allow me to test those models while also supporting my own research. I started laying groundwork for that plan - in particular, Specializing in Problems We Don’t Understand [LW · GW] laid out my main background model.

Then, about a month ago, Aysajan [LW · GW] put up a short post titled “Can I be Your Apprentice? [LW · GW]” - essentially an open call to people on LW doing cool work. We talked, it seemed like a good fit, so the apprentice experiment kicked off ~3 weeks ago.

This post will provide more detail on models, motivation, the plan, etc., including a section for Aysajan to introduce himself.

Background Models

First background model: Specializing in Problems We Don’t Understand [LW · GW]. Problems-we-don’t-understand are similar to each other in a way which problems-we-do-understand are not. In the context of scientific research, preparadigmatic research in different fields is similar in a way which research within a paradigm is not. There are general skills and knowledge useful for finding/creating structure de novo, as opposed to working within some already-mapped structure.

Furthermore, while problems-we-don’t-understand may require some specialized knowledge, specialized knowledge of the field is never the rate-limiting step; if it were, then the problem would already be tractable to people steeped in the existing specialized knowledge of the field. If a problem is tractable within the current paradigm, then it isn’t preparadigmatic. Broad, generalizable skills/knowledge are much more important for problems-we-don’t-understand than for problems-we-do-understand.

The linked post goes into more detail on how one can train and specialize in problems-we-don’t-understand.

Second background model: Selection Has A Quality Ceiling [LW · GW]. If we want people with a lot of skill in a lot of areas, trying to hire such people directly is Hard, in a big-O sense. As the number of traits we’re filtering for increases, the number of people we have to test in order to find one with all the requisite traits increases exponentially. The big-O requirements of training are much better: as long as learning one skill doesn’t make another harder, the time required to train all of them should increase at most linearly with the number of skills.
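To make the asymptotics concrete, here is a minimal back-of-the-envelope sketch (my illustration, not spelled out in the linked post). Assume each of $k$ desired traits shows up independently in a fraction $p$ of candidates, and that training the $i$-th skill takes time $t_i$ with no interference between skills:

  $E[\text{candidates screened until one has all } k \text{ traits}] = p^{-k}$   (exponential in $k$)
  $T_{\text{train}} = \sum_{i=1}^{k} t_i \le k \cdot \max_i t_i$   (at most linear in $k$)

With $p = 1/2$ and $k = 10$, for example, selection means screening roughly a thousand candidates on average, while training means about ten skills’ worth of teaching time.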

Alas, most schools/companies today seem to mostly select, rather than train. Which makes sense - most companies don’t really need people with lots of skill in lots of areas; they just need people who will pick up the particulars of their industry quickly as-needed. But for problems-we-don’t-understand, people with lots of skill in lots of areas are exactly what we want.

Third background model: illegible skills [? · GW]. A lot of key skills/knowledge are hard to transmit by direct explanation. They’re not necessarily things which a teacher would even notice enough to consider important - just background skills or knowledge so ingrained that they become invisible. This sort of skill/knowledge is most easily transmitted by exposure: demonstration by the teacher, experimentation by the student, and feedback, ideally on a day-to-day basis. Thus the importance of an apprenticeship-like structure: high exposure and one-on-one interaction help transmit illegible skills/knowledge.

(I suspect that this also relates to Bloom’s two-sigma problem: one-on-one tutoring works about two standard deviations better than anything else in education. Regardless of whether illegible skill transmission is actually a core part of that phenomenon, an apprenticeship certainly involves enough one-on-one tutoring that I expect the two-sigma benefit to kick in.)

The Plan

Originally, I planned to put out a call for an apprentice around the end of this month/early next month. I hoped to get a few responses, filter for basic technical skills and personality compatibility, then randomly choose someone from a hopefully-not-too-short list. The intent was to avoid filtering heavily: I want to create new human capital, not merely select for existing human capital. And if the experiment works, I want to be able to do it again. Choosing someone who’s already obviously a uniquely good fit would compromise the information-value of the experiment.

Instead of that process, I’ve effectively selected on one thing: putting up a LessWrong post asking if anyone wants an apprentice. (Well, ok, I did also screen for basic technical skills and personality compatibility.) Aysajan’s resume is typical enough that I’m not too worried about selection effects there, but putting up a LessWrong post asking if anyone wants an apprentice implies a kind of chutzpah and do-what-it-takes attitude that may not be so easy to replicate. So from an experimental replicability standpoint, that’s a minor note of concern. (From a personal standpoint, I love it.)

From here, the plan is for Aysajan to spend the next few months working on the sorts of projects I worked on before focusing on alignment full time - the sorts of projects which I expect to build skills for solving problems-we-don’t-understand. These won’t be strictly or even primarily alignment-related; the goal is to build skill in solving problems-we-don’t-understand, and alignment is a pretty difficult area in which to practice.

Aysajan’s first post [LW · GW] since the apprenticeship started went up just recently. It’s a write-up of an exercise looking for various systems besides probabilistic models which satisfy the assumptions used in Cox’s Theorem to derive Bayes’ Rule.

I don’t have any particular experimental outcomes to measure. If the project goes as well as I hope, then I expect it will be quite obvious. No need to go inventing proxies which don’t actually quite capture the things I care about.

Aysajan’s Intro

(This section is in Aysajan’s voice.)

I am a business school faculty member at a Canadian university. I earned my master’s degree in statistics and my PhD in operations research in the US in 2018. I joined LessWrong not long ago and have been truly enjoying the intelligent conversations/debates going on here, whether related to general rationality, ML/AI, or simply investing. In the meantime, I find myself thinking quite frequently of Albert Einstein’s famous quote about learning: "The more I learn, the more I realize how much I don’t know." While being truly amazed by all the fascinating work LessWrong community members have been doing, I realized that I couldn’t do much myself due to my limited domain knowledge and limited hands-on experience. I am inspired and I want to contribute, especially in the fields of ML/AI research. But in reality, as an outlier in my current professional network (business academia), I am greatly struggling due to a lack of guidance. Thus I made a call for an apprenticeship. I have a strong desire to contribute to the community by conducting original research, and I believe an apprenticeship is one of the best ways to learn ML/AI skills: learn from the best and do original research.

Hopes

(This section is in John's voice, but speaks for both of us.)

Best-case scenario, this experiment provides a prototype for producing new specialists in problems-we-don’t-understand. At a personal level, we want to work with such people, and the ability to produce them would make that a lot easier. At a community level, alignment is a particularly difficult problem-we-don’t-understand (especially due to the lack of good feedback loops), and we hear that we’re now more bottlenecked on effective researchers than on funding.

But what we really want is a whole community or institute of people who specialize in problems-we-don’t-understand, making breakthroughs not only on alignment but on aging, on designing organisms from scratch, on efficient orbital delivery and terraforming, on generalized cognitive processes in organisms or organizations or brains, on fusion energy, on practical design of organizations or contract structures or memes, on cryptographically-secure biodefenses, …. We want a group of people who will set the whole damn world on fire.

11 comments


comment by Zvi · 2021-06-12T11:30:17.041Z · LW(p) · GW(p)

The first experimental results are in, and 'asking for what you want' soundly defeats the null hypothesis. Yay!

This is awesome. My one full attempt at taking on an apprentice, in a completely different field (albeit with a stronger selection filter), went quite well in the past; the person in question ended up world-class and is now running a successful start-up for which I'm a small angel.

There was also an aborted attempt to take on a rationalist at one point; alas, that did not work out, as the thing I was doing at the time didn't go well and the operation ended before they could get good - the opportunity costs of continuing were too high. It did look promising in terms of the apprentice developing good skills, and if I'd had more capital at the time I think it would have gone well.

I also hired someone, starting Monday, who will kind of be an apprentice, but they are rather uniquely high human capital in many related ways, so it doesn't count.

Matches like this seem highly valuable, and I think it would be good to provide some resources to make this easier. 

comment by Elizabeth (pktechgirl) · 2022-09-24T00:45:54.475Z · LW(p) · GW(p)

I'm very curious for an update on how this went and what you both learned.

Replies from: johnswentworth
comment by johnswentworth · 2022-09-25T16:27:21.621Z · LW(p) · GW(p)

The MATS Models [LW · GW] post contains a bunch of my models after updating on how working with Aysajan went. About half the exercises in that post are things I tested with Aysajan (prototypical examples, framing, existing evidence, all the writing stuff), and a bunch of them were designed to address specific skill-bottlenecks which I noticed while working with Aysajan (especially the sort of stuff in What Are You Tracking In Your Head? [LW · GW]).

Updates besides those:

  • We explicitly did not focus on alignment/AI; the idea was to get a practice loop going on other hard problems. I don't think that loop ever really properly got going.
    • One contributing mistake: I gave Aysajan lots of freedom in what problems to focus on; in hindsight I think I should have assigned problems, especially early on. 
  • I now think we did some things in the wrong order.
    • We covered a bunch of technical content early on (especially via framing exercises [LW(p) · GW(p)]); I now think practice with prototypical examples [LW · GW] should have come before that. I didn't realize until pretty late that the prototypical examples skill was super-important and missing, and I think having that skill dramatically increases the returns to technical study in general.
    • We spent a lot of time on problem choice early on (like e.g. Hamming questions [LW · GW]). As mentioned above, I now think I should have assigned problems early on and worked on problem choice later.
  • Having worked with both Aysajan and the MATS teams, I've updated toward both full-time attention and (small) teams mattering a lot. ~3 people working together on the same problems basically full-time in the same room results in way more focus than other setups, given the same level of attention from me.
comment by alkjash · 2021-06-11T17:01:35.522Z · LW(p) · GW(p)

This is great!

I'm interested in the educational side of this, particularly how to do one-on-one mentorship well. I've had effective mentors in the past who did anything from [blast me with charisma and then leave me to my own devices] to [put me under constant surveillance until I passed the next test, rinse, repeat]. Can you say something about your educational philosophy/methods?

Replies from: johnswentworth
comment by johnswentworth · 2021-06-11T19:44:39.025Z · LW(p) · GW(p)

There's a lot of different kinds-of-value which mentorship can provide, but I'll break it into two main classes:

  • Things which can-in-principle be provided by other channels, but can be accelerated by 1-on-1 mentorship.
  • Things for which 1-on-1 mentorship is basically the only channel.

The first class includes situations where mentorship is a direct substitute for a textbook, in the same way that a lecture is a direct substitute for a textbook. But it also includes situations where mentorship adds value, especially via feedback. A lecture or textbook only has space to warn against the most common failure-modes and explain "how to steer", and learning to recognize failure-modes or steer "in the wild" takes practice. Similar principles apply to things which must be learned-by-doing: many mistakes will be made, many wrong turns, and without a guide, it may take a lot of time and effort to figure out the mistakes and which turns to take. A mentor can spot failure-modes as they come up, point them out (which potentially helps build recognition), point out the right direction when needed, and generally save a lot of time/effort which would otherwise be spent being stuck. A mentor still isn't strictly necessary in these situations - one can still gain the relevant skills from a textbook or a project - but it may take longer that way.

For these use-cases, there's a delicate balance. On the one hand, the mentee needs to explore and learn to recognize failure-cases and steer on their own, not become reliant on the mentor's guidance. On the other hand, the mentor does need to make sure the mentee doesn't spend too much time stuck. The Socratic method is often useful here, as are the techniques of research conversation support role [LW · GW]. Also, once a mistake has been made and then pointed out, or once the mentor has provided some steering, it's usually worth explicitly explaining the more general pattern and how this instance fits it. (This also includes things like pointing out a different frame and then explaining how this frame works more generally - that's a more meta kind of "steering".)

The second class is mostly illegible knowledge/skills - things which a mentor wouldn't explicitly notice or doesn't know how to explain. For these, demonstration is the main channel. Feedback can be provided to some degree by demonstrating, then having the mentee try, or vice-versa. In general, it won't be obvious exactly what the mentor is doing differently than the mentee, or how to explain what the mentor is doing differently, but the mentee will hopefully pick it up anyway, at least enough to mimic it.

comment by TurnTrout · 2021-06-10T20:43:53.564Z · LW(p) · GW(p)

I'm both excited about this particular experiment and about the prospect that Aysajan’s post eventually increases the supply of promising researchers, because the criteria for good apprentices are different than the selection-driven criteria for good junior researchers (on a given technical problem).

comment by adamShimi · 2021-06-12T11:16:32.837Z · LW(p) · GW(p)

Excited about this!

From a personal standpoint, I'm curious whether Aysajan learns some interesting deconfusion skills through this apprenticeship, as this is what I'm most interested in, and because I expect deconfusion to be a fundamental subskill in solving problems we don't understand.

On a community level, I really want as many people as possible tackling these problems, so I hope this results in better ways of training for this task.

comment by Felix Karg (felix-karg) · 2021-06-11T11:18:46.579Z · LW(p) · GW(p)

From here, the plan is for Aysajan to spend the next few months working on the sorts of projects I worked on before focusing on alignment full time - the sorts of projects which I expect to build skills for solving problems-we-don’t-understand.

What kind of projects/problems are you thinking about? This might become a very valuable community resource, even for those of us without your guidance.

Replies from: johnswentworth, ChristianKl
comment by johnswentworth · 2021-06-11T16:23:50.834Z · LW(p) · GW(p)

Some of this I've written about before:

Those definitely don't cover all of it, though.

So far, other than those, we've mostly been kicking around smaller problems. For instance, the last couple days we were talking about general approaches for gearsy modelling in the context of a research problem Aysajan's been working on (specifically, modelling a change in India's farm subsidy policy). We also spent a few days on writing exercises - approximately everyone benefits from more practice in that department.

We've also done a few exercises to come up with Hard Problems to focus on. ("What sci-fi technologies or magic powers would you like to have?" was a particularly good one, and the lists of unsolved problems are also intended to generate ideas.) Once Aysajan has settled on ~10-20 Hard Problems to focus on (initially), those will drive the projects. You should see posts on whatever he's working on fairly frequently.

Replies from: felix-karg
comment by Felix Karg (felix-karg) · 2021-06-12T16:52:07.852Z · LW(p) · GW(p)

Awesome! Thanks for keeping us up-to-date!

comment by ChristianKl · 2021-06-11T16:48:55.065Z · LW(p) · GW(p)

johnswentworth has published a lot of posts about individual projects, for example human longevity. Just look into his post history.