Refine: An Incubator for Conceptual Alignment Research Bets

post by adamShimi · 2022-04-15T08:57:35.502Z · LW · GW · 13 comments

Contents

  Why?
  Who?
  Some concrete details
  How can I apply?

I’m opening an incubator called Refine for conceptual alignment [AF · GW] research in London, which will be hosted by Conjecture [LW · GW]. The program is a three-month fully-paid fellowship for helping aspiring independent researchers find, formulate, and get funding for new conceptual alignment research bets, ideas that are promising enough to try out for a few months to see if they have more potential.

If this sounds like something you’d be interested in, you can apply here!

Why?

I see a gaping hole in the alignment training ecosystem: there are no programs dedicated specifically to creating new independent conceptual researchers and helping them build original research agendas.

The programs that do exist (AI Safety Camp, SERI MATS) tend to focus on an apprenticeship (or “accelerated PhD”) model in which participants work under researchers on already-established research directions. And while there are avenues for independent alignment researchers to get started on their own [AF · GW], that path is fraught with risks that slow down progress considerably.

So I feel the need for a program geared specifically towards conceptual alignment researchers who are interested in doing their own research and making their own research bets.

Who?

This program is for self-motivated and curious people who want to become independent conceptual alignment researchers and expand the portfolio of alignment bets and research ideas available. 

When I look at great conceptual researchers like John Wentworth, Paul Christiano, Evan Hubinger, Steve Byrnes, Vanessa Kosoy, and others, as well as at the good (famous and not) researchers I know from my PhD, they all have the same thing in common: they ask a question and keep looking for the answer. They tolerate confusion, not in the sense that they accept it, but in that they are able to work with it and not hide away behind premature formalization. They don’t give up on the problem; they search for different angles and approaches until it yields. Paul Graham calls this being relentlessly resourceful.

(Relentlessly Resourceful, Paul Graham, 2009)

I was writing a talk for investors, and I had to explain what to look for in founders. What would someone who was the opposite of hapless be like? They'd be relentlessly resourceful. Not merely relentless. That's not enough to make things go your way except in a few mostly uninteresting domains. In any interesting domain, the difficulties will be novel. Which means you can't simply plow through them, because you don't know initially how hard they are; you don't know whether you're about to plow through a block of foam or granite. So you have to be resourceful. You have to keep trying new things.

This is one of the main traits I’m looking for in an applicant — someone who will lead a new research agenda and morph it proactively, as needed. 

Another point that matters is curiosity about topics and ideas beyond the ones traditionally discussed in alignment. As I wrote in a recent post [AF · GW] and plan to discuss more in an upcoming sequence, I think we need to be more pluralist in our approach to alignment, and explore far more directions, from novel ideas to old approaches that may have been discarded too soon. And new ideas often come from unexpected places.

As one example, here is what Jesse Schell writes about his experience speaking to a professional juggler who performed tricks no one else could do:

(The Art of Game Design, Jesse Schell, 2008)

“The secret is: don’t look to other jugglers for inspiration—look everywhere else.” He proceeded to do a beautiful looping pattern, where his arms kind of spiraled, and he turned occasional pirouettes. “I learned that one watching a ballet in New York. And this one...” he did a move that involved the balls popping up and down as his hands fluttered delicately back and forth. “I learned that from a flock of geese I saw take off from a lake up in Maine. And this,” he did a weird mechanical looking movement where the balls almost appeared to move at right angles. “I learned that from a paper punch machine on Long Island.” He laughed a little and stopped juggling for a minute. “People try to copy these moves, but they can’t. They always try... yeah, look at that fella, over there!” He pointed to a juggler with a long ponytail across the gym who was doing the “ballet” move, but it just looked dumb. Something was missing, but I couldn’t say what.

“See, these guys can copy my moves, but they can’t copy my inspiration.”

As for previous experience with alignment research, it can be both a blessing and a curse. While familiarity with alignment concepts can help bootstrap the learning and idea-generation process, it also risks clogging the babble [? · GW] process by constraining “what makes sense”. For those who would find it helpful, the program includes some initial teaching on core alignment ideas (according to me) and the mental moves necessary for good alignment research.

Some concrete details

We plan to invite the first cohort of 4-5 fellows from July/August through September/October (wiggle room depending on some ops details), though exact dates will be determined by their availability. We anticipate that other cohorts will follow, so if you miss the first round but are still interested, please apply. 

This is a full-time position in London, where fellows will work out of Conjecture’s offices. The program is structured in two phases:

During the first month of the program, participants will spend their time discussing abstract models of alignment, what the problem is about, and the different research approaches that have been pursued. The focus will be on understanding the assumptions and constraints behind the different takes and research programs, to get a high-level map of the field.

The next ~two months of the program will focus on helping fellows babble new research bets on alignment, refine them, test them, and either throw them away or change them. By the end, the goal is for each fellow to home in on a research bet that is promising enough to warrant funding and could be investigated further over the following 6 months.

It’s worth noting that while the incubator is housed by Conjecture, the company imposes no constraints on fellows. Fellows will not have to work on Conjecture’s research agendas, nor will they be obligated to collaborate after the program is over. Similarly, I’m not looking for people to work on my own research ideas, but for exciting new research bets I wouldn’t have thought of.

How can I apply?

We will review applications on a rolling basis, with a typical delay of one week before an initial response and a month before a final decision (with a work task in between). The application is open now!

13 comments

Comments sorted by top scores.

comment by lc · 2022-04-15T09:12:36.161Z · LW(p) · GW(p)

Guarding against the habits [LW · GW] that hide positive feedback: Thanks. I mean it.

Replies from: adamShimi
comment by adamShimi · 2022-04-19T12:08:35.402Z · LW(p) · GW(p)

Thanks for making your positive feedback visible. ;)

comment by Charlie Steiner · 2022-04-15T12:25:21.376Z · LW(p) · GW(p)

Great news! I have to change the post I was drafting about unfilled niches :)

Replies from: adamShimi
comment by adamShimi · 2022-04-19T12:07:52.011Z · LW(p) · GW(p)

Sorry to make you work more, but happy to fill a much needed niche. ^^

comment by Joe Collman (Joe_Collman) · 2022-04-16T04:44:14.920Z · LW(p) · GW(p)

Wholeheartedly agree, and I think it's great that you're doing this.
I'll be very interested in what you learn along the way w.r.t. more/less effective processes.

(Bonus points for referencing the art of game design - one of my favourite books.)

Replies from: adamShimi
comment by adamShimi · 2022-04-19T12:07:12.732Z · LW(p) · GW(p)

Thanks! Yes, this is very much an experiment, and even if it fails, I expect it to be a productive mistake [AF · GW] we can learn from. ;)

comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-04-16T04:53:53.578Z · LW(p) · GW(p)

I'm really excited for the outcomes you describe: more relentlessly resourceful independent researchers exploring a wider range of options. I do feel a bit concerned that your search for good applicants is up against a challenge. I think that both the intelligence necessary to produce good results and the agentive personality needed to become relentlessly resourceful with training are rare and largely determined early in life. And I think that, given this, a lot of such people will already be quite absorbed in profitable paths by the time they are college graduates. So, it makes me wonder if you should look for young people who are already being remarkably successful in life, and try to recruit them in particular...

comment by Ulisse Mini (ulisse-mini) · 2022-11-07T19:02:02.513Z · LW(p) · GW(p)

Random thought I had about this: IIRC the science of skill transfer between fields shows it doesn't really happen except in people with a high degree of mastery. (Cite: Ultralearning or Peak mentions this I think?)

Might be something to look into for Refine: a master of X could be significantly better at transferring insights from X to Y.

comment by hath · 2022-04-15T23:24:22.306Z · LW(p) · GW(p)

Are you accepting minors for this program?

Replies from: adamShimi
comment by adamShimi · 2022-04-19T12:10:21.336Z · LW(p) · GW(p)

I think this is something we will have to address on a case-by-case basis. By default I would say probably no, but for really brilliant minors, there might be an option.

Not promising anything, but if you know anyone in this situation, they should apply; the application is not long at all.

comment by AtillaYasar (atillayasar) · 2022-12-10T00:30:57.571Z · LW(p) · GW(p)

How can I apply?

Unfortunately, applications are closed at the moment.

comment by AtillaYasar (atillayasar) · 2022-12-10T00:29:13.428Z · LW(p) · GW(p)

I’m opening an incubator called Refine for conceptual alignment research in London, which will be hosted by Conjecture. The program is a three-month fully-paid fellowship for helping aspiring independent researchers find, formulate, and get funding for new conceptual alignment research bets, ideas that are promising enough to try out for a few months to see if they have more potential.

(note: applications are currently closed)

Replies from: atillayasar
comment by AtillaYasar (atillayasar) · 2022-12-10T00:33:28.499Z · LW(p) · GW(p)

The form at this link <https://docs.google.com/forms/d/e/1FAIpQLSdU5IXFCUlVfwACGKAmoO2DAbh24IQuaRIgd9vgd1X8x5f3EQ/closedform> says: "The form Refine Incubator Application is no longer accepting responses. Try contacting the owner of the form if you think this is a mistake."
So I suggested changing the parts that say to sign up into a note that applications are no longer being accepted.