How to parallelize "inherently" serial theory work?

post by NicholasKross · 2023-04-07T00:08:44.428Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    9 Brendan Long
    4 Nathan Helm-Burger

Things this question assumes, for the sake of discussion:

  • The hardest parts of AI alignment are theoretical.
  • Those parts will be critical for getting AI alignment right.
  • The biggest bottlenecks to theoretical AI alignment are "serial" work, as described in this Nate Soares post [LW · GW]. For quick reference: serial work is the kind that seems to require that "some researcher retreat to a mountain lair for a handful of years" in a row.

Examples Soares gives are "Einstein's theory of general relativity, [and] Grothendieck's simplification of algebraic geometry".

The question: How can AI alignment researchers parallelize this work?

I've asked a version of this question before [LW · GW], without realizing that this is a core part of it.

This thread is for brainstorming, collecting, and discussing techniques for taking the "inherently" serial work of deep mathematical and theoretical mastery... and making it parallelizable.

I am aware this could seem impossible, but seemingly-impossible things are sometimes worth brainstorming about, just in case, when (as is true here) we don't know it's impossible.

Answers

answer by Brendan Long · 2023-04-07T01:09:30.082Z · LW(p) · GW(p)

Some options I can think of:

  • Optimize your researcher's single-threaded performance by offloading as many unnecessary tasks as possible to other "cores" (i.e., other workers). For example, mow the researcher's lawn for them, cook for them, etc.
  • Speed up any learning parts by providing experts (i.e. if you want to know "Why is X?", find an expert on X to answer questions about it instead of needing your researcher to track down the answer from less-personalized sources). Or spin off worker threads: have assistants fetch and collate personalized explanations even if they're not experts (see the sketch below).
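To make the threading metaphor concrete, here is a minimal, purely illustrative Python sketch (my framing, not part of the original answer): the researcher is the main thread doing the serial theory work, and assistants are worker threads fetching background material in parallel. `fetch_background` is a hypothetical stand-in for an assistant's lookup work.

```python
from concurrent.futures import ThreadPoolExecutor


def fetch_background(topic: str) -> str:
    """Hypothetical stand-in for an assistant tracking down and
    summarizing background material on a topic."""
    return f"summary of {topic}"


topics = ["algebraic geometry", "decision theory", "agent foundations"]

# Spin off "worker threads" (assistants) so the main thread (the
# researcher) never blocks on lookup work.
with ThreadPoolExecutor(max_workers=3) as assistants:
    summaries = list(assistants.map(fetch_background, topics))

# The researcher stays focused on the serial, hard-to-parallelize
# thinking, consuming the collated results when ready.
for summary in summaries:
    print(summary)
```

The point of the design is only that lookup work is embarrassingly parallel even when the core thinking is not.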

It's also possible that some of this work can be sped up by putting researchers working on the problems in contact with each other. My understanding is that it's generally more effective for people to bounce their ideas off of other smart people than to work alone.

comment by NicholasKross · 2023-04-07T01:30:22.056Z · LW(p) · GW(p)

I definitely wonder about the relative effectiveness of, e.g., paying on-call tutors in specific areas of higher math and CS to help promising [LW · GW] (or established) alignment researchers.

answer by Nathan Helm-Burger · 2023-04-11T01:44:03.770Z · LW(p) · GW(p)

My guess is that forming small teams, each consisting of a skilled mathematician, a skilled programmer, a skilled ML theorist, and a skilled manager, would be a good way to make progress. Make a hundred or a thousand such groups, on the assumption that maybe 1% of them will pay off.
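As a rough sanity check on those numbers (my arithmetic, not part of the original answer): if each team independently pays off with probability $p = 0.01$, the chance that at least one of $n$ teams pays off is

$$P(\text{at least one payoff}) \;=\; 1 - (1 - p)^n \;\approx\; \begin{cases} 0.63, & n = 100 \\ 0.99996, & n = 1000. \end{cases}$$

So a hundred teams gives roughly even odds of a single payoff, while a thousand makes at least one payoff nearly certain (with about ten expected).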

comment by Raemon · 2023-04-11T02:30:17.810Z · LW(p) · GW(p)

I think this is a good idea, but it doesn't quite feel like an answer to the question (at least as I understood it): it amounts to "get a bunch of serial researchers working in parallel and hope one of them succeeds", which I think So8res articulated in AI alignment researchers don't (seem to) stack [LW · GW].

I do think small teams with a few different skillsets working together are probably a good way to go in many cases. Your comment here reminds me of Wentworth's team structure in MATS Models [LW · GW], although that team only had 3 people.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-04-11T03:02:52.496Z · LW(p) · GW(p)

Yeah, so, my experience working in academia suggests that the odds of finding two researchers with a similar frame on a novel problem and good enough social chemistry that they add to each other's productivity are somewhere between 1/200 and 1/1000, even after filtering for 'competent researchers interested in the general topic'. So I'm not at all surprised that getting about 10 new researchers working on alignment hasn't produced a match yet.
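A quick back-of-the-envelope check on why that's unsurprising (my arithmetic, not the commenter's): among $k$ researchers there are $\binom{k}{2}$ possible pairs, so with $k = 10$ new researchers and per-pair odds $p$ between 1/1000 and 1/200, the expected number of productive pairings is

$$\binom{10}{2}\, p \;=\; 45p \;\in\; \left[\tfrac{45}{1000},\ \tfrac{45}{200}\right] \;\approx\; [0.045,\ 0.23],$$

i.e. well under one expected match even at the optimistic end of the range.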

From my experience working in industry, I think that a big failing of the attempts I've seen at organizing research groups is undervaluing a good manager. Having someone who is 'people-oriented' to coach and coordinate is important for preventing burnout, and for keeping several 'research-oriented' people focused on working together on a given task instead of wandering off in different directions.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-04-11T03:07:07.108Z · LW(p) · GW(p)

Also, I'm hopeful about a separate approach: deliberately 'cyborg'-ing researchers, both by getting them proficient with the latest SoTA models and by fine-tuning SoTA models specifically to assist with research. This could help speed up individual researchers. Maybe an AI that can do the research all on its own is already too dangerous, but I don't think that holds for 'useful enough to be a good tool'.
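A minimal sketch of the 'good tool' half of this (my illustration, not the commenter's; the model name, prompts, and the OpenAI-style client are assumptions): a researcher querying a chat model as an on-demand assistant.

```python
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_assistant(question: str) -> str:
    """Send one research question to a (hypothetically research-tuned)
    assistant model and return its answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; imagine a fine-tuned research model
        messages=[
            {"role": "system",
             "content": "You are a research assistant for alignment theory."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask_assistant("Summarize the main obstacles to ontology identification."))
```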
