Suggestions for net positive LLM research

post by Cole Wyeth (Amyr) · 2023-12-13T17:29:11.666Z · LW · GW

This is a question post.

Contents

  Answers
    3 porby
    2 leogao
    2 RogerDearnaley

I am starting a PhD in computer science, focusing on agent foundations so far, which is great. I intend to continue devoting at least half my time to agent foundations.

However, for several reasons, it seems to be important for me to do some applied work, particularly with LLMs:

  1. I believe I'm otherwise pretty well positioned to get an impactful job at Google DeepMind, but apparently some impressive machine learning engineering creds are currently necessary to get onto even the safety team.
  2. My PhD supervisor is pushing for me to work on LLMs, though he seems to be pretty flexible about the details.
  3. As much fun as math is, I also value developing my practical skills (in fact, half the fun of learning things is becoming more powerful in the real world).
  4. LLM experts seem likely to be in high demand right now, though I am not sure how long that will last.

Now, I've spent the last couple of years mostly studying AIXI and its foundations. I'm pretty comfortable with standard deep learning algorithms and libraries, and I have some industry experience with machine learning engineering, but I am not an expert on NLP, LLMs, or prosaic alignment. Therefore, I am looking for suggestions from the community for LLM-related research projects that satisfy as many as possible of the following criteria:

  1. Is not directly focused on improving frontier model capabilities (for ethical reasons; though my timelines seem to be longer than the average LessWronger's, I'm not able to accept the risk that I am wrong).
  2. Produces mundane utility. I find it much more fulfilling to work on things that I can see becoming useful to people, and I also want a measure of my success which is as concrete as possible. 
  3. Contributes to prosaic alignment. It would be particularly nice if the experimental/engineering work involved is likely to inform my ideas for mathematical alignment research.
  4. Builds machine learning engineering/research creds.

Any suggestions are appreciated. I may also link this question to a Manifold market in the future (probably "conditional on working full-time at DeepMind within 18 months of graduation, which areas of research did my PhD thesis include?" or something along those lines). Thanks!

Answers

answer by porby · 2023-12-13T20:45:52.295Z · LW(p) · GW(p)

I'm accumulating a to-do list of experiments much faster than my ability to complete them:

  1. Characterizing fine-tuning effects with feature dictionaries [LW(p) · GW(p)]
  2. Toy-scale automated neural network decompilation [LW(p) · GW(p)] (difficult to scale)
  3. Trying to understand evolution of internal representational features across blocks by throwing constraints at it [LW(p) · GW(p)] 
  4. Using soft prompts as a proxy measure of informational distance between models/conditions and behaviors [LW(p) · GW(p)] (see note below; a minimal sketch of the mechanism follows this list)
  5. Prompt retrodiction for interpreting fine tuning, with more difficult extension for activation matching [LW(p) · GW(p)]
  6. Miscellaneous bunch of experiments [LW(p) · GW(p)]
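
To make item 4 concrete, here is a minimal sketch of the soft-prompt mechanism in PyTorch (my own illustration, not porby's code; the class name, token count, and initialization scale are assumptions). A short matrix of learnable embeddings is prepended to a frozen model's input embeddings and is the only thing optimized; roughly, the number of virtual tokens needed to induce a behavior can then serve as a proxy for informational distance.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable embeddings standing in for a prefix of 'virtual' prompt tokens."""

    def __init__(self, n_virtual_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the same soft prompt to every sequence in the batch.
        batch_size = input_embeds.shape[0]
        prefix = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)

# Usage sketch, assuming a HuggingFace-style causal LM:
#   for p in model.parameters():
#       p.requires_grad_(False)                     # freeze the base model
#   soft = SoftPrompt(16, model.config.hidden_size)
#   opt = torch.optim.Adam(soft.parameters(), lr=1e-3)
#   embeds = soft(model.get_input_embeddings()(input_ids))
#   # labels for the prefix positions must be masked (e.g. set to -100)
#   loss = model(inputs_embeds=embeds, labels=padded_labels).loss
```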

If you wanted to take one of these and run with it or a variant, I wouldn't mind!

The unifying theme behind many of these is goal agnosticism [LW · GW]: understanding it, verifying it, maintaining it, and using it.

Note: I've already started some of these experiments, and I will very likely start others soon. If you (or anyone reading this, for that matter) see something you'd like to try, we should chat to avoid doing redundant work. I currently expect to focus on #4 for the next handful of weeks, so that one is probably at the highest risk of redundancy.

Further note: I haven't done a deep dive on all relevant literature; it could be that some of these have already been done somewhere!  (If anyone happens to know of prior art for any of these, please let me know.)

comment by Cole Wyeth (Amyr) · 2024-01-07T21:51:18.977Z · LW(p) · GW(p)

This is an extensive list; I'll return to it when I have a bit more experience with the area.

answer by leogao · 2024-01-08T10:02:42.083Z · LW(p) · GW(p)

I'm not familiar enough with agent foundations to provide very detailed object-level advice, but I think it would be hugely valuable to empirically test agent foundations ideas on real models, with the understanding that AGI doesn't necessarily have to look like LMs, but any theory of intelligence has to at least fit both LMs and AGI. As an example, we might believe that LMs don't have goals in the same sense that an AGI eventually will, but then we can ask why LMs can still seem to achieve any goals at all; perhaps through empirical investigation of LMs we can get a better understanding of the nature of goal-seeking. I think this would be much, much more valuable than generic LM alignment work.

answer by RogerDearnaley · 2023-12-14T00:16:25.477Z · LW(p) · GW(p)

If I were in your position, I would work on the ideas described in my post How to Control an LLM's Behavior [LW · GW] and the paper Pretraining Language Models with Human Preferences that inspired it.

From the paper's results, the approach is very effective, and my post discusses how to make it very controllable and flexible. It has the particular advantage that, since it's applied at pretraining time, it can't easily be fine-tuned out of an open-source model. (Admittedly, the latter might do more for your employability at Meta FAIR Paris or Mistral than at DeepMind; but then, which of those seems like the higher x-risk to solve?)
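
For readers who haven't seen the paper, here is a minimal sketch of the conditional-pretraining idea in Python (my own illustration, not code from the paper or the post above; the tag strings, score_fn, and threshold are assumptions). Each pretraining document is prefixed with a control token chosen by a scoring function, so the model learns text distributions conditional on the tag and can be steered at inference time by conditioning on the desirable tag.

```python
GOOD_TAG, BAD_TAG = "<|good|>", "<|bad|>"

def tag_document(text: str, score_fn, threshold: float = 0.5) -> str:
    """Prefix a pretraining document with a control token.

    score_fn is a stand-in for any classifier of the property being
    controlled (toxicity, PII leakage, spec compliance, ...).
    """
    tag = GOOD_TAG if score_fn(text) >= threshold else BAD_TAG
    return tag + text

# During pretraining, every corpus document is tagged this way; at sampling
# time, prompts are prefixed with GOOD_TAG so the model conditions on the
# desirable behavior. Because the conditioning is learned during pretraining,
# it is harder to strip out than behavior added by later fine-tuning.
```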

comment by Cole Wyeth (Amyr) · 2024-01-07T21:52:35.166Z · LW(p) · GW(p)

I like this idea, can I DM you about the research frontier?

comment by RogerDearnaley (roger-d-1) · 2024-01-08T09:02:37.571Z · LW(p) · GW(p)

Of course. I also wrote a second post on another possible specific application of this approach: Language Model Memorization, Copyright Law, and Conditional Pretraining Alignment [LW · GW]. 
