We're Redwood Research, we do applied alignment research, AMA

post by Nate Thomas (nate-thomas) · 2021-10-06T05:51:59.161Z · LW · GW · 2 comments

This is a link post for https://forum.effectivealtruism.org/posts/xDDggeXYgenAGSTyq/we-re-redwood-research-we-do-applied-alignment-research-ama


Redwood Research is a longtermist organization based in Berkeley, California, working on AI alignment. We're doing an AMA this week; we'll answer questions mostly on Wednesday and Thursday (October 6th and 7th). Buck Shlegeris, Bill Zito, myself, and perhaps others will be answering questions.

Here's an edited excerpt from this doc that describes our basic setup, plan, and goals.

Redwood Research is a longtermist research lab focusing on applied AI alignment. We’re led by Nate Thomas (CEO), Buck Shlegeris (CTO), and Bill Zito (COO/software engineer); our board is Nate, Paul Christiano and Holden Karnofsky. We currently have ten people on staff.

Our goal is to grow into a lab that does lots of alignment work that we think is particularly valuable and wouldn’t have happened elsewhere.

Our current approach to alignment research:

  • We’re generally focused on prosaic alignment approaches.
  • We expect to mostly produce value by doing applied alignment research. I think of applied alignment research as research that takes ideas for how to align systems, such as amplification or transparency, and then tries to figure out how to make them work in practice. I expect this kind of practical research to be a big part of making alignment succeed. See this post [AF · GW] for a bit more about how I think about the distinction between theoretical and applied alignment work.
  • We are interested in thinking about our research from an explicit perspective of wanting to align superhuman systems.
    • When choosing between projects, we’ll be thinking about questions like “to what extent is this class of techniques fundamentally limited? Is this class of techniques likely to be a useful tool to have in our toolkit when we’re trying to align highly capable systems, or is it a dead end?”
    • I expect us to be quite interested in doing research of the form “fix alignment problems in current models” because it seems generally healthy to engage with concrete problems, but we’ll want to carefully think through exactly which problems along these lines are worth working on and which techniques we want to improve by solving them.

We're hiring researchers, engineers, and an office operations manager.

You can see our website here. Other things we've written that might be interesting:

We're up for answering questions about anything people are interested in, including

We're looking forward to answering your questions!

2 comments


comment by Ben Pace (Benito) · 2021-10-06T05:55:44.362Z · LW(p) · GW(p)

Would you prefer questions here or on the EA Forum?

comment by Buck · 2021-10-06T15:04:03.608Z · LW(p) · GW(p)

I think we prefer questions on the EA Forum.