Introducing the AI Alignment Forum (FAQ)

post by habryka (habryka4), Ben Pace (Benito), Raemon, jimrandomh · 2018-10-29T21:07:54.494Z · LW · GW · 8 comments

Contents

  What are the five most important highlights about the AI Alignment Forum in this FAQ?
  What is the purpose of the AI Alignment Forum?
  Who is the AI Alignment Forum for?
  Why do we need another website for alignment research?
  What type of content is appropriate for this Forum?
  What are the three new sequences I've been hearing about?
  In what way is it easier for potential future Alignment researchers to get involved?
  What is the exact setup with content on LessWrong?
  How do new members get added to the Forum?
  Who is running this project?
  Can I use LaTeX?
  I have a different question.

After a few months of open beta, the AI Alignment Forum is ready to launch. It is a new website built by the team behind LessWrong 2.0, to help create a new hub for technical AI Alignment research and discussion. This is an in-progress FAQ about the new Forum.

What are the five most important highlights about the AI Alignment Forum in this FAQ?

What is the purpose of the AI Alignment Forum?

Our first priority is obviously to avert catastrophic outcomes from unaligned Artificial Intelligence. We think the best way to achieve this at the margin is to build an online hub for AI Alignment research, one that both allows the existing top researchers in the field to discuss cutting-edge ideas and approaches, and supports the onboarding of new researchers and contributors.

We think that to solve the AI Alignment problem, the field of AI Alignment research needs to be able to effectively coordinate a large number of researchers from a large number of organisations, with significantly different approaches. Two decades ago we might have invested heavily in the development of a conference or a journal, but with the advent of the internet, an online forum, with its ability to support much faster and more comprehensive forms of peer review, seemed to us like a more promising way to help the field form a good set of standards and methodologies.

Who is the AI Alignment Forum for?

There exists an interconnected community of Alignment researchers in industry, academia, and elsewhere, who have spent many years thinking carefully about a variety of approaches to alignment. Such research receives institutional support from organisations including FHI, CHAI, DeepMind, OpenAI, MIRI, Open Philanthropy, and others. The Forum membership currently consists of researchers at these organisations and their respective collaborators.

The Forum is also intended to be a way for people not connected to these institutions, either professionally or socially, to interact with and contribute to cutting-edge research. There have been many such individuals on LessWrong, and LessWrong remains the best place for such people to start contributing, receive feedback, and skill up in this domain.

There are about 50-100 members of the Forum. They are able to post and comment on the Forum, and this group will not grow quickly.

Why do we need another website for alignment research?

There are many places online that host research on the alignment problem, such as the OpenAI blog, the DeepMind Safety Research blog, the Intelligent Agent Foundations Forum, AI-Alignment.com, and of course LessWrong.com.

But none of these spaces is well set up to host discussion amongst the 50-100 people working in the field, and those that do host discussion come with unclear assumptions about what counts as common knowledge.

What type of content is appropriate for this Forum?

As a rule-of-thumb, if a thought is something you’d bring up when talking to someone at a research workshop or a colleague in your lab, it’s also a welcome comment or post here.

If you’d like a sense of what other Forum members are interested in, here’s some quick data on what high-level content forum members are interested in seeing, taken from a survey we gave to invitees to the open beta (n = 34).

The responses were on a 1-5 scale, representing “If I see 1 post per day, I want to see this type of content…”: (1) once per year, (2) once per 3-4 months, (3) once per 1-2 months, (4) once per 1-2 weeks, (5) a third of all posts that I see.

Here were the types of content asked about, and the mean response:

Related data: summing all 34 respondents’ self-predictions, they collectively predict 3.2 comments and 0.99 posts per day. We’ll report on everyone’s self-accuracy in a year ;)
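For concreteness, here is a minimal sketch of that aggregation, with made-up numbers rather than the actual survey responses:

```python
# A minimal sketch (with made-up numbers, not the survey data) of the
# aggregation described above: each respondent predicts how many comments
# and posts they will write per day, and the per-respondent rates are summed.
self_predictions = [
    {"comments_per_day": 0.2, "posts_per_day": 0.05},
    {"comments_per_day": 0.1, "posts_per_day": 0.02},
    # ... one entry per respondent (n = 34 in the actual survey)
]

total_comments = sum(p["comments_per_day"] for p in self_predictions)
total_posts = sum(p["posts_per_day"] for p in self_predictions)

print(f"Predicted activity: {total_comments:.1f} comments and "
      f"{total_posts:.2f} posts per day")
```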

What are the three new sequences I've been hearing about?

We have been coordinating with AI alignment researchers to create three new sequences of posts that we hope can serve as introductions to some of the most important core ideas in AI Alignment. The three new sequences are Embedded Agency, Iterated Amplification, and Value Learning.

Over the next few weeks, we will be releasing about one post per day from these sequences, starting with the first post in the Embedded Agency sequence.

If you are interested in learning about AI alignment, you're very welcome to ask questions and discuss the content in the comment sections. And if you are already familiar with a lot of the core ideas, then we would greatly appreciate feedback on the sequences as we publish them. We hope that these sequences can be a major part of how new people get involved in AI alignment research, and so we care a lot about their quality and clarity.

In what way is it easier for potential future Alignment researchers to get involved?

Most scientific fields have to balance the need for high-context discussion among specialists against the need for public discussion, which allows the broader dissemination of new ideas, the onboarding of new members, and the opportunity for potential researchers to prove themselves. We tried to design a system that allows newcomers to participate and learn, while giving established researchers the space to have high-level discussions with other researchers.

To do that, we integrated the new AI Alignment Forum closely with the existing LessWrong platform: all AI Alignment Forum content can be found and commented on from LessWrong, and moderators can move comments and posts from LessWrong to the AI Alignment Forum for further engagement by the researchers. For details on the exact setup, see the question on that below.

We hope that this will result in a system in which cutting-edge research and discussion can happen, while new good ideas and participants can get noticed and rewarded for their contributions.

If you’ve been interested in doing alignment research, then we think one of the best ways to do that right now is to comment on AI Alignment Forum posts on LessWrong, and check out the new content we’ll be rolling out.

What is the exact setup with content on LessWrong?

Here are the details:

- All content posted to the AI Alignment Forum is also available on LessWrong, where anyone can read and comment on it.
- Comments and posts made on LessWrong can be promoted to the AI Alignment Forum by moderators, for further engagement by the researchers.
- Forum members have a secondary karma score specific to the AI Alignment Forum; once any of a user's content has been promoted, that user can also accrue AF karma and vote on Forum content with it.
- The account databases are shared, so every LessWrong user can log in on alignmentforum.org (non-members will see "not a member" in the top-right corner).
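As a rough illustration of this setup (not the actual forum codebase; the names and fields below are hypothetical), the relationship between LessWrong content, promotion, and the two karma scores might be modelled like this:

```python
# An illustrative sketch of the crossposting/promotion setup described above.
# This is not the real forum implementation; names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    body: str
    lw_karma: int = 0                 # ordinary LessWrong karma
    af_karma: int = 0                 # secondary AI Alignment Forum karma
    on_alignment_forum: bool = False  # AF content is always visible on LW too

def promote_to_af(comment: Comment) -> None:
    """A moderator promotes a LessWrong comment so that it also appears on
    the AI Alignment Forum; its author can then start accruing AF karma."""
    comment.on_alignment_forum = True

# A non-member comments on the LessWrong copy of an AF post, and a moderator
# later moves that comment to the Forum.
c = Comment(author="lw_user", body="A question about the post...")
promote_to_af(c)
print(c.on_alignment_forum)  # True
```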

The AI Alignment Forum survey (sent to all beta invitees) received 34 submissions. One question asked whether the integration with LW would lead the respondent to contribute more or less to the AI Alignment Forum, on a scale from 0 to 6 where 3 represented ‘doesn’t matter’. The mean response was 3.7, the median was 3, and there was only one response below 3.

How do new members get added to the Forum?

There are about 50-100 members of the AI Alignment Forum, and while the number will grow, it will grow rarely and slowly.

We’re talking with the alignment researchers at CHAI, DeepMind, OpenAI, and MIRI, and will be bringing on a moderator with invite-power from each of those organisations. They will naturally have a much better sense of the field, and of the researchers in their orgs, than we the site designers do. We’ll edit this post to include them once they’re confirmed.

On alignmentforum.org, a small application form is available in the top-right corner (after you have created an account). If you’re a regular contributor on LessWrong and want to point us to some of your best work, or if you’re a full-time researcher in an adjacent field and would like to participate in the Forum’s research discussion, you’re welcome to use that form to let us know who you are and what research you have done.

Who is running this project?

The AI Alignment Forum development team consists of Oliver Habryka, Ben Pace, Raymond Arnold, and Jim Babcock. We're in conversation with alignment researchers from DeepMind, OpenAI, MIRI and CHAI to confirm moderators from those organisations.

We would like to thank BERI, EA Grants, Nick Beckstead, Matt Wage, and Eric Rogstad for the support that led to this Forum being built.

Can I use LaTeX?

Yes! You can use LaTeX in posts and comments with Cmd+4 / Ctrl+4.

Also, if you go into your user settings and switch to the markdown editor, you can just copy-paste LaTeX into a post/comment and it will render when you submit with no further work.
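For example, pasting a snippet like the following into the markdown editor should render as a displayed equation when you submit (the formula itself is just an arbitrary illustration):

```latex
% An arbitrary example formula; any valid LaTeX math should render the same way.
$$ \mathbb{E}[U \mid \pi] = \sum_{s} P(s \mid \pi)\, U(s) $$
```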

(Talk to us via Intercom if you run into any problems.)

I have a different question.

Use the comment section below. Alternatively, use Intercom (bottom-right corner).

8 comments


comment by Said Achmiz (SaidAchmiz) · 2018-10-29T21:15:35.710Z · LW(p) · GW(p)

For users of GreaterWrong: note that GW has an Alignment Forum view (which you can also access by clicking the “AF” icon next to any Alignment Forum post).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-11-17T03:19:05.834Z · LW(p) · GW(p)

Said, can you make it so that if I make a comment on an AF post through GW or respond to an AF comment, it automatically gets posted to AF? (Or let me know if there's a setting to control this.) Right now I have to remember to go to that comment in LW and then manually move it to AF.

Replies from: clone of saturn, SaidAchmiz
comment by clone of saturn · 2019-02-16T06:30:02.404Z · LW(p) · GW(p)

This should work now, sorry about the delay.

comment by Said Achmiz (SaidAchmiz) · 2018-11-17T06:43:55.686Z · LW(p) · GW(p)

There is no setting to do this currently, but we’re working on it!

comment by Pattern · 2018-10-30T17:16:17.998Z · LW(p) · GW(p)

“and a secondary karma score specific to AI Alignment Forum members.”

So do people have AF karma iff they have an account on Alignment Forum?

Replies from: habryka4
comment by habryka (habryka4) · 2018-10-30T18:23:57.698Z · LW(p) · GW(p)

Assuming that you mean "are full members on the alignment forum", the answer is no. As soon as any of your content ever gets promoted to the Alignment Forum, you can start accruing AF karma, and can start voting on alignment forum content with your AF karma (i.e. your votes will change the vote-totals).

(the account databases are shared, so every LW user can log in on alignment forum, but it will say "not a member" in the top right corner)

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-10-31T11:49:06.596Z · LW(p) · GW(p)

(the account databases are shared, so every LW user can log in on alignment forum, but it will say "not a member" in the top right corner)

I’m having some issues trying to log in from a GitHub-linked account. It redirects me to LW with an empty page and does nothing.

Replies from: habryka4
comment by habryka (habryka4) · 2018-10-31T17:02:27.769Z · LW(p) · GW(p)

Ah, sorry! I think I didn't test the OAuth login on alignmentforum.org. We probably have a reference to the LessWrong.com URL left somewhere in that code, so that's where it forwards you. Will fix it ASAP.