Apply to LASR Labs: a London-based technical AI safety research programme

post by Erin Robertson, charlie_griffin (cjgriffin), joehardie · 2024-04-09T17:34:06.847Z · LW · GW · 1 comment

Contents

  Programme details: 
  Who should apply?
  Topics and supervisors: 
  Timeline: 
  How is this different from other programmes? 
1 comment

Edit: Applications for this round are now closed! If you are interested in future rounds, you can express interest here.

TL;DR: apply by April 24th, 23:59 GMT+1, to join a 12-week programme and write a technical AI safety paper in a team of 4, with supervision from an experienced researcher. Work full time from the LISA offices in London, alongside AI safety organisations including Apollo Research, BlueDot Impact and Leap Labs.

Apply to be a participant here

Express interest in being a supervisor here 

London AI Safety Research (LASR) Labs (previously run as AI Safety Hub Labs [AF · GW]) is a research programme where participants will work in small teams to publish a paper and accompanying blog post contributing to AI safety. 

Teams of 4 will work with a supervisor to write an academic paper, with support and management from LASR. We expect LASR Labs to be a good fit for applicants looking to join technical AI safety teams in the next year. From the previous cohort, three alumni are working at the UK AI Safety Institute, another is working on OpenAI’s dangerous capabilities evals team, and several others are continuing collaborations with their supervisors or doing AI safety research in their PhD programmes. Labs will also be a good fit for someone hoping to publish in academia: four out of five groups in the last cohort had papers accepted to workshops (at NeurIPS) or conferences (ICLR) [AF · GW].

Participants will work full time and in person from the London Initiative for Safe AI (LISA) co-working space, a hub for researchers from organisations such as Apollo Research, Leap Labs, PIBBSS, Pivotal Research and the MATS extension programme. The office will host various guest sessions, talks, and networking events.
 

Programme details: 

The programme will run from 1st July to 20th September 2024 (12 weeks). Participants will receive travel expenses and free lunch and dinner, Monday to Friday.

In the first week, you will learn about and critically evaluate ~12 technical AI safety research projects with support from LASR. Developing “research taste” (an understanding of which projects might be promising) is difficult and often takes many years, but it is a key skill for producing useful AI safety work. The first week gives participants space to develop their research prioritisation skills and to learn about a range of agendas and their respective routes to value. At the end of the week, participants will express preferences over their favourite problems, and we will match them into teams. Last year, all participants were assigned to one of their top-rated projects.

In the remaining 11 weeks, you will write and then submit an AI safety research paper (as a preprint, workshop paper, or conference paper). 

During the programme, flexible and comprehensive support will be available, including:

All of the above is optional (except accountability). 

Who should apply?

We are looking for applicants with the following skills: 

There are no specific requirements for experience, but we anticipate successful applicants will have done some of these things:

Research shows that people from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work.

Note: this programme takes place in London. Participants without an existing right to work in the UK are responsible for obtaining their own visas; however, we can help with this process by providing letters of recommendation etc. Please get in touch if you have any visa-related questions: erin[at]lasrlabs.org
 

Topics and supervisors: 

We are in the process of narrowing down supervisors and projects for the coming cohort. 

In previous rounds, participants have worked on projects relating to: the science of deep learning, multi-agent systems and collusion, theory of alignment in RL, deception in LLMs [AF · GW], interpretability probes, and concept extrapolation [? · GW]. We’re also excited about a range of other areas, including automated interpretability, scalable oversight, capability evals and AI control.

If you are interested in supervising a project and have relevant experience in the above topics, please fill in this expression of interest form for supervisors.

Timeline: 

Application deadline: April 24th 

Mentors finalised: on or before May 24th 

Offers sent: May 31st

The application process consists of the initial form, a technical test and an interview.

How is this different from other programmes? 

There are many similar programmes happening this summer, including MATS, PIBBSS, the Pivotal Research Fellowship and ERA. We expect all of these programmes to be excellent opportunities to gain relevant skills for a technical AI safety career. LASR Labs might be an especially good option if:

1 comment


comment by joehardie · 2024-04-25T10:25:01.502Z · LW(p) · GW(p)

Applications for this round are now closed! If you are interested in future rounds, you can express interest here: https://airtable.com/appbzbkQ3OwRBaojt/shruJmwbbk07e1i7y 
