Apply to LASR Labs: a London-based technical AI safety research programme
post by Erin Robertson, charlie_griffin (cjgriffin), joehardie · 2024-04-09T17:34:06.847Z
Edit: Applications for this round are now closed! If you are interested in future rounds, you can express interest here.
TL;DR: apply by April 24th, 23:59 GMT+1, to join a 12-week programme and write a technical AI safety paper in a team of 4, with supervision from an experienced researcher. Work full time from the LISA offices in London, alongside AI safety organisations including Apollo Research, Bluedot Impact and Leap Labs.
Apply to be a participant here
Express interest in being a supervisor here
London AI Safety Research (LASR) Labs (previously run as AI Safety Hub Labs) is a research programme where participants will work in small teams to publish a paper and accompanying blog post contributing to AI safety.
Teams of 4 will work with a supervisor to write an academic paper, with support and management from LASR. We expect LASR Labs to be a good fit for applicants looking to join technical AI safety teams in the next year. From the previous cohort, three alumni are working at the UK AI Safety Institute, another is working on OpenAI’s dangerous capabilities evals team, and several others are continuing collaborations with their supervisors or doing AI safety research in their PhD programmes. LASR Labs will also be a good fit for someone hoping to publish in academia; four out of five groups in the last cohort had papers accepted to workshops (at NeurIPS) or conferences (ICLR).
Participants will work full time and in person from the London Initiative for Safe AI (LISA) co-working space, a hub for researchers from organisations such as Apollo Research, Leap Labs, PIBBSS, Pivotal Research, and the MATS extension programme. The office will host various guest sessions, talks, and networking events.
Programme details:
The programme will run from 1st July to 20th September 2024 (12 weeks). Participants will receive travel expenses and free lunch and dinner, Monday to Friday.
In the first week, you will learn about and critically evaluate ~12 technical AI safety research projects with support from LASR. Developing “research taste” (an understanding of which projects might be promising) is difficult and often takes many years, but it is a key skill for producing useful AI safety work. The first week aims to give participants space to develop their research prioritisation skills and learn about various agendas and their respective routes to value. At the end of the week, participants will express preferences about their favourite problems, and we will match them into teams. Last year, all participants were assigned to one of their top-rated projects.
In the remaining 11 weeks, you will write and then submit an AI safety research paper (as a preprint, workshop paper, or conference paper).
During the programme, flexible and comprehensive support will be available, including:
- ML upskilling sessions and pair programming
- Reading groups
- Talks from leading AI safety researchers
- Career coaching
- Accountability and productivity assistance
All of the above is optional (except accountability).
Who should apply?
We are looking for applicants with the following skills:
- Technical ability: Strong quantitative skills and at least some machine learning experience.
- Practicality: Willingness to experiment and iterate, and dive into execution under uncertainty.
- Communication skills: Able to clearly explain the results and implications of experiments, with reasoning transparency.
There are no specific requirements for experience, but we anticipate successful applicants will have done some of these things:
- Conducted research in a domain relevant to the topics below, or at the intersection of your domain and frontier AI systems.
- Gained experience working with LLMs.
- Spent a few years in industry, with some work in ML.
- Completed, or be in the process of completing, a PhD in a relevant field such as Computer Science, Physics, or Maths.
Research shows that people from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work.
Note: this programme takes place in London. Participants without an existing right to work in the UK are responsible for arranging their own visas; however, we can help with this process by providing letters of recommendation, etc. Please get in touch if you have any visa-related questions: erin[at]lasrlabs.org
Topics and supervisors:
We are in the process of narrowing down supervisors and projects for the coming cohort.
In previous rounds, participants have worked on projects relating to: the science of deep learning, multi-agent systems and collusion, theory of alignment in RL, deception in LLMs, interpretability probes and concept extrapolation. We’re also excited about a range of other areas, including automated interpretability, scalable oversight, capability evals and AI control.
If you are interested in supervising a project and have relevant experience in the above topics, please fill in this expression of interest form for supervisors.
Timeline:
Application deadline: April 24th
Mentors finalised: on or before May 24th
Offers sent: May 31st
The application process consists of an initial application form, a technical test, and an interview.
How is this different from other programmes?
There are many similar programmes happening this summer, including MATS, PIBBSS, Pivotal Research Fellowship and ERA. We expect all of these programmes to be excellent opportunities to gain relevant skills for a technical AI safety career. LASR Labs might be an especially good option if:
- You’re excited about empirical projects
- You’re open to learning in depth about many different kinds of projects
- You want to focus on getting something published in an academic style
- You like working in a team, with an emphasis on group accountability
1 comment
comment by joehardie · 2024-04-25T10:25:01.502Z · LW(p) · GW(p)
Applications for this round are now closed! If you are interested in future rounds, you can express interest here: https://airtable.com/appbzbkQ3OwRBaojt/shruJmwbbk07e1i7y