Announcing the London Initiative for Safe AI (LISA)

post by James Fox, mike_safeAI, Ryan Kidd (ryankidd44) · 2024-02-02

Contents

  Introduction
  The Urgency
  Our Vision
  Our Track Record
  Our Plans
  Risks and Mitigations
    The supported individual researchers & organisations might underperform expectations.
    We might not be able to attract and retain top potential AI safety researchers.
    Given the UK Government’s AI Safety Institute and other initiatives, we might be redundant.

The LISA Team consists of James Fox, Mike Brozowski, Joe Murray, Nina Wolff-Ingham, Ryan Kidd, and Christian Smith.

LISA’s Advisory Board consists of Henry Sleight, Jessica Rumbelow, Marius Hobbhahn, Jamie Bernardi, and Callum McDougall.

Everyone has contributed significantly to the founding of LISA, believes in its mission & vision, and assisted with writing this post. 

TL;DR: The London Initiative for Safe AI (LISA) is a new AI safety research centre. Our mission is to improve the safety of advanced AI systems by supporting and empowering individual researchers and small organisations. We opened in September 2023, and our office space currently hosts several research organisations and upskilling programmes, including Apollo Research, Leap Labs, the MATS extension, ARENA, and BlueDot Impact, as well as many individual and externally affiliated researchers.

LISA is open to different types of membership applications from other AI safety researchers and organisations. 

Although we host a limited number of short-term visitors for free, we charge long-term residents at varying rates, depending on their circumstances, to cover our costs. Nevertheless, we never want financial constraints to be a barrier to leading AI safety research, so please still get in touch if you would like to work from LISA’s offices but aren't able to pay.

If you or your organisation are interested in working from LISA, please apply here.

If you would like to support our mission, please visit our Manifund page.

Read on for further details about LISA’s vision and theory of change. After a short introduction, we motivate our vision by arguing why LISA is urgently needed. Next, we summarise our track record and unpack our plans for the future. Finally, we discuss how we mitigate risks that might undermine our theory of change.

Introduction

London stands out as an ideal location for a new AI safety research centre:

Despite this favourable setting, little investment has so far been made in community infrastructure. Our mission, therefore, is to build a home for leading AI safety research in London by incubating individual AI safety researchers and small organisations. To achieve this, LISA will:

LISA is uniquely positioned to enact this vision. In 2023, we founded an office-space ecosystem that now hosts organisations such as Apollo Research, Leap Labs, MATS, ARENA, and BlueDot Impact, alongside many individual and externally affiliated researchers. We are poised to capitalise on the abundance of motivated and competent talent in London, as well as on the supportive environment provided by the UK government and other local organisations. Our approach is not just about creating a space for research; it is about building a community and a movement that can significantly improve the safety of advanced AI systems.

The Urgency

The AI safety researcher pipeline.

This figure illustrates the progression from motivated and talented individuals to researchers producing high-impact AI safety work. LISA's activities are designed to address the critical bottleneck between Phase 3 and Phase 4, which is difficult for talented individuals to overcome on their own.

Our Vision

Our mission is to be a professional research centre that improves the safety of advanced AI systems by supporting and empowering individual researchers and small organisations. We do this by creating a supportive, collaborative, and dynamic research environment that hosts members pioneering a diversity of AI safety research.

In the next two years, LISA’s vision is to:

  1. Be a premier AI safety research centre that has housed significant contributions to AI safety research thanks to its collaborative ecosystem of small organisations and individual researchers.
  2. Have supported the maturation of member organisations by increasing their research productivity, impact, and recognition.
  3. Have positively influenced the career trajectories of LISA alumni, who will have transitioned into key positions in AI safety across industry, academia, and government as these opportunities emerge over time. Some of these alumni would otherwise have pursued careers outside AI safety. Alumni will maintain links with LISA and its ecosystem, e.g., through research collaborations, mentoring, speaking, and career events.
  4. Have advanced a diversity of existing AI safety research agendas and uncovered novel ones that significantly improve our understanding of how and why advanced AI systems work, or our ability to control and align them.
  5. Have nurtured new AI safety talent and organisations by serving as a welcoming entry point for motivated people joining the field, positioning LISA as a launchpad for future leaders in AI safety research and for new, impactful AI safety organisations.

Our Track Record

We have been open since September 2023. In that time:

Our Plans

We will focus on activities to yield four outputs:

  1. Provide a supportive, productive, and collaborative research environment: an office space that is a “melting pot” of epistemically diverse AI safety researchers working on collaborative research projects, with comprehensive operational and research support as well as amenities such as workstations, meeting rooms & phone booths, and catering (including snacks & drinks).
  2. Offer financial stability, collective recognition, and accountability to individual researchers and new small organisations by subsidising office and operations overheads, providing fiscal sponsorship of new AI safety organisations, offering legal & immigration support, and granting annual LISA Research Fellowships to support and develop individuals who have already shown evidence of high-impact AI safety research (as part of the MATS Program, Astra Fellowship, a PhD and/or postdoc, or otherwise).
  3. Cultivate a leading centre for AI safety research in London by admitting new member organisations and LISA Residents through a rigorous selection process (drawing on the advisory board) that assesses alignment with LISA’s mission, existing research competence, and cultural fit. We will host prominent AI safety researchers as speakers, hold workshops and other professional AI safety events, and strengthen our partnerships with similar centres in the US (e.g., Constellation, FAR AI, and CHAI), the UK (e.g., Trajan House), and likely new centres elsewhere, as well as with the UK Government’s AI Safety Institute and AI safety researchers in industry.
  4. Foster epistemic quality and diversity amongst new AI safety researchers & organisations by seasonally hosting established, proven, domain-specific mentorship and upskilling programmes such as MATS and ARENA.

These outputs advance our theory of change:

A causal graph showing LISA's theory of change.


Risks and Mitigations

The supported individual researchers & organisations might underperform expectations.

 Mitigations:

We might not be able to attract and retain top potential AI safety researchers.

 Mitigations:

Given the UK Government’s AI Safety Institute and other initiatives, we might be redundant.

This is a misconception. The AI Safety Institute (AISI) has a very different focus: evaluating the impact and risks of frontier models. LISA, by contrast, is a place where fundamental research can happen. We also house organisations such as Apollo Research, a partner of AISI, so the relationship is complementary and collaborative. As for other initiatives, we think that the existence of more safety institutes is good for AI safety, and that it is good for individuals to have a range of options to choose from.
