What I learned at the AI Safety Europe Retreat

post by skaisg · 2023-04-17T17:40:22.276Z · LW · GW · 0 comments

This is a link post for https://skaisg.eu/what-i-learned-at-aiser/

Contents

  My most important takeaways
    On contributing to technical AIS as a newcomer
    On employment in AI Safety
    On technical writing
    On the EU AI Act
  Overall Impression

From 30 March to 2 April 2023, I attended the Artificial Intelligence Safety Europe Retreat (AISER) in Berlin, Germany. There were around 70 attendees from all over Europe. Most were actively working on technical AI Safety (e.g. SERI MATS scholars and independent researchers with grants), some were focusing on AIS strategy and governance, and some newcomers and students (like me) were looking to learn more about the field and career opportunities. For a short overview of the retreat, have a look here [LW · GW].

This post is targeted at people who couldn't make it to AISER and/or are considering attending a similar event. Concretely, I will share the takeaways I gathered from the talks and one-on-ones, and conclude with some personal (and thus very subjective) impressions of the event. I will not mention specific names, as the conversations and talks were not given with the intention of appearing in public. Still, I think many of the ideas exchanged at AISER are worth sharing with a wider audience. I put my personal spin on them as I present them in this post, so all critique should be directed towards me.

Some background about me may help put this post into (critical) perspective. For a general overview of my experiences and skills, please consult my about page. I wrote my undergraduate thesis on robustness and formal verification of AI, but I wasn't aware of the AI alignment movement at the time. Before joining the AI Safety Europe Retreat, I had only just completed the AGI Safety Fundamentals curriculum with a reading group at my university; otherwise I had no experience in the field. I'm motivated to work on AI Safety (AIS) because people seem to trust AI too much, and I want to make AI actually worthy of that trust. Another appeal of AIS is its interdisciplinary approach: it draws on philosophy, mathematics, computer science, and everything in between - fields I find very cool and interesting in their own right.

My most important takeaways

Read the bold text if you only care about the main points; the text that follows each one expands on the idea.

On contributing to technical AIS as a newcomer

On employment in AI Safety

On technical writing

On the EU AI Act

Overall Impression

The event was well planned: food, accommodation, and space for hang-outs and talks were all taken care of, so attendees could focus on the people and ideas instead of any major event-related problems. The retreat took place on the outskirts of Berlin, so the atmosphere for one-on-one walks was great. There was (in hindsight, unsurprisingly) a strong EA vibe at the event, both in the structure of the retreat and in people's opinions and personal philosophies. I learned a lot from this getaway, met a lot of new people whose motivation rubbed off on me, and the European AIS scene started feeling more like a community. For me personally, there was only one drawback (not the fault of the event itself) which left me feeling exhausted afterwards.

As a newcomer (indeed, I seemed to be one of the attendees with the least exposure to the rationalist [? · GW] AI safety community, if not the least), I was overwhelmed by the amount of new information. I had felt similarly before when working on projects in areas I had no previous exposure to, but the technical AI safety community is something else. It seems everyone is coming up with new research agendas and is encouraged to attack the problem in their own way. I think this is an interesting and worthy approach to solving alignment; however, it creates waves of new terminology and raises questions about how much potential these concepts actually have.

I think the feeling of overwhelm primarily came about because I wasn't aware of the nuances of the different methods, didn't have a good overview of the field, and had no clue how the mentioned approaches fit into the bigger picture. Whenever I heard someone talk passionately about some technical approach, my first inner reaction was usually “this is a well-established approach, I should really look into it” (mostly because that is what I am used to from academia), which breeds a sense of urgency and a feeling of not being up to date with well-known research. Whenever I do look into the mentioned approaches, however, it often turns out that someone came up with the method at most three years ago in a blog post or research agenda that hasn't yet had the chance to be thoroughly investigated. Of course, this by no means undermines the value of the approach in question. The reaction I described may not be all that different from what happens in research-heavy academic environments. Even though I am aware that AIS labs exist at universities and well-established companies (as can be seen here), I still seem to have a bias that AIS research primarily comes from independent thinkers whose ideas are more questionable. Perhaps this is partly because a lot of the research circulates via blog posts instead of peer-reviewed papers.

Clearly, the integrity of an idea does not depend on the affiliation of the researcher or the medium through which it is shared, but I must invest some time to overcome this personal mental bias. The exposure to terminology is simply very different from what I'm used to, so I will have to change my mindset when encountering such terms in the field of AI safety. That is, I must take a more sceptical as well as honest view, and be open to red-teaming an idea instead of taking it at face value. Doing so may improve my overall research skills as well.

Something else I wasn't aware of previously is that this community really doesn't like anyone working on AI capabilities (that is, regular AI research and industry applications, e.g. creating new architectures or training models for novel tasks). You can read about the main reasons here [LW · GW] (some of the points in that post are quite controversial and not everyone in the AIS community seems to fully agree with them, as can be seen in the comments, but it's one of the best one-stop links in my opinion). I suppose the dislike is not nearly as strong for your run-of-the-mill AI startup as it is for the huge companies with more resources. However, non-AIS work is still seen as distracting from the most important problem at hand, and thus some distaste is still present.

In conclusion, I would recommend attending such an event if you are interested in figuring out whether AI Safety is for you. It also benefits anyone looking for connections and collaborators, or simply wanting to get up to speed with what is going on in AIS. Be prepared for some intense philosophical and technical conversations - they can be both very rewarding and very draining.
 
