College technical AI safety hackathon retrospective - Georgia Tech

post by yix (Yixiong Hao) · 2024-11-15T00:22:53.159Z · LW · GW · 2 comments

This is a link post for https://open.substack.com/pub/yixiong/p/campus-technical-ai-safety-hackathon

Contents

      TLDR: the AI safety initiative at Georgia Tech recently hosted an AI safety focused track at the college's flagship AI hackathon. In this post I share how it went and some of our thoughts.
  Overview
    Quick stats:
    Relevant track details:
    Execution
    Our opinion/takes
    What’s next?
None
2 comments

TLDR: the AI safety initiative at Georgia Tech recently hosted an AI safety focused track at the college's flagship AI hackathon. In this post I share how it went and some of our thoughts.

Overview

Hey!  I’m Yixiong, co-director of Georgia Tech’s AI safety student group.  We recently hosted an AI safety focused track at Georgia Tech’s biggest AI hackathon, AI ATL.  I’m writing this retrospective because I think this could be a useful data point to update on for fellow AIS groups thinking about hosting similar things!

The track was focused on evaluations on safety-critical and interesting capabilities, this is the track page that was shown to hackers (feel free to reuse/borrow content, just let us know!)

Huge thank you (in no particular order) to Michael Chen, Long Phan, Andrey Anurin, Abdur Raheem, Esben Kran, Zac Hatfield-Dodds, Aaron Begg, Alex Albert, Oliver Zhang, and others who helped us make this happen!

 

Quick stats:

Relevant track details:

We tried to optimize the track in a bunch of ways, including but not limited to:

Execution

As with all events, execution matters a lot.  This is the area that we felt could be improved the most.

Our opinion/takes

What’s next?

I think our attempt serves as a successful proof of concept for bringing the topics of AI safety/alignment hackathons to campuses.  People will engage with the topic if you try really hard.  Don’t hesitate to reach out for help if you’re thinking of something similar and want to learn more about what we did!

Two things we might do in the future:

Thanks for reading and I hope it wasn’t too noisy!

Yixiong & the Georgia Tech AISI team.

2 comments

Comments sorted by top scores.

comment by Esben Kran (esben-kran) · 2024-11-16T03:39:41.449Z · LW(p) · GW(p)

Super cool work Yixiong - we were impressed by your professionalism in this process despite working within another group's whims on this one. Some other observations from our side that may be relevant for other folks hosting hackathons:
- Prepare starter materials: For example, for some of our early interpretability hackathons, we built a full resource base (Github) with videos, Colabs, and much more (some of it with Neel Nanda, big appreciation for his efforts in making interp more available). Our philosophy for the starter materials are: "If a participant can make a submission-worthy project by maximum cloning your repo and typing two commands or simply walk through a Google Colab, this is the ideal starter code." This means that with only small adjustments, they'll be able to make an original project. We rarely if ever see this exploited, i.e. "template code as submission" because they're able to copy-paste things around for a really strong research project.
- Make sure what they should submit is super clear: Making a really nice template goes a long way to make a submission super clear for participants. An example can be seen in our MASEC hackathon: Docs and page. If someone can just receive your submission template and know everything they need to know to submit a great project, that is really good since they'll be spending most of their time inside of that document.
- Make sure judging criteria are really good: People will use your judging criteria to determine what to prioritize in their project. This is extremely valuable for you to get right. For example, we usually use a variation on the three criteria: 1) Topic advancement, 2) AI safety impact, and 3) quality / reproducibility. A recent example was the Agent Security Hackathon:

> 1. Agent safety: Does the project move the field of agent safety forward? After reading this, do we know more about how to detect dangerous agents, protect against dangerous agents, or build safer agents than before?
> 2. AI safety: Does the project solve a concrete problem in AI safety? If this project is fully realized, would we expect the world with superintelligence to be a safer (even marginally) than yesterday?
> 3. Methodology: Is the project well-executed and is the code available so we can review it? Do we expect the results to generalize beyond the specific case(s) presented in the submission?

- Make the resources and ideas available early: As Yixiong mentions, it's really valuable for people not to be confused. If they know exactly what report format they'll submit, which idea they'll work on, and who they'll work with, this is a great way to ensure that the 2-3 days of hacking are an incredibly efficient use of their time.
- Matching people by ideas trumps by background: We've tried various ways to match individuals who don't have teams. The absolute best system we've found is to get people to brainstorm before the hackathon, share their ideas, and organize teams online. We also host team matching sessions which consist of fun-fact-intros and otherwise just discusses specific research ideas.
- Don't make it longer than a weekend: If you host a hackathon and make it longer than a weekend, most people who cannot attend outside that weekend will avoid participating because they'll feel that the ones who can participate more than the weekend can spend their weekdays to win the grand prize. Additionally, a very counter-intuitive thing happens where if you give people three weeks, they'll actually spend much less time on it than if you just give them a weekend. This can depend on the prizes or outcome rewards, of course, but is a really predictable effect, in our experience.
- Don't make it shorter than two days: Depending on your goal, one day will never be enough to create an original project. Our aims are original pilot research papers that can stand on their own and the few one-day events we've hosted have never worked very well, except for brainstorming. Often, participants won't even have any functional code or any ideas on the Sunday morning of the event but by the submission deadline have a really high quality project that wins the top prize. This seems to happen due to this very concrete exploration of ideas that happens in the IDE and on the internet where some are discarded and nothing promising comes up before 11am Sunday.

And as Yixiong mentions, we have more resources on this along with an official chapter network (besides volunteer locations) at https://www.apartresearch.com/sprints/locations. You're welcome to get in touch if you're interested in hosting at sprints@apartresearch.com

COI: One of our researchers hosted a cyber-evals workshop at Yixiong's AI safety track.

Replies from: Yixiong Hao
comment by yix (Yixiong Hao) · 2024-11-16T03:56:56.702Z · LW(p) · GW(p)

Thanks again Esben for collaborating with us!  Can confidently say that the above is super valuable advice for any AI safety hackathon organizers, they're consistent with our experiences.

In the context of a college campus hackathon, I'd especially stress focus on preparing starter materials and making submission requirements clear early on!