AI Safety Camp 2024
post by Linda Linsefors · 2023-11-18T10:37:02.183Z
This is a link post for https://aisafety.camp/
AI Safety Camp connects you with a research lead to collaborate on a project – to see where your work could help ensure future AI is safe.
Apply before December 1, to collaborate online from January to April 2024.
We value diverse backgrounds. Many roles, though definitely not all, require some knowledge of one of: AI safety, mathematics, or machine learning.
Some skills requested by various projects:
- Art, design, photography
- Humanistic academics
- Communication
- Marketing/PR
- Legal expertise
- Project management
- Interpretability methods
- Using LLMs
- Coding
- Math
- Economics
- Cybersecurity
- Reading scientific papers
- Knowledge of scientific methodologies
- Independent thinking and work
- Familiarity with the AI risk research landscape
Projects
To not build uncontrollable AI
Projects to restrict corporations from recklessly scaling the training and use of ML models, given the limits on controllability.
1. Towards realistic ODDs for foundation model based AI offerings
2. Luddite Pro: information for the refined luddite
3. Lawyers (and coders) for restricting AI data laundering
4. Assessing the potential of congressional messaging campaigns for AIS
Everything else
Diverse other projects, including technical control of AGI in line with human values.
Mech-Interp
5. Modelling trajectories of language models
6. Towards ambitious mechanistic interpretability
7. Exploring toy models of agents
8. High-level mechanistic interpretability and activation engineering library
9. Out-of-context learning interpretability
10. Understanding search and goal representations in transformers
Evaluating and Steering Models
11. Benchmarks for stable reflectivity
12. SADDER: situational awareness datasets for detecting extreme risks
13. TinyEvals: how do language models speak coherent English?
14. Evaluating alignment evaluations
15. Pipelines for evaluating and steering LLMs towards faithful reasoning
16. Steering of LLMs through addition of activation vectors with latent ethical valence
Agent Foundations
17. High actuation spaces
18. Does sufficient optimization imply agent structure?
19. Discovering agents in raw bytestreams
20. The science algorithm
Miscellaneous Alignment Methods
21. SatisfIA – AI that satisfies without overdoing it
22. How promising is automating alignment research? (literature review)
23. Personalized fine-tuning token for AI value alignment
24. Self-other overlap @ AE Studio
25. Asymmetric control in LLMs: model editing and steering that resists control for unalignment
26. Tackling key challenges in Debate
Other
27. AI-driven economic safety nets: restricting the macroeconomic disruptions of AGI deployment
28. Policy-based access to powerful models
29. Organise the next Virtual AI Safety Unconference
Please write your application with the research lead of your favorite project in mind. Research leads will directly review applications this round. We organizers will only assist when a project receives an overwhelming number of applications.
Apply if you…
- want to consider and try out roles for helping ensure future AI function safely;
- are able to explain why and how you would contribute to one or more projects;
- previously studied a topic or trained in skills that can bolster your new team’s progress;
- can join weekly team calls and block out 5 hours of work each week from January to April 2024.
Timeline
Applications
By 1 Dec: Apply. Fill in the questions doc and submit it through the form.
Dec 1-22: Interviews. You may receive an interview email from one or more of the research leads whose projects you applied to.
By 28 Dec: Final decisions. You will definitely know if you are admitted. Hopefully we can tell you sooner, but we pinky-swear we will by 28 Dec.
Program
Jan 13-14: Opening weekend. First meeting with your teammates and one-on-one chats.
Jan 15 – Apr 28: Research is happening. Teams meet weekly, and plan in their own work hours.
April 25-28: Final presentations spread over four days.
Afterwards
For as long as you want: Some teams keep working together after the official end of AISC.
When you start the project, we recommend that you don’t make any commitment beyond the official length of the program. However if you find that you work well together as a team, we encourage you to keep going even after AISC is officially over.
[Image: First virtual edition – a spontaneous collage]
Team structure
Every team will have:
- one Research Lead (RL)
- one Team Coordinator (TC)
- other team members
All team members are expected to work at least 5 hours per week on the project (this number can be higher for specific projects), which includes joining weekly team meetings, and communicating regularly with other team members about their work.
Research Lead (RL)
The RL is the person behind the research proposal. They will guide the research project, and keep track of relevant milestones. When things inevitably don’t go as planned (this is research after all) the RL is in charge of setting the new course.
The RL is part of the research team and will be contributing to research the same as everyone else on the team.
Team Coordinator (TC)
The TC is the ops person of the team. If you are the TC, you are in charge of making sure meetings are scheduled, checking in with individuals on their task progress, etc.
The role of the TC is important but not expected to take too much time (except for project management-heavy teams). Most of the time, the TC will act like a regular team member contributing to the research, same as everyone else on the team.
Each project proposal states whether the team is looking for someone like you to take on this role.
Other team members
Other team members will work on the project under the guidance of the RL and the TC. Team members will be selected based on relevant skills, understanding, and commitment to contribute to the research project.
Questions?
Check out our frequently asked questions to see if the answer is there.
- For questions on a project, please contact the research lead. Find their contact info at the bottom of their project doc.
- For questions about the camp in general, or if you can’t reach the specific research lead, please email contact@aisafety.camp.
It may take up to 5 days for organizers to reply.
We are fundraising!
Organizers are volunteering this round, since we had to freeze our salaries. This is not sustainable. To make next editions happen, consider making a donation. For larger amounts, feel free to email Remmelt.
1 comment
comment by NickyP (Nicky) · 2023-11-27T22:39:57.351Z
In case anyone finds it difficult to go through all the projects, I have made a longer post where each project title is followed by a brief description, and a list of the main skills/roles they are looking for.
See here: https://www.lesswrong.com/posts/npkvZG67hRvBneoQ9