Comments sorted by top scores.
comment by porby · 2023-05-27T17:39:43.501Z · LW(p) · GW(p)
This is a project I'd like to see succeed!
For what it's worth, I talked to Alexandra around EAG London a couple of times (I'm Ross, hi again!) and I think she has a good handle on important coordination problems. I encourage people to apply.
comment by carboniferous_umbraculum (Spencer Becker-Kahn) · 2023-06-07T12:26:48.518Z · LW(p) · GW(p)
How exactly can an org like this help solve the issue of mentorship, which many people see as one of the main bottlenecks? How would Catalyze actually tip the scales when it comes to 'mentor matching'?
(e.g. see Richard Ngo's first high-level point in this career advice post [LW · GW])
comment by MSRayne · 2023-06-03T11:49:16.839Z · LW(p) · GW(p)
I've never had a job in my life - yes really, I've had a rather strange life so far, it's complicated - but I've been reading and thinking about topics which I now know are related to operations for years, trying to design (in my head...) a system for distributing the work of managing a complex organization across a totally decentralized group so that no one is in charge, with the aid of AI and a social-media-esque interface. (I've never actually made the thing, because I keep finding new things I need to know, and I'm not a software engineer, just a designer.)
So, I think I have some parts of the requisite skillset here, and a ton of intuition about how to run systems efficiently built up from all the independent studying I've done - but absolutely no prior experience with basically anything in reality, except happening to (I believe) have the right personality for operations work. Should I bother applying?
Replies from: AlexandraB
↑ comment by Alexandra Bos (AlexandraB) · 2023-06-07T12:11:02.061Z · LW(p) · GW(p)
Hi, I'd encourage you to apply if you recognize yourself in the 'About you' section!
"When in doubt, always apply" is my personal motto.
comment by jacquesthibs (jacques-thibodeau) · 2023-09-21T17:23:20.649Z · LW(p) · GW(p)
I’m curious to know if Catalyze Impact is moving forward, is on hold, or has been shut down.
Replies from: AlexandraB
↑ comment by Alexandra Bos (AlexandraB) · 2023-09-22T12:47:56.335Z · LW(p) · GW(p)
Hi, thanks for asking! We're moving forward: we got funding from Lightspeed and plan to run our pilot in Q4 of this year. You can subscribe at the bottom of catalyze-impact.org if you want to stay in the loop about sign-ups and updates.
comment by Evan R. Murphy · 2023-05-30T00:16:28.543Z · LW(p) · GW(p)
A couple of quick thoughts:
- Very glad to see someone trying to provide more infrastructure and support for independent technical alignment researchers. Wishing you great success and looking forward to hearing how your project develops.
- A lot of promising alignment research directions now seem to require access to cutting-edge models. A couple of ways you might deal with this could be:
  - Partner with AI labs to help get your researchers access to their models
  - Or focus on some of the few research directions, such as mechanistic interpretability, that still seem to be making useful progress on smaller, more accessible models
↑ comment by Alexandra Bos (AlexandraB) · 2023-06-05T11:10:18.295Z · LW(p) · GW(p)
I'd be curious to hear from the people who pressed the disagreement button on Evan's remark: what part of this do you disagree with or not recognize?
Replies from: thomas-kwa
↑ comment by Thomas Kwa (thomas-kwa) · 2023-06-05T11:22:42.266Z · LW(p) · GW(p)
I didn't hit disagree, but IMO there are way more than "few research directions" that can be accessed without cutting-edge models, especially with all the new open-source LLMs.
- All conceptual work: agent foundations, mechanistic anomaly detection, etc.
- Mechanistic interpretability, which when interpreted broadly could be 40% of empirical alignment work
- Model control like the nascent area of activation additions [LW · GW]
I've heard that evals, debate, prosaic work on honesty, and various other schemes need cutting-edge models, but in the past few weeks, as I've been transitioning from mostly conceptual work into empirical work, I have far more questions than I have time to answer using GPT-2- or AlphaStar-sized models. If alignment is hard, we'll want to understand the small models first.
Replies from: Evan R. Murphy, Evan R. Murphy
↑ comment by Evan R. Murphy · 2023-06-05T20:53:04.007Z · LW(p) · GW(p)
I wasn't saying that there are only a few research directions that don't require frontier models, period; just that there are only a few that don't require frontier models and still seem relevant/promising, at least assuming short timelines to AGI.
I am skeptical that agent foundations is still very promising or relevant in the present situation. I wouldn't want to shut down someone's research in this area if they were particularly passionate about it or considered themselves on the cusp of an important breakthrough. But I'm not sure it's wise to be spending scarce incubator resources to funnel new researchers into agent foundations research at this stage.
Good points about mechanistic anomaly detection and activation additions though! (And mechanistic interpretability, but I mentioned that in my previous comment.) I need to read up more on activation additions.
↑ comment by Evan R. Murphy · 2023-06-05T20:47:33.752Z · LW(p) · GW(p)
↑ comment by Alexandra Bos (AlexandraB) · 2023-06-02T21:27:44.153Z · LW(p) · GW(p)
I was thinking about helping with infrastructure around access to large amounts of compute, but had not considered trying to help with access to cutting-edge models. I think it might be a very good suggestion. Thanks for sharing your thoughts!
comment by Martin Vlach (martin-vlach) · 2023-06-09T18:00:10.862Z · LW(p) · GW(p)
The website seems good, but the buttons on the 'sharing' circle on the bottom need fixing.
comment by Dennis Akar (British_Potato) · 2023-06-04T12:14:27.168Z · LW(p) · GW(p)
Yay, it's back up again.