Posts

[Job Ad] MATS is hiring! 2024-10-09T02:17:04.651Z
MATS AI Safety Strategy Curriculum v2 2024-10-07T22:44:06.396Z
MATS Alumni Impact Analysis 2024-09-30T02:35:57.273Z
Apply to MATS 7.0! 2024-09-21T00:23:49.778Z
Why I funded PIBBSS 2024-09-15T19:56:33.018Z
Talk: AI safety fieldbuilding at MATS 2024-06-23T23:06:37.623Z
Talent Needs of Technical AI Safety Teams 2024-05-24T00:36:40.486Z
MATS Winter 2023-24 Retrospective 2024-05-11T00:09:17.059Z
MATS AI Safety Strategy Curriculum 2024-03-07T19:59:37.434Z
Announcing the London Initiative for Safe AI (LISA) 2024-02-02T23:17:47.011Z
MATS Summer 2023 Retrospective 2023-12-01T23:29:47.958Z
Apply for MATS Winter 2023-24! 2023-10-21T02:27:34.350Z
[Job Ad] SERI MATS is (still) hiring for our summer program 2023-06-06T21:07:07.185Z
How MATS addresses “mass movement building” concerns 2023-05-04T00:55:26.913Z
SERI MATS - Summer 2023 Cohort 2023-04-08T15:32:56.737Z
Aspiring AI safety researchers should ~argmax over AGI timelines 2023-03-03T02:04:51.685Z
Would more model evals teams be good? 2023-02-25T22:01:31.568Z
Air-gapping evaluation and support 2022-12-26T22:52:29.881Z
Probably good projects for the AI safety ecosystem 2022-12-05T02:26:41.623Z
Ryan Kidd's Shortform 2022-10-13T19:12:47.984Z
SERI MATS Program - Winter 2022 Cohort 2022-10-08T19:09:53.231Z
Selection processes for subagents 2022-06-30T23:57:25.699Z
SERI ML Alignment Theory Scholars Program 2022 2022-04-27T00:43:38.221Z
Ensembling the greedy doctor problem 2022-04-18T19:16:00.916Z
Is Fisherian Runaway Gradient Hacking? 2022-04-10T13:47:16.454Z
Introduction to inaccessible information 2021-12-09T01:28:48.154Z

Comments

Comment by Ryan Kidd (ryankidd44) on Seeking Collaborators · 2024-11-10T03:54:02.722Z · LW · GW

@abramdemski DM me :)

Comment by Ryan Kidd (ryankidd44) on Seeking Collaborators · 2024-11-10T01:33:08.054Z · LW · GW

Can you make a Manifund.org grant application if you need funding?

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-05T03:34:50.236Z · LW · GW

I'm not sure!

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-04T02:43:11.083Z · LW · GW

We don't collect GRE/SAT scores, but we do have CodeSignal scores and (for the first time) a general aptitude test developed in collaboration with SparkWave. Many MATS applicants have maxed out scores for the CodeSignal and general aptitude tests. We might share these stats later.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-01T21:57:37.612Z · LW · GW

I don't agree with the following claims (which might misrepresent you):

  • "Skill levels" are domain agnostic.
  • Frontier oversight, control, evals, and non-"science of DL" interp research is strictly easier in practice than frontier agent foundations and "science of DL" interp research.
  • The main reason there is more funding/interest in the former category than the latter is due to skill issues, rather than worldview differences and clarity of scope.
  • MATS has mid researchers relative to other programs.
Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-01T21:43:57.998Z · LW · GW

I don't think it makes sense to compare Google intern salaries with AIS program stipends this way, as AIS programs are nonprofits (with the associated salary cut) and generally try to select against people motivated principally by money. It seems like good mechanism design to pay less than tech internships, even if the technical bar is higher, given that value alignment is best selected for by looking for "costly signals" like salary sacrifice.

I don't think the correlation for competence among AIS programs is as you describe.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-01T20:55:56.556Z · LW · GW

I think there are some confounders here:

  • PIBBSS had 12 fellows last cohort and MATS had 90 scholars. The mean/median age of MATS Summer 2024 scholars was 27; I'm not sure what this was for PIBBSS. The median age of the 12 oldest MATS scholars was 35 (mean 36). If we were selecting for age (which would be silly/illegal, of course) and had a smaller program, I would bet that MATS would be older than PIBBSS on average. MATS also had 12 scholars with completed PhDs and 11 with PhDs in progress.
  • Several PIBBSS fellows/affiliates have done MATS (e.g., Ann-Kathrin Dombrowski, Magdalena Wache, Brady Pelkey, Martín Soto).
  • I suspect that your estimation of "how smart do these people seem" might be somewhat contingent on research taste. Most MATS research projects are in prosaic AI safety fields like oversight & control, evals, and non-"science of DL" interpretability, while most PIBBSS research has been in "biology/physics-inspired" interpretability, agent foundations, and (recently) novel policy approaches (all of which MATS has supported historically).

Also, MATS is generally trying to further a different research portfolio than PIBBSS, as I discuss here, and has had substantial success in accelerating hires to AI scaling lab safety teams and research nonprofits, helping scholars found impactful AI safety organizations, and (I suspect) accelerating AISI hires.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-01T20:31:25.605Z · LW · GW

Are these PIBBSS fellows (MATS scholar analog) or PIBBSS affiliates (MATS mentor analog)?

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-01T19:58:53.495Z · LW · GW

Updated figure with LASR Labs and Pivotal Research Fellowship at current exchange rate of 1 GBP = 1.292 USD.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-01T19:46:50.831Z · LW · GW

That seems like a reasonable stipend for LASR. I don't think they cover housing, however.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-01T19:45:17.429Z · LW · GW

That said, maybe you are conceptualizing an "efficient market" that principally values impact, in which case I would expect the governance/policy programs to have higher stipends. However, I'll note that 87% of MATS alumni are interested in working at an AISI and several are currently working at UK AISI, so it seems that MATS is doing a good job of recruiting technical governance talent that is happy to work for government wages.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-01T18:55:21.872Z · LW · GW

Note that governance/policy jobs pay less than ML research/engineering jobs, so I expect GovAI, IAPS, and ERA, which are more governance focused, to have lower stipends. Also, MATS is deliberately trying to attract top CS PhD students, so our stipend should be higher than theirs, although lower than Google internship salaries to select for value alignment. I suspect that PIBBSS' stipend is an outlier and artificially low due to low funding. Given that PIBBSS has a mixture of ML and policy projects, and IMO is generally pursuing higher-variance research than MATS, I suspect their optimal stipend would be lower than MATS', but higher than a Stanford PhD's; perhaps around IAPS' rate.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-11-01T18:44:27.647Z · LW · GW

That's interesting! What evidence do you have of this? What metrics are you using?

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-10-31T20:49:29.074Z · LW · GW

MATS lowered the stipend from $50/h to $40/h ahead of the Summer 2023 Program to support more scholars. We then lowered it again to $30/h ahead of the Winter 2023-24 Program after surveying alumni and determining that 85% would accept $30/h.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-10-31T18:15:25.853Z · LW · GW

CHAI pays interns $5k/month in-person and $3.5k/month remote. I used the in-person figure. https://humancompatible.ai/jobs

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-10-31T18:02:56.833Z · LW · GW

Yes, this doesn't include those costs and programs differ in this respect.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-10-31T01:48:36.809Z · LW · GW

Hourly stipends for AI safety fellowship programs, plus some referents. The average AI safety program stipend is $26/h.

Edit: updated figure to include more programs.
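
For reference, here is a minimal Python sketch of how stipends quoted per month or in GBP could be normalized to USD/hour for a comparison like this; the 40-hour week, ~4.33 weeks/month, and the helper functions are my own illustrative assumptions, not the methodology actually used for the figure.

```python
# Minimal sketch (illustrative assumptions, not the figure's actual methodology)
# for normalizing stipends quoted per month or in GBP to USD per hour.

HOURS_PER_WEEK = 40        # assumed full-time workload
WEEKS_PER_MONTH = 52 / 12  # ~4.33 weeks per month (assumption)
GBP_TO_USD = 1.292         # exchange rate quoted in an earlier comment

def hourly_from_monthly_usd(usd_per_month: float) -> float:
    """Convert a monthly USD stipend to an hourly USD rate."""
    return usd_per_month / (WEEKS_PER_MONTH * HOURS_PER_WEEK)

def hourly_from_weekly_gbp(gbp_per_week: float) -> float:
    """Convert a weekly GBP stipend to an hourly USD rate."""
    return gbp_per_week * GBP_TO_USD / HOURS_PER_WEEK

# Example: CHAI's in-person internship at $5k/month (figure quoted above).
print(f"CHAI in-person: ~${hourly_from_monthly_usd(5000):.0f}/h")  # ~$29/h
```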

Comment by Ryan Kidd (ryankidd44) on MATS Alumni Impact Analysis · 2024-10-02T18:09:14.903Z · LW · GW

1% are "Working/interning on AI capabilities."

Erratum: this statistic was previously reported as "7%", which erroneously included two alumni who did not complete the program before Winter 2023-24 and are therefore outside the scope of this report. Additionally, two of the three pre-Winter 2023-24 alumni who selected "working/interning on AI capabilities" first completed our survey in Sep 2024 and so were not included in the data used for plots and statistics. Including those two alumni would give 3/74 = 4.1%, but this would be misleading, as several other alumni who completed the program before Winter 2023-24 also filled in the survey during or after Sep 2024.

Comment by Ryan Kidd (ryankidd44) on MATS Alumni Impact Analysis · 2024-09-30T23:33:17.311Z · LW · GW

Scholars working on safety teams at scaling labs generally selected "working/interning on AI alignment/control"; some of these also selected "working/interning on AI capabilities", as noted. We are independently researching where each alumnus ended up working, as the data is incomplete from this survey (but usually publicly available), and will share separately.

Comment by Ryan Kidd (ryankidd44) on MATS Alumni Impact Analysis · 2024-09-30T21:37:58.039Z · LW · GW

Great suggestion! We'll publish this in our next alumni impact evaluation, given that we will have longer-term data (with more scholars) soon.

Comment by Ryan Kidd (ryankidd44) on Why I funded PIBBSS · 2024-09-19T00:01:13.177Z · LW · GW

Cheers!

I think you might have implicitly assumed that my main crux here is whether or not take-off will be fast. I actually feel this is less decision-relevant for me than the other cruxes I listed, such as time-to-AGI or "sharp left turns." If take-off is fast, AI alignment/control does seem much harder and I'm honestly not sure what research is most effective; maybe attempts at reflectively stable or provable single-shot alignment seem crucial, or maybe we should just do the same stuff faster? I'm curious: what current AI safety research do you consider most impactful in fast take-off worlds?

To me, agent foundations research seems most useful in worlds where:

  • There is an AGI winter and we have time to do highly reliable agent design; or
  • We build alignment MVPs, institute a moratorium on superintelligence, and task the AIs to solve superintelligence alignment (quickly), possibly building on prior agent foundations work. In this world, existing agent foundations work helps human overseers ground and evaluate AI output.
Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-09-15T19:20:47.072Z · LW · GW

I just left a comment on PIBBSS' Manifund grant proposal (which I funded $25k) that people might find interesting.

Main points in favor of this grant

  1. My inside view is that PIBBSS mainly supports “blue sky” or “basic” research, some of which has a low chance of paying off, but might be critical in “worst case” alignment scenarios (e.g., where “alignment MVPs” don’t work, or “sharp left turns” and “intelligence explosions” are more likely than I expect). In contrast, of the technical research MATS supports, about half is basic research (e.g., interpretability, evals, agent foundations) and half is applied research (e.g., oversight + control, value alignment). I think the MATS portfolio is a better holistic strategy for furthering AI alignment. However, if one takes into account the research conducted at AI labs and supported by MATS, PIBBSS’ strategy makes a lot of sense: they are supporting a wide portfolio of blue sky research that is particularly neglected by existing institutions and might be very impactful in a range of possible “worst-case” AGI scenarios. I think this is a valid strategy in the current ecosystem/market and I support PIBBSS!
  2. In MATS’ recent post, “Talent Needs of Technical AI Safety Teams”, we detail an AI safety talent archetype we name “Connector”. Connectors bridge exploratory theory and empirical science, and sometimes instantiate new research paradigms. As we discussed in the post, finding and developing Connectors is hard, often their development time is on the order of years, and there is little demand on the AI safety job market for this role. However, Connectors can have an outsized impact on shaping the AI safety field and the few that make it are “household names” in AI safety and usually build organizations, teams, or grant infrastructure around them. I think that MATS is far from the ideal training ground for Connectors (although some do pass through!) as our program is only 10 weeks long (with an optional 4 month extension) rather than the ideal 12-24 months, we select scholars to fit established mentors’ preferences rather than on the basis of their original research ideas, and our curriculum and milestones generally focus on building object-level scientific skills rather than research ideation and “gap-identifying”. It’s thus no surprise that most MATS scholars are “Iterator” archetypes. I think there is substantial value in a program like PIBBSS existing, to support the development of “Connectors” and pursue impact in a higher-variance way than MATS.
  3. PIBBSS seems to have a decent track record of recruiting experienced academics in non-CS fields and helping them repurpose their advanced scientific skills to develop novel approaches to AI safety. Highlights for me include Adam Shai’s “computational mechanics” approach to interpretability and model cognition, Martín Soto’s “logical updatelessness” approach to decision theory, and Gabriel Weil’s “tort law” approach to making AI labs liable for their potential harms to the long-term future.
  4. I don’t know Lucas Teixeira (Research Director) very well, but I know and respect Dušan D. Nešić (Operations Director) a lot. I also highly endorsed Nora Ammann’s vision (albeit while endorsing a different vision for MATS). I see PIBBSS as a highly competent and EA-aligned organization, and I would be excited to see them grow!
  5. I think PIBBSS would benefit from funding from diverse sources, as mainstream AI safety funders have pivoted more towards applied technical research (or more governance-relevant basic research like evals). I think Manifund regrantors are well-positioned to endorse more speculative basic research, but I don’t really know how to evaluate such research myself, so I’d rather defer to experts. PIBBSS seems well-positioned to provide this expertise! I know that Nora had quite deep models of this while she was Research Director, and in talking with Dušan I have had a similar impression. I hope to talk with Lucas soon!

Donor's main reservations

  1. It seems that PIBBSS might be pivoting away from higher variance blue sky research to focus on more mainstream AI interpretability. While this might create more opportunities for funding, I think this would be a mistake. The AI safety ecosystem needs a home for “weird ideas” and PIBBSS seems the most reputable, competent, EA-aligned place for this! I encourage PIBBSS to “embrace the weird”, albeit while maintaining high academic standards for basic research, modelled off the best basic science institutions.
  2. I haven’t examined PIBBSS’ applicant selection process and I’m not entirely confident it is the best version it can be, given how hard MATS has found applicant selection and my intuitions around the difficulty of choosing a blue sky research portfolio. I strongly encourage PIBBSS to publicly post and seek feedback on their applicant selection and research prioritization processes, so that the AI safety ecosystem can offer useful insight. I would also be open to discussing these more with PIBBSS, though I expect this would be less useful.
  3. My donation is not very counterfactual here, given PIBBSS’ large budget and track record. However, there has been a trend in typical large AI safety funders away from agent foundations and interpretability, so I think my grant is still meaningful.

Process for deciding amount

I decided to donate the project’s minimum funding ($25k) so that other donors would have time to consider the project’s merits and potentially contribute. Given the large budget and track record of PIBBSS, I think my funds are less counterfactual here than for smaller, more speculative projects, so I only donated the minimum. I might donate significantly more to PIBBSS later if I can’t find better grants, or if PIBBSS is unsuccessful in fundraising.

Conflicts of interest

I don't believe there are any conflicts of interest to declare.

Comment by Ryan Kidd (ryankidd44) on Probably good projects for the AI safety ecosystem · 2024-08-06T23:07:14.310Z · LW · GW

I don't think I'd change it, but my priorities have shifted. Also, many of the projects I suggested now exist, as indicated in my comments!

Comment by Ryan Kidd (ryankidd44) on Probably good projects for the AI safety ecosystem · 2024-08-06T23:06:12.212Z · LW · GW

More contests like ELK with well-operationalized research problems (i.e., clearly explain what builder/breaker steps look like), clear metrics of success, and a well-considered target audience (who is being incentivized to apply and why?) and user journey (where do prize winners go next?).

We've seen a profusion of empirical ML hackathons and contests recently.

Comment by Ryan Kidd (ryankidd44) on Probably good projects for the AI safety ecosystem · 2024-08-06T23:05:39.298Z · LW · GW

A New York-based alignment hub that aims to provide talent search and logistical support for NYU Professor Sam Bowman’s planned AI safety research group.

Based on Bowman's comment, I no longer think this is worthwhile.

Comment by Ryan Kidd (ryankidd44) on Probably good projects for the AI safety ecosystem · 2024-08-06T23:05:09.034Z · LW · GW

Hackathons in which people with strong ML knowledge (not ML novices) write good-faith critiques of AI alignment papers and worldviews

Apart Research runs hackathons, but these are largely empirical in nature (and still valuable).

Comment by Ryan Kidd (ryankidd44) on Probably good projects for the AI safety ecosystem · 2024-08-06T23:04:34.682Z · LW · GW

A talent recruitment and onboarding organization targeting cyber security researchers

Palisade Research now exists and is running the AI Security Forum. However, I don't think Palisade is quite what I envisaged for this hiring pipeline.

Comment by Ryan Kidd (ryankidd44) on Probably good projects for the AI safety ecosystem · 2024-08-06T23:03:19.294Z · LW · GW

IAPS AI Policy Fellowship also exists now!

Comment by Ryan Kidd (ryankidd44) on Probably good projects for the AI safety ecosystem · 2024-08-06T23:02:40.153Z · LW · GW

SPAR exists! Though I don't think it can offer visas.

Comment by Ryan Kidd (ryankidd44) on Probably good projects for the AI safety ecosystem · 2024-08-06T23:01:51.229Z · LW · GW

A London-based MATS clone

This exists!

Comment by Ryan Kidd (ryankidd44) on Index of rationalist groups in the Bay Area July 2024 · 2024-07-28T18:22:34.863Z · LW · GW

This index should include Lighthaven, right?

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-07-15T22:59:09.471Z · LW · GW

I interpret your comment as assuming that new researchers with good ideas produce more impact on their own than in teams working towards a shared goal; this seems false to me. I think that independent research is usually a bad bet in general and that most new AI safety researchers should be working on relatively few impactful research directions, most of which are best pursued within a team due to the nature of the research (though some investment in other directions seems good for the portfolio).

I've addressed this a bit in thread, but here are some more thoughts:

  • New AI safety researchers seem to face mundane barriers to reducing AI catastrophic risk, including funding, infrastructure, and general executive function.
  • MATS alumni are generally doing great stuff (~78% currently work in AI safety/control, ~1.4% work on AI capabilities), but we can do even better.
  • Like any other nascent scientific/engineering discipline, AI safety will produce more impactful research with scale, albeit with some diminishing returns on impact eventually (I think we are far from the inflection point, however).
  • MATS alumni, as a large swathe of the most talented new AI safety researchers in my (possibly biased) opinion, should ideally not experience mundane barriers to reducing AI catastrophic risk.
  • Independent research seems worse than team-based research for most research that aims to reduce AI catastrophic risk:
    • "Pair-programming", builder-breaker, rubber-ducking, etc. are valuable parts of the research process and are benefited by working in a team.
    • Funding insecurity and grantwriting responsibilities are larger for independent researchers and obstruct research.
    • Orgs with larger teams and discretionary funding can take on interns to help scale projects and provide mentorship.
    • Good prosaic AI safety research largely looks more like large teams doing engineering and less like lone geniuses doing maths. Obviously, some lone genius researchers (especially on mathsy non-prosaic agendas) seem great for the portfolio too, but these people seem hard to find/train anyways (so there is often more alpha in the former by my lights). Also, I have doubts that the optimal mechanism to incentivize "lone genius research" is via small independent grants instead of large bounties and academic nerdsniping.
  • Therefore, more infrastructure and funding for MATS alumni, who are generally value-aligned and competent, is good for reducing AI catastrophic risk in expectation.
Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-07-15T19:00:28.952Z · LW · GW

Also note that historically many individuals entering AI safety seem to have been pursuing the "Connector" path, when most jobs now (and probably in the future) are "Iterator"-shaped, and larger AI safety projects are also principally bottlenecked by "Amplifiers". The historical focus on recruiting and training Connectors to the detriment of Iterators and Amplifiers has likely contributed to this relative talent shortage. A caveat: Connectors are also critical for founding new research agendas and organizations, though many self-styled Connectors would likely substantially benefit as founders by improving some Amplifier-shaped soft skills, including leadership, collaboration, networking, and fundraising.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-07-15T17:34:19.693Z · LW · GW

In theory, sure! I know @yanni kyriacos recently assessed the need for an ANZ AI safety hub, but I think he concluded there wasn't enough of a need yet?

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-07-15T17:05:55.210Z · LW · GW

@Elizabeth, Mesa nails it above. I would also add that I am conceptualizing impactful AI safety research as the product of multiple reagents, including talent, ideas, infrastructure, and funding. In my bullet point, I was pointing to an abundance of talent and ideas relative to infrastructure and funding. I'm still mostly working on talent development at MATS, but I'm also helping with infrastructure and funding (e.g., founding LISA, advising Catalyze Impact, regranting via Manifund) and I want to do much more for these limiting reagents.

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-07-14T02:14:20.134Z · LW · GW

I would amend it to say "sometimes struggles to find meaningful employment despite having the requisite talent to further impactful research directions (which I believe are plentiful)".

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-07-12T18:20:27.913Z · LW · GW

Why does the AI safety community need help founding projects?

  1. AI safety should scale
    1. Labs need external auditors for the AI control plan to work
    2. We should pursue many research bets in case superalignment/control fails
    3. Talent leaves MATS/ARENA and sometimes struggles to find meaningful work for mundane reasons, not for lack of talent or ideas
    4. Some emerging research agendas don’t have a home
    5. There are diminishing returns at scale for current AI safety teams; sometimes founding new projects is better than joining an existing team
    6. Scaling lab alignment teams are bottlenecked by management capacity, so their talent cut-off is above the level required to do “useful AIS work”
  2. Research organizations (inc. nonprofits) are often more effective than independent researchers
    1. The “block funding model” is more efficient, as researchers can spend more time researching, rather than seeking grants, managing, or performing other traditional PI duties that can be outsourced
    2. Open source/collective projects often need a central rallying point (e.g., EleutherAI, dev interp at Timaeus, selection theorems and cyborgism agendas seem too delocalized, etc.)
  3. There is (imminently) a market for for-profit AI safety companies, and value-aligned people should capture this free energy or let worse alternatives flourish
    1. If labs or API users are made legally liable for their products, they will seek out external red-teaming/auditing consultants to prove they “made a reasonable attempt” to mitigate harms
    2. If government regulations require labs to seek external auditing, there will be a market for many types of companies
    3. “Ethical AI” companies might seek out interpretability or bias/fairness consultants
  4. New AI safety organizations struggle to get funding and co-founders despite having good ideas
    1. AIS researchers are usually not experienced entrepreneurs (e.g., don't know how to write grant proposals for EA funders, pitch decks for VCs, manage/hire new team members, etc.)
    2. There are not many competent start-up founders in the EA/AIS community, and when they join, they don't know where they can help most impactfully
    3. Creating a centralized resource for entrepreneurial education/consulting and co-founder pairing would solve these problems
Comment by Ryan Kidd (ryankidd44) on Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 2024-06-28T21:31:31.584Z · LW · GW

AI that obeys the intention of a human user can be asked to help build unsafe AGI, such as by serving as a coding assistant.

I think a better example of your point is "Corrigible AI can be used by a dictator to enforce their rule."

Comment by Ryan Kidd (ryankidd44) on Talk: AI safety fieldbuilding at MATS · 2024-06-24T20:44:59.570Z · LW · GW

Yep, it was pointed out to me by @LauraVaughan (and I agree) that e.g. working for RAND or a similar government think tank is another high-impact career pathway in the "Nationalized AGI" future.

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-30T22:14:38.038Z · LW · GW

Yeah, I basically agree with this nuance. MATS really doesn't want to overanchor on CodeSignal tests or publication count in scholar selection.

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-27T04:48:13.150Z · LW · GW

I do think category theory professors or similar would be reasonable advisors for certain types of MIRI research.

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-26T22:51:25.996Z · LW · GW

Yes to all this, but also I'll go one level deeper. Even if I had tons more Manifund money to give out (and assuming all the talent needs discussed in the report are saturated with funding), it's not immediately clear to me that "giving 1-3 year stipends to high-calibre young researchers, no questions asked" is the right play if they don't have adequate mentorship, the ability to generate useful feedback loops, researcher support systems, access to frontier models if necessary, etc.

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-26T18:35:54.686Z · LW · GW

I want to sidestep critique of "more exploratory AI safety PhDs" for a moment and ask: why doesn't MIRI sponsor high-calibre young researchers with a 1-3 year basic stipend and mentorship? And why did MIRI let Vivek's team go?

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-26T03:00:23.061Z · LW · GW

We changed the title. I don't think keeping the previous title was aiding understanding at this point.

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-25T21:32:56.384Z · LW · GW

I like Adam's description of an exploratory AI safety PhD:

You'll also have an unusual degree of autonomy: You’re basically guaranteed funding and a moderately supportive environment for 3-5 years, and if you have a hands-off advisor you can work on pretty much any research topic. This is enough time to try two or more ambitious and risky agendas.

Ex ante funding guarantees from sources like The Vitalik Buterin PhD Fellowship in AI Existential Safety, Manifund, or other funders mitigate my concerns around overly steering exploratory research. Also, if one is worried about culture/priority drift, there are several AI safety offices in Berkeley, Boston, London, etc. where one could complete a PhD while surrounded by AI safety professionals (which I believe was one of the main benefits of the late Lightcone office).

Comment by Ryan Kidd (ryankidd44) on Ryan Kidd's Shortform · 2024-05-25T18:07:42.155Z · LW · GW

I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-25T18:00:56.568Z · LW · GW

I plan to respond regarding MATS' future priorities when I'm able (I can't speak on behalf of MATS alone here, and we are currently examining priorities in the lead-up to our Winter 2024-25 Program), but in the meantime I've added some requests for proposals to my Manifund Regrantor profile.

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-25T16:51:57.245Z · LW · GW

An interesting note: I don't necessarily want to start a debate about the merits of academia, but "fund a smart motivated youngster without a plan for 3 years with little evaluation" sounds a lot like "fund more exploratory AI safety PhDs" to me. If anyone wants to do an AI safety PhD (e.g., with these supervisors) and needs funding, I'm happy to evaluate these with my Manifund Regrantor hat on.

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-25T14:50:03.470Z · LW · GW

I can understand if some people are confused by the title, but we do say "the talent needs of safety teams" in the first sentence. Granted, this doesn't explicitly reference "funding opportunities" too, but it does make it clear that it is the (unfulfilled) needs of existing safety teams that we are principally referring to.

Comment by Ryan Kidd (ryankidd44) on Talent Needs of Technical AI Safety Teams · 2024-05-25T00:39:04.151Z · LW · GW

As a concrete proposal, if anyone wants to reboot Refine or similar, I'd be interested to consider that while wearing my Manifund Regrantor hat.