December 2022 updates and fundraising
post by AI Impacts · 2022-12-22T17:20:05.382Z · LW · GW · 1 comment
Harlan Stewart and Katja Grace*, 22 December, 2022
News
New Hires and role changes
In 2022, the AI Impacts team has grown from two to seven full-time staff. Out of more than 250 applicants, we hired Elizabeth Santos as Operations Lead, Harlan Stewart as Research Assistant, and three Research Analysts: Zach Stein-Perlman, Aysja Johnson, and Jeffrey Heninger (whose hiring is still being finalized). We’re excited to have them all, and you can learn more about them on our about page.
Rick and Katja have traded some responsibilities: Rick is now Director of AI Impacts, and Katja is Lead Researcher. This means Rick is generally in charge of making decisions about running the org, though Katja has veto power. Katja is responsible for doing research, as well as directing and overseeing it.
Summer Internship Program
We ran a summer internship program: between May and September, six interns worked on research projects on topics including international coordination, explanations of historical human success, case studies in risk mitigation, R&D funding in AI, our new survey of machine learning researchers, current AI capabilities, technologies that are strategically relevant to AI, and the scale of machine learning models.
AI Impacts Wiki
We intend to replace our pages with an AI Impacts Wiki. Our pages have always functioned something like a wiki, so the new format should make it clearer how to interact with them (as distinct from our blog posts), as well as easier for readers to navigate and for researchers to update. The AI Impacts Wiki will launch soon and can be previewed here. We’ll say more about other minor changes when we launch it, but AI Impacts’ past and future public research will be either detailed on the wiki or findable through it. You can let us know what you think using our feedback form or in the comments on this blog post.
Research
Finished this year
This year, our main new pages and research-heavy blog posts are:
- A survey of 738 machine learning experts about progress in AI, a rerun of the survey AI Impacts conducted in 2016, along with a blog post on its tentative conclusions (Katja and Zach, in collaboration with Ben Weinstein-Raun)
- Detailed arguments answering the question, ‘Will Superhuman AI be created?’ with a tentative ‘yes’ (Katja)
- Review of US public opinion surveys on AI (Zach)
- A database of inducement prizes (Elizabeth)
- A literature review of notable cognitive abilities of honeybees (Aysja)
- An analysis of discontinuities in historical trends in manned altitude (Jeffrey)
- A list of counterarguments to the basic AI x-risk case (Katja)
- A list of possible incentives to create AI that is known to pose extinction risks (Katja)
- Lists of sources arguing for and against existential risk from AI (Katja)
AI Impacts is in large part a set of pages intended to be updated over time, so our research does not necessarily show up as new pages, and is generally harder to measure than at more standard research institutions. Still, the pages and posts above probably represent most of our finished research output this year.
In progress
Things people are working on lately:
- Noteworthy capabilities and limitations of state-of-the-art AI (Zach, Harlan)
- A case study of Alexander Fleming’s efforts to warn the world about antibiotic resistance (Harlan)
- A literature review of notable cognitive abilities of ants (Aysja)
- Review and analysis of AI forecasting methods (Zach)
- Case studies of actors deciding not to pursue technologies, despite apparent incentives to do so (Jeffrey, Aysja)
- Strategically significant narrow AI capabilities (Zach)
- The implications of the Fermi paradox and anthropics for AI (Zach)
- What computational irreducibility suggests about domains where powerful AI should not be able to strongly outperform humans (Jeffrey, Aysja)
- Evidence about how uniform the brain’s cortex is (Aysja)
- Minor additions to Discontinuous Progress Investigation, a project looking for historical examples of discontinuous progress in technological trends
- Whether interventions to slow down AI progress should be considered more seriously (Katja)
- Arguments for AI being an existential risk (Katja)
- A paper about the survey (Zach)
- Finishing up of various summer internship projects mentioned earlier (interns)
- (Probably some other things)
Funding
Thank you to our recent funders, including Jaan Tallinn, who just gave us a $546k grant through the Survival and Flourishing Fund, and Open Philanthropy, which recently supported us for three months with a grant of $364,893.
We expected to receive a grant from the FTX Future Fund to cover running the 2022 Expert Survey on Progress in AI, but didn’t receive the money due to FTX’s collapse. If anyone wants the funders’ share of moral credit for paying our survey participants in particular, at a cost of around $30k for the whole thing, please get in touch! (The ex-Future Fund team still deserves credit for substantial encouragement in making the survey happen—thank you to them!)
Request for more funding
We would love to be funded more!
We currently have around $662k, which is about 5-8 months of runway depending on our frugality (spending more might look like, e.g., an additional researcher, a 2023 internship program, freer budgets for travel and training, and salaries better matched to the job market). We are looking for another $395k-$895k to cover 2023, and would also ideally like to extend our runway.
If you might be interested in this, and want to hear more about why we think our work is important enough to fund, Katja’s blog post “Why work at AI Impacts?” outlines some of our reasons. If you want to talk to us about why we should be funded, or to hear more details about what we would do with the money, please write to Elizabeth, Rick, or Katja at [firstname]@aiimpacts.org.
If you’d like to donate to AI Impacts, you can do so here. (And we thank you!)
*Harlan did much of the work; Katja put it up without Harlan seeing its final state, so she is responsible for any errors.
1 comment
comment by Austin Chen (austin-chen) · 2022-12-23T01:17:06.156Z · LW(p) · GW(p)
Thanks for writing this up! I've just added AI Impacts to Manifold's charity list, so you can now donate your mana there too :)
I find the move from "website" to "wiki" very interesting. We've been exploring something similar for Manifold's Help & About pages. Right now, they're backed by an internal Notion wiki and proxied via super.so, but our pages are kind of clunky; plus we'd like to open it up to allow our power users to contribute. We've been exploring existing wiki solutions (looks like AI Impacts is on DokuWiki?) but it feels like most public wiki software was designed 10+ years ago, whereas modern software like Notion is generally targeted for the internal use case. I would also note that LessWrong seems to have moved away from having an internal wiki, too. There's some chance Manifold ends up building an in-house solution for this, on top of our existing editor...