[Link] Eric Schmidt's new AI2050 Fund
post by Aryeh Englander (alenglander) · 2022-02-16T21:21:28.525Z · LW · GW · 3 comments
This is a linkpost for https://www.schmidtfutures.com/schmidt-futures-launches-ai2050-to-protect-our-human-future-in-the-age-of-artificial-intelligence/
I am posting this here as it may be of interest to some members.
Schmidt Futures Launches AI2050 to Protect Our Human Future in the Age of Artificial Intelligence
$125 million, five-year commitment by Eric and Wendy Schmidt will support leading researchers in artificial intelligence making a positive impact
New York — Today, Schmidt Futures announced the launch of “AI2050,” an initiative that will support exceptional people working on key opportunities and hard problems that are critical to get right for society to benefit from AI. Eric and Wendy Schmidt are committed to funding $125 million over the next 5 years, and AI2050 will make awards to support work conducted by researchers from across the globe and at various stages in their careers. These awards will primarily aim to enable and encourage these AI2050 Fellows to undertake bold and ambitious work, often multi-disciplinary, that is typically hard to fund but critical to get right for society to benefit from AI.
I was particularly interested to see the following items listed in their Hard Problems Working List:
What follows is a working list of hard problems we must solve or get right for AI to benefit society in response to the following motivating question:
"It’s 2050, AI has turned out to be hugely beneficial to society and generally acknowledged as such. What happened? What are the most important and beneficial opportunities we realized, the hard problems we solved and the most difficult issues we got right to ensure this outcome, and that we should be working on now?"
...
2. Solved AI’s continually evolving safety and security, robustness, performance, output challenges and other shortcomings that may cause harm or erode public trust of AI systems, especially in safety-critical applications and uses where societal stakes and risk are high. Examples include bias and fairness, toxicity of outputs, misapplications, goal misspecification, intelligibility, and explainability.
3. Solved challenges of safety and control, human alignment and compatibility with increasingly powerful and capable AI and eventually AGI. Examples include race conditions and catastrophic risks, provably beneficial systems, human-machine cooperation, challenges of normativity.
...
5. Solved the economic challenges and opportunities resulting from AI and its related technologies. Examples include new modes of abundance, scarcity and resource use, economic inclusion, future of work, network effects and competition, and with a particular eye towards countries, organizations, communities, and people who are not leading the development of AI.
...
8. Solved AI-related risks, use and misuse, competition, cooperation, and coordination between countries, companies and other key actors, given the economic, geopolitical and national security stakes. Examples include cyber-security of AI systems, governance of autonomous weapons, avoiding AI development/deployment race conditions at the expense of safety, mechanisms for safety and control, protocols and verifiable AI treaties, and stably governing the emergence of AGI.
3 comments
comment by gwern · 2022-02-16T23:29:59.660Z · LW(p) · GW(p)
That is certainly a broad remit. But very safe and conservative grants so far:
To coincide with the launch of AI2050, the initiative has also announced an inaugural cohort of AI2050 Fellows, who collectively showcase the range of research that will be critical toward answering our motivating question. This inaugural cohort includes Erik Brynjolfsson, Professor at Stanford and Director of the Stanford Digital Economy Lab; Percy Liang, Associate Professor of Computer Science and Director of the Center for Research on Foundation Models at Stanford University; Daniela Rus, Professor of Electrical Engineering and Computer Science and Director of the Computer Science and AI Laboratory at MIT; Stuart Russell, Professor of Computer Science and Director of the Center for Human-Compatible Artificial Intelligence at UC Berkeley; and John Tasioulas, Professor of Ethics and Legal Philosophy and Director of the Institute for Ethics in AI at the University of Oxford.
Some of the problems these fellows are working on include: Percy Liang is studying and improving massive “foundation models” for AI. Daniela Rus is developing and studying brain-inspired algorithms called liquid neural networks. Stuart Russell is studying probabilistic programming with a goal of improving AI’s interpretability, provable safety, and performance.
comment by Aryeh Englander (alenglander) · 2022-02-16T23:56:50.609Z · LW(p) · GW(p)
Also note that Percy Liang's Stanford Center for Research on Foundation Models seems to have a strong focus on potential risks as well as potential benefits. At least, that's how it seemed to me based on their inaugural paper and many of the talks at the associated workshop last year.
comment by Chris_Leong · 2022-02-18T02:45:29.415Z · LW(p) · GW(p)
It appears that this initiative will advance capabilities as well. I'm really glad to see that at least some of the funds look likely to go to safety researchers, but it's unclear whether the net result will be positive or negative.