Announcement: AI for Math Fund
post by sarahconstantin · 2024-12-05T18:33:13.556Z · 9 comments
This is a link post for https://renaissancephilanthropy.org/news-and-insights/renaissance-philanthropy-and-xtx-markets-launch-new-9-million-ai-for-math-fund/
Renaissance Philanthropy and XTX Markets today announced the launch of the AI for Math Fund. The fund will commit $9.2 million to support the development of new AI tools, which will serve as long-term building blocks to advance mathematics.
An increasing number of researchers, including some of the world’s leading mathematicians, are embracing AI to push the boundaries of mathematical discovery and learning. The AI for Math Fund will support projects that expand the use of leading AI technology by mathematicians globally.
Alex Gerko, Founder and co-CEO of XTX Markets, said, “The fund will support this critical intersection between AI and math. Working in partnership with Renaissance Philanthropy, we want to give mathematicians the tools they need to advance the field. As AI continues to transform other sciences, we believe that mathematics will be next.”
Renaissance Philanthropy and XTX Markets are inviting proposals from researchers, non-profits, companies, mathematicians, software engineers and computer scientists for innovative projects that are unlikely to occur under business-as-usual conditions.
Proposals should be aligned with one of the following categories:
- Production-grade software tools: AI for auto-formalization, proof generation, synthesis of verifiable code, and more (a minimal illustration follows this list)
- Datasets: Open-source collections of theorems, proofs, and math problems
- Field building: Textbooks, courses, and resources to grow the AI-for-math community
- Breakthrough ideas: High-risk, high-reward approaches to AI-driven math research
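To make the first category concrete, here is a minimal sketch of the kind of artifact that auto-formalization and proof generation aim to produce: an everyday mathematical claim restated as a formal, machine-checkable theorem. This example is purely illustrative (it is not taken from the fund's materials) and assumes Lean 4 with the Mathlib library.

```lean
import Mathlib

-- Natural-language claim: "the sum of two even integers is even."
-- An auto-formalization tool would translate that sentence into the
-- formal statement below; a proof-generation tool would then search
-- for a proof that the Lean kernel can verify.
example {a b : ℤ} (ha : Even a) (hb : Even b) : Even (a + b) := by
  obtain ⟨x, hx⟩ := ha  -- `Even a` unfolds to `∃ x, a = x + x`
  obtain ⟨y, hy⟩ := hb
  exact ⟨x + y, by rw [hx, hy]; ring⟩
```

The “verifiable” part is the point: once a statement is in this form, the proof is checked mechanically by the kernel, so an AI-generated proof can be trusted without being read line by line.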
XTX Markets is the founding donor of the AI for Math Fund.
“We are excited to partner with XTX Markets on this important initiative,” said Tom Kalil, CEO of Renaissance Philanthropy. “The convergence of AI and math has the potential to advance fundamental mathematics, the reasoning capability of AI systems, and the synthesis of verifiable code.”
Following a rigorous assessment of the proposals, individual grants of up to $1 million will be awarded for projects lasting up to 24 months.
Terence Tao, UCLA, Fields Medalist and AI for Math Fund advisor, said, “The next generation of AI models and tools has the potential to enable collaboration among mathematicians that was previously impossible. I am delighted to work with Renaissance Philanthropy and XTX Markets to realize this potential through the AI for Math Fund.”
About Renaissance Philanthropy
Renaissance Philanthropy is a nonprofit organization with a mission to fuel a 21st-century renaissance by increasing the ambition of philanthropists, scientists, and innovators. We do this by advising philanthropists, surfacing breakthrough ideas, and incubating ambitious initiatives.
About XTX Markets
XTX Markets is a leading algorithmic trading firm which uses state-of-the-art machine learning technology to produce price forecasts for over 50,000 financial instruments across equities, fixed income, currencies, commodities and crypto. It uses those forecasts to trade on exchanges and alternative trading venues, and to offer differentiated liquidity directly to clients worldwide. The firm trades over $250bn a day across 35 countries and has over 250 employees based in London, Singapore, New York, Paris, Bristol, Mumbai and Yerevan.
XTX Markets has an unrivalled level of computational resources in the trading industry, with a growing research cluster currently containing over 25,000 GPUs with 650 petabytes of usable storage. Teams across the firm include world-class researchers with backgrounds in pure math, programming, physics, computer science and machine learning. The firm is also constructing a large-scale data centre in Finland to future-proof its significant computational capabilities.
Since 2017, XTX Markets has committed over £250 million to charities and non-profit partners, establishing the firm as a major philanthropic donor in the UK and globally. The firm’s philanthropy focuses on advancing mathematics education and research, having committed over £50 million in grants to UK charities and education institutions, with the aim of supporting more students to progress to degrees, PhDs and highly-skilled careers in maths, especially those from low-income backgrounds. XTX Markets has also committed more than £25 million to support elite mathematics talent worldwide. More broadly, the firm’s giving also supports high-impact education programmes in low- and middle-income countries, humanitarian relief, and local community initiatives in the regions where its offices are located.
9 comments
comment by Chris_Leong · 2024-12-06T13:12:16.509Z
Just going to put it out there: it's not actually clear that we should want to advance AI for maths.
↑ comment by Davidmanheim · 2024-12-20T06:06:43.701Z
It is critical for guaranteed safe AI and many non-prosaic alignment agendas. I agree it has risks, since all AI capabilities and advances pose control risks, but it seems better than most types of general capabilities investments.
Do you have a more specific model of why it might be negative?
↑ comment by Chris_Leong · 2024-12-21T04:08:21.168Z
Well, does this improve automated ML research and kick off an intelligence explosion sooner?
↑ comment by Davidmanheim · 2024-12-21T16:20:01.961Z
Plausibly, yes. But so does programming capability, which is actually a bigger deal. (And it's unclear that a traditionally envisioned intelligence explosion is possible with systems built on LLMs, though I'm certainly not convinced by that argument.)
↑ comment by Amalthea (nikolas-kuhn) · 2024-12-20T11:02:46.452Z
I think the "guaranteed safe AI" framework is just super speculative, enough so that it basically doesn't matter as an argument next to any other salient points.
This leaves us with the baseline, which is that this kind of prize potentially redirects a lot of brainpower from math-adjacent people toward thinking about AI capabilities. Even worse, I expect it will mostly attract the unreflective "full-steam-ahead" type of people.
Mostly, I'm not sure it matters at all, except perhaps in slightly accelerating some inevitable development before e.g. DeepMind takes another shot at it to finish things off.
↑ comment by Davidmanheim · 2024-12-20T13:04:48.853Z
It is speculative only in the sense that any new technology under development is speculative. Closely related approaches are already used for assurance in practice, so provable safety isn't just speculative; there are concrete benefits in the near term. And I would challenge you to name a different, less speculative framework that actually deals with the risks of ASI and isn't pure hopium.
Uncharitably, but I think not entirely inaccurately, these include: "maybe AI can't be that much smarter than humans anyway," "let's get everyone to stop forever," "we'll use AI to figure it out, even though we have no real ideas," "we'll just trust that no one makes it agentic," "the agents will be supervised by other AI, which will magically be easier to align," "maybe multiple AIs will compete in ways that aren't a disaster," "maybe we can just rely on prosaic approaches forever and nothing bad happens," "maybe it will be better than humans at having massive amounts of unchecked power by default." These all certainly seem to rely far more on speculative claims, and offer far less concrete ideas about how to validate or ensure them.
↑ comment by Amalthea (nikolas-kuhn) · 2024-12-20T13:19:03.939Z
I'm not saying that it's not worth pursuing as an agenda, but I'm also not convinced it is promising enough to justify pursuing math-related AI capabilities, compared to e.g. creating safety guarantees into which you can plug in AI capabilities once they arise anyway.
↑ comment by Davidmanheim · 2024-12-20T13:41:46.139Z
But "creating safety guarantees into which you can plug in AI capabilities once they arise anyway" is the point, and it requires at least some non-trivial advances in AI capabilities.
You should probably read the current programme thesis.
↑ comment by Amalthea (nikolas-kuhn) · 2024-12-06T13:18:41.262Z
Agreed, I would love to see more careful engagement with this question.