Is progress in ML-assisted theorem-proving beneficial?

post by mako yass (MakoYass) · 2021-09-28T01:54:37.820Z · LW · GW · 3 comments

I want to believe that the outputs of the recently founded Hoskinson Center for Formal Mathematics (led by Jeremy Avigad, a researcher in mathematical logic, automated reasoning, and the philosophy of mathematics) will live up to Mr Hoskinson's aura of technical responsibility.

One of the stated goals of the HCFM is to make mathematics more legible (both between mathematicians and to non-mathematicians), and since mathematics is itself a motor of legibility in software systems and physical engineering, maybe new techniques in automated proving will make problems in AI motivation more legible too. That would be really great.

On the other hand, I don't know how to place bounds on the set of problems that could be solved with superhuman mathematical reasoning. HCFM is likely to produce a broad set of unexpected capabilities, and I can't see a way to exclude the problem of real-world, agentic utility optimization from that set.

One of the limitations of neural models right now is that they have only a very limited kind of memory. They're mostly just reflexes: very wise ones, evolved over what is often the equivalent of tens of thousands of years of condensed human experience, but none of these wise reflexes can so far evolve a set of coherent thoughts over time. Pretty much everything they do is an instant reaction to the current state of their environment.

GPT-3 was startling because words are a type of memory, and our writing often depicts evolving thoughts, so sometimes it verged on evolving thoughts. But it was a limited kind of thinking, because words are imprecise and most of the training data was rambly and inconsequential. That said, I don't know what GPT-3's limits would turn out to be after, for instance, being subjected fully to Anthropic's inspection technique, and the tuning technique that will grow from that.

Well, mathematical claims constitute an extremely precise and robust format for the encoding of memories. In this training data (formal proofs), reference correlates closely with salience: the components are cited, and new and interesting fruits always follow from them. To evolve the ability to predict the conclusions that follow from the assumptions is equivalent to evolving the mind of a mathematician in their prime.
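
To make that concrete, here is a minimal sketch of what such training data looks like: a formal proof written in Lean, the proof assistant Avigad has long worked with. The theorem name here is my own illustrative choice; the point is only that the conclusion is pinned down exactly by the assumptions and the rules of the logic.

```lean
-- A formal claim and its proof. Nothing here is ambiguous or rambly:
-- the statement says exactly what it means, and the proof is checked
-- mechanically against the stated assumptions.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```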

Each mathematical result is a kind of evolution of a thought, older thoughts evolving creatively and usefully into new ones. For me, there is no ambiguity as to whether artificial mathematicians would be thinking machines, and it's difficult for me to discern the limits of what this style of thought, sped up and scaled horizontally, would be able to think about.

3 comments


comment by Quinn (quinn-dougherty) · 2022-07-11T00:53:54.998Z · LW(p) · GW(p)

Unlikely; see here [EA · GW] (Rohin wrote a TL;DR for the Alignment Newsletter, see the comment).

comment by mako yass (MakoYass) · 2021-09-28T07:47:06.038Z · LW(p) · GW(p)

Regarding the Hoskinson Center, a presentation from Avigad: 

comment by mako yass (MakoYass) · 2021-09-28T02:21:04.012Z · LW(p) · GW(p)

If it is definitely beneficial, then I will have identified a technical (not cultural or institutional) frontier that's definitely beneficial! Wow! That would be a first for me. I'd be very happy. Maybe I would want to work on it?