Linkpost for a generalist algorithmic learner: capable of carrying out sorting, shortest paths, string matching, and convex hull finding in one network

post by lovetheusers (CrazyPyth) · 2022-12-09T00:02:09.690Z · LW · GW · 1 comment

This is a link post for https://twitter.com/DeepMind/status/1600852768125726720

Introducing a generalist neural algorithmic learner, capable of carrying out 30 different reasoning tasks, with a *single* graph network.

These include sorting, shortest paths, string matching, and convex hull finding.

https://arxiv.org/abs/2209.11142

The cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a way that generalises out of distribution. While recent years have seen a surge in methodological improvements in this area, they mostly focused on building specialist models. Specialist models are capable of learning to neurally execute either only one algorithm or a collection of algorithms with identical control-flow backbone. Here, instead, we focus on constructing a generalist neural algorithmic learner -- a single graph neural network processor capable of learning to execute a wide range of algorithms, such as sorting, searching, dynamic programming, path-finding and geometry. We leverage the CLRS benchmark to empirically show that, much like recent successes in the domain of perception, generalist algorithmic learners can be built by "incorporating" knowledge. That is, it is possible to effectively learn algorithms in a multi-task manner, so long as we can learn to execute them well in a single-task regime. Motivated by this, we present a series of improvements to the input representation, training regime and processor architecture over CLRS, improving average single-task performance by over 20% from prior art. We then conduct a thorough ablation of multi-task learners leveraging these improvements. Our results demonstrate a generalist learner that effectively incorporates knowledge captured by specialist models.
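
To make the idea in the abstract concrete: the "processor" is a graph neural network that is run for a number of message-passing steps with the same weights shared across all algorithmic tasks, while lightweight task-specific encoders and decoders handle inputs and outputs. Below is a minimal NumPy sketch of a single message-passing step, assuming a one-layer ReLU message function and max aggregation. This is an illustration of the general idea, not the paper's actual Triplet-GMPNN processor; the function name `mpnn_step`, the shapes, and the weight matrices are assumptions made for the example.

```python
import numpy as np

def mpnn_step(h, adj, W_msg, W_upd):
    """One message-passing step of a shared GNN processor (illustrative sketch).

    h:     (n, d) node hidden states
    adj:   (n, n) adjacency matrix; adj[i, j] = 1 means node j sends a message to node i
    W_msg: (2*d, d) weights of the (single-layer) message function
    W_upd: (2*d, d) weights of the node-update function
    """
    n, d = h.shape
    senders = np.repeat(h[None, :, :], n, axis=0)    # (n, n, d): entry [i, j] is h[j]
    receivers = np.repeat(h[:, None, :], n, axis=1)  # (n, n, d): entry [i, j] is h[i]
    pair = np.concatenate([senders, receivers], axis=-1)  # (n, n, 2d)
    msgs = np.maximum(pair @ W_msg, 0.0)             # ReLU messages, one per (receiver, sender)
    # Max aggregation over incoming messages, masked by the adjacency matrix.
    masked = np.where(adj[..., None] > 0, msgs, -np.inf)
    agg = masked.max(axis=1)                         # (n, d)
    agg = np.where(np.isfinite(agg), agg, 0.0)       # nodes with no incoming edges get zeros
    # Update each node from its current state and its aggregated message.
    return np.maximum(np.concatenate([h, agg], axis=-1) @ W_upd, 0.0)

# Toy usage: random weights and a random graph, run a few processor steps.
rng = np.random.default_rng(0)
n, d = 6, 16
h = rng.normal(size=(n, d))                      # would come from a task-specific encoder
adj = (rng.random((n, n)) < 0.4).astype(float)   # problem structure encoded as a graph
W_msg = 0.1 * rng.normal(size=(2 * d, d))
W_upd = 0.1 * rng.normal(size=(2 * d, d))
for _ in range(4):                               # one step per simulated algorithm step
    h = mpnn_step(h, adj, W_msg, W_upd)
```

Running such a step repeatedly (roughly once per step of the target algorithm), with per-task encoders mapping problem inputs into `h` and per-task decoders reading predictions back out, is the rough shape of the multi-task setup the abstract describes.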

1 comment


comment by Ulisse Mini (ulisse-mini) · 2022-12-09T00:32:17.159Z · LW(p) · GW(p)

From skimming the benchmark and the paper, this seems overhyped (like Gato). Roughly, it looks like:

  • May 2022: DeepMind releases a new benchmark for learning algorithms
  • ...Nobody cares (according to Google Scholar citations)
  • Dec 2022: DeepMind releases a thing that beats the baselines on their benchmark

I don't know much about GNNs & only did a surface-level skim, so I'm interested to hear other takes.