RFC: a tool to create a ranked list of projects in explainable AI

post by eamag (dmitrii-magas) · 2025-04-06T21:18:34.220Z

This is a link post for https://eamag.me/2025/Paper-To-Project

TL;DR

Inspired by a recent post by Neel Nanda on Research Directions, I'm building a tool that extracts concrete projects from ICLR 2025 papers and ranks them with a tournament-like system based on how impactful they are. You can browse the results at https://openreview-copilot.eamag.me/projects if you filter by primary area. There are many ways to improve it, but first I want your feedback on how useful it is and which things are most important to iterate on.
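The post doesn't pin down the ranking mechanism beyond "tournament-like", so here is a minimal sketch of one way such a ranking could work: Elo updates over repeated pairwise "which project is more impactful?" judgments. The `judge` callback (standing in for an LLM or human comparison) and all parameter values are assumptions for illustration, not the tool's actual implementation.

```python
import random
from collections import defaultdict

K = 32  # Elo sensitivity; the standard chess default (assumed value)

def expected(r_a: float, r_b: float) -> float:
    """Expected probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def rank_projects(projects, judge, rounds=1000, seed=0):
    """Tournament-like ranking: repeatedly sample a pair of projects,
    ask the judge which one is more impactful, and update Elo ratings."""
    rng = random.Random(seed)
    ratings = defaultdict(lambda: 1000.0)  # everyone starts at 1000
    for _ in range(rounds):
        a, b = rng.sample(projects, 2)
        a_wins = judge(a, b)  # True if the judge prefers a over b (hypothetical callback)
        e_a = expected(ratings[a], ratings[b])
        ratings[a] += K * ((1.0 if a_wins else 0.0) - e_a)
        ratings[b] += K * ((0.0 if a_wins else 1.0) - (1.0 - e_a))
    return sorted(projects, key=lambda p: ratings[p], reverse=True)
```

Here `judge` could wrap an LLM call that compares two project descriptions; the ordering stabilizes as the number of pairwise rounds grows, and each comparison only needs a local "A vs. B" judgment rather than a global impact score.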

Why

I think the best way to learn is by building something. People in universities build simple apps to learn how to code, for example; wouldn't it be better if they were building something more useful for the world? I'm extracting projects from recent ML papers at different levels of competency, from no-coding to PhD. I then rank the undergraduate-level projects (mostly in the explainable AI area, but also from the top-ranked papers at the conference overall) to find the most useful ones; a hypothetical sketch of what one extracted project record might look like follows below. More details on the motivation and implementation are in the linked post.
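To make "extracting projects at different competency levels" concrete, here is a hypothetical schema for one extracted project. The field names and level labels are illustrative assumptions, not the tool's actual data model.

```python
from dataclasses import dataclass
from enum import Enum

class Competency(Enum):
    """Skill levels mentioned in the post, from no-coding to PhD."""
    NO_CODING = "no-coding"
    UNDERGRADUATE = "undergraduate"
    GRADUATE = "graduate"
    PHD = "phd"

@dataclass
class Project:
    """One project idea extracted from a paper (hypothetical schema)."""
    paper_id: str           # OpenReview ID of the source ICLR 2025 paper
    title: str              # short project title
    description: str        # what to build and why it matters
    competency: Competency  # minimum skill level required
    primary_area: str       # e.g. "interpretability and explainable AI"
```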

We can probably increase the speed of AI alignment research by involving more people in it. To do so, we have to lower the barriers to entry and show that the things newcomers can work on are actually meaningful. The current ranking is subjective and automatic, but it would be possible to add another (weighted) voting system on top to rerank projects based on researchers' intuition.
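A minimal sketch of what that reranking layer could look like, assuming each researcher's vote is scaled by a trust weight and blended with the automatic score. The function name, the [0, 1] score convention, and the blend factor `alpha` are all assumptions for illustration.

```python
def rerank(auto_scores, votes, weights, alpha=0.5):
    """Blend automatic ranking scores with weighted researcher votes.

    auto_scores: {project_id: score in [0, 1]} from the automatic ranking
    votes:       {project_id: {researcher_id: vote in [0, 1]}}
    weights:     {researcher_id: trust weight, e.g. by track record}
    alpha:       how much to trust the automatic score vs. human votes
    """
    combined = {}
    for pid, auto in auto_scores.items():
        ballot = votes.get(pid, {})
        total_w = sum(weights.get(r, 1.0) for r in ballot)
        if total_w > 0:
            human = sum(weights.get(r, 1.0) * v for r, v in ballot.items()) / total_w
        else:
            human = auto  # no votes yet: fall back to the automatic score
        combined[pid] = alpha * auto + (1 - alpha) * human
    return sorted(combined, key=combined.get, reverse=True)
```

One design choice worth noting: falling back to the automatic score for unvoted projects keeps new entries comparable to heavily voted ones instead of penalizing them for having no ballots yet.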

Call to action

Try the ranked list at https://openreview-copilot.eamag.me/projects and tell me: how useful is it, and what should I iterate on first?
