Degree of duplication and coordination in projects that examine computing prices, AI progress, and related topics?
post by riceissa · 2019-04-23T12:27:18.314Z · score: 28 (10 votes)
This is a question post.
I have been noticing that an increasing number of organizations and researchers are looking into historical computing hardware prices, progress in AI systems, the relationship between hardware and AI progress, and related topics.
To list the research efforts/outputs that I am aware of:
- Open Philanthropy Project: "One of my first projects was a study of how hardware advances have fed into artificial intelligence progress in the past decades. As a first step, I gathered historical data on computing hardware prices. This alone turned out to be much more difficult and complicated than anyone expected - which is one of the great lessons of research! Once the hardware data was roughly in place, I could compare it with historical AI progress. This requires estimating the level of “intelligence” of various AI systems, which is a qualitative and somewhat speculative task. My ongoing projects aim to put such estimates on a more empirical and quantitative footing."
- AI Impacts: "Effect of marginal hardware on artificial general intelligence", "Progress in general purpose factoring", "Trends in algorithmic progress"
- Median Group: "Insight-based AI timelines model" and list of insights, "The Brain and Computation", "How rapidly are GPUs improving in price performance?"
- OpenAI: "AI and Compute", "How AI Training Scales"
- Ryan Carey: "Interpreting AI Compute Trends"
- Ben Garfinkel: "Reinterpreting “AI and Compute”"
- Some posts on LessWrong: "Reasons compute may not drive AI capabilities growth" by Kythe and "On AI and Compute" by johncrox.
- Vipul Naik: Computing Data Project, a data portal for historical computing, network, and storage costs. (Note that this project is in its very early stages; also, I am helping out with it.)
- Going back further in time, there is e.g. Eliezer Yudkowsky's "Intelligence Explosion Microeconomics".
I am curious to hear about the degree of overlap/duplication in this kind of work, and also about the extent to which the different groups are coordinating or talking to each other (I also welcome pointers to projects/outputs I missed in my list). In particular, I am worried that groups may be doing a lot of this work "in-house" without coordinating or sharing it, leading to duplicated effort (and correspondingly fewer insights, as different projects don't build on each other). Another potential worry is that if there are many scattered projects and no "clear winner", then it becomes more difficult for outsiders to form an opinion. Of course, decentralization also has benefits: it serves as a kind of replication, ensuring that independent efforts can reach the same conclusions; and if groups differ in vision, the most enlightened/competent ones can work without being slowed down by coordination with less competent ones; and so on.
Acknowledgments: Thanks to Vipul Naik for suggesting that I write this question up, and for paying for the time I spent writing the question.