Degree of duplication and coordination in projects that examine computing prices, AI progress, and related topics?

post by riceissa · 2019-04-23T12:27:18.314Z · LW · GW · No comments

This is a question post.


I have been noticing that an increasing number of organizations and researchers are looking into historical computing hardware prices, progress in AI systems, the relationship between hardware and AI progress, and related topics.

To list the research efforts/outputs that I am aware of:

I am curious to hear about the degree of overlap/duplication in this kind of work, and also about the extent to which the different groups are coordinating or talking to each other (I also welcome pointers to projects/outputs that I missed in my list).

In particular, I am potentially worried about a bunch of groups doing a lot of work "in-house" without coordinating or sharing their work, leading to duplicated effort (and correspondingly fewer insights, as different projects don't build on each other). Another potential worry is that if there are a bunch of scattered projects and no "clear winner", it becomes more difficult for outsiders to form an opinion. There are also benefits to decentralization (it serves as a kind of replication, ensuring that independent efforts can reach the same conclusions; if there are differences in vision, the most enlightened/competent groups can work without being slowed down by coordinating with less competent groups; and so on).

Acknowledgments: Thanks to Vipul Naik for suggesting that I write this question up, and for paying for the time I spent writing the question.

Answers

answer by ryan_b · 2019-04-23T17:01:12.269Z · LW(p) · GW(p)

I propose that the motivation for all of these projects is not to find the answer, but rather to build the intuitions of the project members. If you compare the effects on intuition of reading research versus performing research, I strongly expect performing research to have the greater effect.

Because of this, I expect that a significant chunk of the people working in any capacity on the AI risk problem will take a direct shot at similar projects themselves, even if they don't write it up. I would also be surprised to find an org without any such people in it.

No comments
