Jukebox: how to update from AI imitating humans?

post by Michaël Trazzi (mtrazzi) · 2020-04-30T20:50:13.844Z · score: 9 (5 votes)

This is a question post.

Four hours ago, OpenAI released Jukebox, "a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles". Last year, with MuseNet, they had something that could imitate Chopin; now it can also imitate David Bowie singing with invented words, or even rickroll you.

Now, given that this is the progress they made in one year, how much should we update about the date when more than 10% of people will listen to at least one ML-generated song per week? More generally, how much should we update on massive ML training runs that get us to something like GPT-2 or Jukebox, given that they use lots of data and compute to imitate humans?

For instance, on the Alpha{Go,Zero} timeline, they went from "using human data" to "rediscovering the human data" in about a year, but Go is well-defined enough that you can do self-play, etc. This seems trickier for text/music generation. The question should be something like "does it make economic sense to use the output of an AI vs. a human here?".

Like, how far are we from having a Jukebox on Spotify where we can refresh to get new AI-generated music, say whether we liked it or not, and then get a bunch of new AI-generated songs in our Discover Weekly?
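
To make that concrete, here is a minimal, purely hypothetical sketch of such a feedback loop in Python. None of this is a real Jukebox or Spotify API: `generate_song`, `recommend_batch`, and `update_weights` are invented placeholders, and user feedback is reduced to a thumbs-up/down signal that reweights which genres get sampled.

```python
import random

# Hypothetical generate -> rate -> regenerate loop. `generate_song`
# stands in for sampling a track from a generative model conditioned
# on a genre; feedback just reweights that conditioning choice.

GENRES = ["pop", "rock", "jazz", "electronic"]

def generate_song(genre: str) -> str:
    """Placeholder for an expensive sampling run of a music model."""
    return f"{genre}-track-{random.randint(0, 9999)}"

def recommend_batch(weights: dict, n: int = 5) -> list:
    """Sample genres in proportion to the current preference weights."""
    genres = random.choices(list(weights), weights=list(weights.values()), k=n)
    return [generate_song(g) for g in genres]

def update_weights(weights: dict, genre: str, liked: bool) -> None:
    """Naive multiplicative update from a thumbs-up/down signal."""
    weights[genre] *= 1.2 if liked else 0.8

# One round of the loop: serve a batch, collect feedback, adapt.
weights = {g: 1.0 for g in GENRES}
for song in recommend_batch(weights):
    liked = random.random() < 0.5  # stand-in for real user input
    update_weights(weights, song.split("-")[0], liked)
print(weights)
```

The recommendation side of this is routine; the open question from above is the economics, since each `generate_song` call stands in for a compute-heavy sampling run that has to compete with simply licensing a human-made track.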

Answers
