Is training data going to be diluted by AI-generated content?
post by Hannes Thurnherr (hannes-thurnherr) · 2022-09-07T18:13:28.615Z · LW · GW · 7 comments
This is my first post on here, so please be lenient if I fail to follow any norms.
With the explosion of AI-generated images and text from DALL-E 2, Midjourney, and GPT-3, it does not seem unreasonable to assume that a non-negligible part of the content of the internet might become AI-generated. This in itself is not problematic. I actually look forward to an internet where everyone has access to the means to create awesome media for almost nothing.
But, since most large language models and multimodal AIs are trained on a dataset that basically consists of "all of the internet", we could end up with a feedback loop. If a lot of the old model's output is used as part of the training data for a new model, development will slow down and the new models will essentially emulate the old ones. Maybe I'm underestimating the amount of content on the internet or overestimating the use of AI. This would of course also only occur if the development of AI is slow enough for the internet to "fill up" with AI-generated content.
And sure, we could say that we only use human-generated content for the training data. But since we are literally trying to make AI-generated content indistinguishable from human-generated content, we are actively working against ourselves: to the extent that we succeed at generation, we will fail at filtering.
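As a toy illustration of the worry (a sketch of my own, with entirely made-up numbers): even if a detector removes most AI-generated text, whatever slips past it accumulates in the training corpus over successive generations.

```python
def ai_fraction_after_filtering(generations: int,
                                ai_share_of_new_content: float = 0.3,
                                detector_recall: float = 0.9) -> float:
    """Fraction of AI-generated text left in the filtered training corpus."""
    human_docs = 1.0   # start from a purely human corpus (normalized size)
    ai_docs = 0.0
    for _ in range(generations):
        new_total = 1.0  # each generation adds a corpus-sized batch of new content
        new_ai = new_total * ai_share_of_new_content
        new_human = new_total - new_ai
        # the detector catches `detector_recall` of the AI content; the rest slips through
        ai_docs += new_ai * (1.0 - detector_recall)
        human_docs += new_human
    return ai_docs / (ai_docs + human_docs)

for g in (1, 5, 20):
    print(f"after {g} generations: {ai_fraction_after_filtering(g):.1%} AI content in the corpus")
```

Under these assumptions the residual AI share grows and then plateaus; how bad it gets depends entirely on how much AI content is produced and how good the detector is.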
Essentially, my question is: "Is training data dilution going to be a thing?"
7 comments
Comments sorted by top scores.
comment by gwern · 2022-09-07T19:28:26.936Z · LW(p) · GW(p)
My earlier comment on this question where I argue no [LW(p) · GW(p)], precisely for the same reasons (ie. if the generated samples are indistinguishable from human samples and 'pollute' the dataset then mission accomplished).
↑ comment by Hannes Thurnherr (hannes-thurnherr) · 2022-09-08T12:49:39.919Z · LW(p) · GW(p)
I tend to agree with you. But I am not sure that our way of distinguishing AI-generated from human-generated content will reach the perfection required for this to "work". Assuming that the mechanism for distinguishing the two remains imperfect, at least a bit of a feedback loop will remain, which will slow down development.
comment by Ben (ben-lang) · 2022-09-08T13:21:05.415Z · LW(p) · GW(p)
I think there will be feedback, but that it won't necessarily be a bad thing.
Today, most fiction produced by humans is based strongly on the writer's experience with other fiction, not just their personal lived experience (although obviously that contributes too). Perhaps in distant pre-history there were works of fiction composed by people who had experienced almost no fiction themselves, but that was long ago.
Similarly, all the AI images now are based on non-AI images. But when the loop closes, that doesn't seem like a bad thing. An author reading some books is probably good for their writing.
comment by trevor (TrevorWiesinger) · 2022-09-08T00:28:33.568Z · LW(p) · GW(p)
I wouldn't be surprised if training data becomes hard to come by for any reason, including dilution.
comment by ChristianKl · 2022-09-08T13:08:25.411Z · LW(p) · GW(p)
Most of the images generated via those models are not shared online. People who do share images online often generate a lot of images and then pick the best ones.
If someone wanted better training data, that process could even be accelerated. Let the user start with a prompt, and have the AI generate 9 images. The user clicks on one image, and the AI then generates 8 variations of it. The user again presses on the image that matches what he had in mind the most. If you allow the user to go through that process a few more times, you have good training data on what image should be produced from the prompt.
You can also allow the user to specify in words what he would like to be different in the image.
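A rough sketch of how logging that interaction could look, with generate_images, generate_variations, and ask_user_to_pick standing in for a hypothetical model and UI (this is an illustration of the workflow, not any particular product's API):

```python
from dataclasses import dataclass, field

@dataclass
class PreferenceRecord:
    prompt: str
    rounds: list = field(default_factory=list)  # each entry: (candidate_images, index_the_user_chose)

def collect_preferences(prompt, generate_images, generate_variations, ask_user_to_pick,
                        n_candidates=9, n_rounds=3):
    """Run a few pick-the-best rounds and log the user's choices as training data."""
    record = PreferenceRecord(prompt=prompt)
    candidates = generate_images(prompt, n=n_candidates)   # first batch of nine
    for _ in range(n_rounds):
        chosen = ask_user_to_pick(candidates)              # index of the image closest to what the user had in mind
        record.rounds.append((candidates, chosen))
        # next round: keep the chosen image and generate eight new variations of it
        candidates = [candidates[chosen]] + generate_variations(candidates[chosen],
                                                                n=n_candidates - 1)
    return record
```

Each record then pairs a prompt with a sequence of "preferred over the alternatives" choices, which is exactly the kind of signal a model could be fine-tuned on.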
But, since most large language models and multimodal AIs are trained on a dataset that basically consists of "all of the internet", we could end up with a feedback loop.
That claim seems false. DALL-E 2 was trained on 650 million images. On the other hand, there are 136 billion images on Google Images. Every day, twice as many images are posted to Instagram as were used to train DALL-E 2.
I would expect that future improvements in image generation will be made by working to curate better data sets. I would expect that training the models to detect their own errors and improve on them will be helpful. The best software for creating images will likely train its models to predict a lot of user interaction.
↑ comment by Hannes Thurnherr (hannes-thurnherr) · 2022-09-12T13:49:06.046Z · LW(p) · GW(p)
I hadn't thought of training the models on the user's selection among generated images. And thanks for correcting me on my DALL-E 2 training data claim.
What do you mean by "training a model to detect its own errors"? Maybe this is a naive question (I am an ML newcomer), but isn't that impossible by definition? Why would a model make a mistake if it's capable of identifying it as such? Do you mean that through continuous improvement the model could correct the mistakes it made in the past, after some time has passed?
The problem of dilution remains for GPTs in my view. Widespread use seems likely over the coming years and the resulting text is unlikely to be properly labeled as AI-generated. Thus it seems likely that the text produced by today's models will get absorbed into the training data of future GPTs, which will cause them to at least partially attempt to emulate their predecessors. Am I making a mistake somewhere in this thought process?
comment by the gears to ascension (lahwran) · 2022-09-07T23:14:38.397Z · LW(p) · GW(p)
it's going to be approximately the same degree of problem for humans as it is for ai. I'm not sure we can assume that's zero, because current models' lack of causal prediction makes their words less meaningful, and could on average introduce noise that makes it harder to extract causality from mere text. could be fine, if causal meaning-association can be established for words in the minds of both humans and ais.