Posts

Decompiling Tracr Transformers - An interpretability experiment 2024-03-27T09:49:38.081Z
Sentience in Silicon: The Challenges of AI Consciousness 2023-04-25T13:15:56.358Z
Is training data going to be diluted by AI-generated content? 2022-09-07T18:13:28.615Z

Comments

Comment by Hannes Thurnherr (hannes-thurnherr) on Is training data going to be diluted by AI-generated content? · 2022-09-12T13:49:06.046Z

I hadn't thought of training the models by evaluating which image the user selects. And thanks for correcting me on my DALL-E 2 training-data claim.

What do you mean by "training a model to detect its own errors"? Maybe this is a naive question (I am an ML newcomer), but isn't that impossible by definition? Why would a model make a mistake if it is capable of identifying it as such? Or do you mean that, through continuous improvement, the model could correct the mistakes it made in the past, after some time has passed?

The problem of dilution remains for GPTs, in my view. Widespread use seems likely over the coming years, and the resulting text is unlikely to be properly labeled as AI-generated. It therefore seems likely that text produced by today's models will be absorbed into the training data of future GPTs, causing them to at least partially emulate their predecessors. Am I making a mistake somewhere in this reasoning?
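As a rough illustration of what I mean, here is a minimal sketch in which AI-generated text simply accumulates in the scrapeable web over time. All the quantities (starting corpus size, growth rates) are hypothetical assumptions, not measurements:

```python
# Toy model: AI-generated text accumulating in the scrapeable web corpus.
# human_per_year and synthetic_per_year are hypothetical growth rates.

def corpus_contamination(years: int, human_per_year: float = 1.0,
                         synthetic_per_year: float = 0.5) -> list[float]:
    """Fraction of the scrapeable corpus that is AI-generated, year by year,
    assuming none of it is labeled or filtered out."""
    human, synthetic = 1.0, 0.0  # start from a purely human-written web
    shares = []
    for _ in range(years):
        human += human_per_year
        synthetic += synthetic_per_year
        shares.append(synthetic / (human + synthetic))
    return shares

print(corpus_contamination(10))
# The synthetic share climbs toward
# synthetic_per_year / (human_per_year + synthetic_per_year),
# i.e. 1/3 with these illustrative numbers.
```

Under these assumptions the synthetic share never shrinks; it only saturates, so future scrapes keep feeding model output back into training.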

Comment by Hannes Thurnherr (hannes-thurnherr) on Is training data going to be diluted by AI-generated content? · 2022-09-08T12:49:39.919Z

I tend to agree with you, but I am not sure our methods for distinguishing AI-generated from human-generated content will reach the level of accuracy required for this to work. As long as that distinction remains imperfect, at least some of the feedback loop will persist, which will slow down development.
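To sketch that residual feedback loop (the detector accuracy and corpus shares below are assumed numbers, not estimates): even a fairly good classifier with recall below 1.0 lets some synthetic text through the filter.

```python
# Toy model: filtering scraped text with an imperfect AI-text detector.
# recall < 1.0 means some synthetic text slips through; the false-positive
# rate means some human text is wrongly discarded. All numbers are assumptions.

def filtered_synthetic_share(synthetic_share: float, recall: float = 0.9,
                             false_positive_rate: float = 0.05) -> float:
    """Fraction of the *filtered* training corpus that is still AI-generated."""
    human = 1.0 - synthetic_share
    kept_human = human * (1.0 - false_positive_rate)   # human text that survives the filter
    kept_synthetic = synthetic_share * (1.0 - recall)  # synthetic text the detector misses
    return kept_synthetic / (kept_human + kept_synthetic)

# A corpus that is 30% synthetic, filtered at 90% recall, still ends up
# with roughly 4% synthetic text in the training data:
print(filtered_synthetic_share(0.30))  # ~0.043
```

So the contamination shrinks but does not disappear, which is the bit of feedback loop I expect to remain.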