What are the fundamental differences between teaching AIs and humans?

post by StanislavKrym · 2025-04-06T18:17:59.284Z · LW · GW · No comments

This is a question post.


As of now, I don't understand what fundamental differences exist between teaching AIs and teaching humans, except for the fact that an AI, unlike a human, can be presented with an arbitrary number of questions. For instance, the phenomenon of delayed generalisation (grokking), in which the weights keep changing massively after the network has already learned to output correct results on the training data, seems to have a human analogue: adopting a wrong conjecture and only later discovering that it is wrong. A similar idea might explain double descent: when the number of parameters and the amount of data roughly coincide, wrong conjectures are more likely to be adopted and not forgotten.

Another quasi-explainable phenomenon is data poisoning, in which mild modifications of the training data lead the network to look for secret marks instead of the actual features. Is it possible to test the wrong-conjecture-adoption hypothesis by, say, training a network to separate cats from dogs on a dataset of, say, 1M cat photos and 1M dog photos, with or without poisonous marks that are partially, but not fully, correlated with the cat/dog labels? Or maybe one could measure the maximal correlation between the poisonous marks and the real property of the photo containing a cat or a dog at which the network still manages to overcome the poisoning?
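The proposed experiment can be sketched in miniature with synthetic data standing in for the photos. Everything below is a hypothetical toy version, not the experiment from the post: a single noisy "real" feature replaces the image, the poison mark agrees with the label with a tunable probability, and the classifier is plain logistic regression. Sweeping the mark correlation shows how strongly the learner comes to rely on the mark:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, mark_corr):
    """Synthetic stand-in for cat/dog photos: one noisy 'real' feature
    plus a spurious 'poison mark' that agrees with the label with
    probability mark_corr (hypothetical setup, not from the post)."""
    y = rng.integers(0, 2, n)                      # 0 = cat, 1 = dog
    real = y + rng.normal(0.0, 1.0, n)             # genuine but noisy signal
    mark = np.where(rng.random(n) < mark_corr, y, 1 - y).astype(float)
    X = np.column_stack([real, mark, np.ones(n)])  # bias column
    return X, y

def train_logreg(X, y, lr=0.5, steps=3000):
    """Plain batch gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def acc(w, X, y):
    return float(np.mean((X @ w > 0) == y))

# At test time the mark is uncorrelated with the label (mark_corr = 0.5),
# so any accuracy drop measures how much the model leaned on the poison.
Xte, yte = make_data(20000, mark_corr=0.5)
for c in (0.5, 0.8, 0.95):
    Xtr, ytr = make_data(20000, mark_corr=c)
    w = train_logreg(Xtr, ytr)
    print(c, acc(w, Xte, yte))
```

The correlation value at which clean-test accuracy starts to drop is a crude answer to the "maximal correlation" question: below it the real feature still wins, above it the spurious mark dominates the learned weights.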
