"DL training == human learning" is a bad analogy

post by kman · 2025-02-02T20:59:21.259Z


A more correct but less concise statement of the analogy might be "DL training : DL training code :: human learning : human genome", read as "DL training is to DL training code what human learning is to the human genome". This is sometimes contrasted with an alternative analogy, "DL training : DL-based AGI :: evolution : human mind".

Why the analogy is bad

Human learning mostly doesn't look much like DL training:

The higher-level problem with this analogy is that it misleadingly assigns credit to processes for the work of building minds. To me, it's clear from the above points that human learning does very little of the work of building human minds; evolution did most of that work. On the other hand, writing DL training code (at least in the current paradigm) does very little of the work of building the mind of a hypothetical DL-based AGI: the DL training process itself does most of that work.
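
To illustrate that last point, here is a minimal sketch of what "DL training code" tends to look like in the current paradigm (PyTorch is assumed as the framework; the toy model, fake data, and hyperparameters are arbitrary illustrative choices, not taken from any real training run). The recipe is short and generic; whatever structure ends up in the trained model is put there by the optimization process rather than spelled out in the code.

```python
# Minimal sketch of generic "DL training code" (assumed PyTorch; toy model and
# fake data for illustration only). Nothing below is specific to the behavior
# the trained model ends up with.
import torch
import torch.nn as nn

# Toy stand-ins for a real architecture and dataset.
model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(inputs: torch.Tensor, targets: torch.Tensor) -> float:
    """One generic gradient step; the same recipe regardless of what is learned."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random data just to make the sketch runnable end to end.
for _ in range(100):
    x = torch.randn(64, 32)
    y = torch.randint(0, 10, (64,))
    training_step(x, y)

# The "work" ends up in the learned parameters, not in the code above.
n_params = sum(p.numel() for p in model.parameters())
print(f"Learned parameters: {n_params}")  # a few thousand even for this toy model
```

The same few dozen lines, with the architecture and dataset swapped out, are what stand behind frontier-scale models; the code is a fixed recipe, while the parameters it produces are where the resulting "mind" lives.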

What about the "DL training == evolution" analogy?

DL training also differs from biological evolution in some ways (e.g. it has much less of an information bottleneck). The reason this analogy works for the specific purpose of establishing an existence proof of inner misalignment with an outer optimizer is that it correctly identifies the optimization process that does the bulk of the actual mind-building work in each case.
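
For a rough sense of the bottleneck difference, here is a back-of-the-envelope sketch (the genome figures are standard approximations; the model size is an assumed illustrative value, not a claim about any particular system): evolution can only pass information between generations through the genome, whereas gradient descent writes directly into the entire parameter vector at every step.

```python
# Back-of-the-envelope comparison of the information channels involved.
# Genome figures are standard approximations; the model size (70B parameters
# at 16 bits each) is an assumed illustrative value.

genome_base_pairs = 3.1e9                  # approximate human genome length
genome_bytes = genome_base_pairs * 2 / 8   # ~2 bits per base pair
print(f"Genome (evolution's per-generation channel): ~{genome_bytes / 1e6:.0f} MB")

params = 70e9                              # assumed model size
param_bytes = params * 16 / 8              # 16-bit weights
print(f"Parameter vector (written directly by SGD):  ~{param_bytes / 1e9:.0f} GB")
```

On these (assumed) numbers the genome is roughly two orders of magnitude smaller than the parameter vector, and evolution can only touch it once per generation, while SGD rewrites every parameter every step.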
