If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed?

post by Adam Scholl (adam_scholl) · 2019-08-26T07:08:19.634Z · score: 24 (9 votes) · LW · GW · 3 comments

This is a question post.


Set aside the specifics of Jeff Hawkins' proposed model [LW · GW] of this algorithm—i.e., that grid cells also exist in neocortex, and that displacement cells both exist and exist in neocortex, and in general that the machinery we use for navigating concepts is a repurposed version of the machinery we use for navigating physical spaces. If the basic underlying claim is true—that the neocortex is a homogeneous general-purpose computing substrate that basically just runs one learning algorithm throughout—how should we expect this to affect the development of transformative AI?

I wasn't able to find much discussion of this question, aside from old Hanson/Yudkowsky foom debates and this AI Impacts post arguing that the existence of "one algorithm" wouldn't lead to discontinuous AI progress.

My own intuition would be to update strongly toward shorter timelines and faster takeoff, but I think I may well be missing things.


Answers

answer by steve2152 · 2019-08-29T14:26:37.061Z · score: 6 (3 votes) · LW · GW

My own updates after I wrote that were:

  • Increased likelihood of self-supervised learning algorithms as either a big part or even the entirety of the technical path to AGI—insofar as self-supervised learning is the lion's share of how the neocortex learning algorithm supposedly works. That's why I've been writing posts like Self-Supervised Learning and AGI safety [LW · GW].
  • Shorter timelines and faster takeoff, insofar as we think the algorithm is not overwhelmingly complicated.
  • Increased likelihood of "one algorithm to rule them all" over Comprehensive AI Services [LW · GW]. This might be on the meta-level of one learning algorithm to rule them all, and we feed it biology books to get a superintelligent biologist, and separately we feed it psychology books and nonfiction TV to get a superintelligent psychological charismatic manipulator, etc. Or it might be on the base level of one trained model to rule them all, and we train it with all 50 million books and 100,000 years of YouTube and anything else we can find. The latter can ultimately be more capable (you understand biology papers better if you also understand statistics, etc. etc.), but on the other hand the former is more likely if there are scaling limits where memory access grinds to a halt after too many gigabytes get loaded into the world-model, or things like that. Either way, it would make it likelier for AGI (or at least the final missing ingredient of AGI) to be developed in one place, i.e. the search-engine model rather than the open-source software model.
answer by Gurkenglas · 2019-08-27T20:52:46.271Z · score: 1 (1 votes) · LW · GW

I see no reason that the most capable learner should be simple. If humans turn out to have some complexity limiter on their learning algorithm, such as complex machinery always being universal within a sexually reproducing species (otherwise it'd be too fragile), I expect the first cortical entity with self-modification ability to foom all the way up.

comment by Adam Scholl (adam_scholl) · 2019-08-29T07:16:05.375Z · score: 2 (2 votes) · LW · GW

Confused what you mean—is the argument in your second sentence that a low-complexity learner will foom more easily?

comment by Gurkenglas · 2019-08-29T11:14:44.135Z · score: 3 (3 votes) · LW · GW

I expect that a low-complexity learner is easy to improve by tacking on specialized modules. If evolution hasn't done this, it will be easy to outperform.

answer by avturchin · 2019-08-26T09:43:42.348Z · score: -2 (5 votes) · LW · GW

It will appear at a random moment, when someone guesses it. However, this "randomness" is not evenly distributed. The probability of guessing the correct algorithm rises with time (as more people are trying), and also it is higher in a DeepMind-like company than in a random basement, since DeepMind (or a similar company) has already hired the best minds. A larger company also has more capacity to test ideas, as it has greater computational capacity and other resources.

comment by Pattern · 2019-08-26T20:48:16.530Z · score: 2 (2 votes) · LW · GW
and also it is higher in a DeepMind-like company than in a random basement

Than in a given random basement. Not necessarily than in any of them.* (Also, the probability isn't just of coming up with an idea, but of implementing it.)

*My point is, there are a lot of basements and garages. (Though most of them probably aren't concerned with AI, a lot of the time.)

3 comments

Comments sorted by top scores.

comment by ESRogs · 2019-08-26T23:25:12.251Z · score: 4 (2 votes) · LW · GW
and that displacement cells both exist and exist in neocortex

Both exist and exist?

comment by Adam Scholl (adam_scholl) · 2019-08-26T23:45:42.700Z · score: 8 (3 votes) · LW · GW

Grid cells are known to exist elsewhere in the brain—for example, in the entorhinal cortex. There are preliminary hints that grid cells may exist in neocortex too, but this hasn't yet been firmly established. Displacement cells, on the other hand, have never been observed anywhere—they're just hypothesized cells Hawkins predicts must exist, assuming his theory is true. So I took him to be making a few distinct claims: 1) grid cells also exist in neocortex, 2) displacement cells exist, and 3) displacement cells are located in neocortex.

comment by Adam Scholl (adam_scholl) · 2019-08-29T07:09:38.136Z · score: 3 (2 votes) · LW · GW

The specifics of the proposal, at least, seem relatively easy to falsify. For example, he not only predicts the existence of cortical grid and displacement cells, but also their specific location—that they'll be found in layer 6 and layer 5 of the neocortex, respectively. So we may find out whether he's right fairly soon.