What would you expect a massive multimodal online federated learner to be capable of?

post by Aryeh Englander (alenglander) · 2022-08-27T17:31:07.153Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    1 Nathan Helm-Burger
    1 deepthoughtlife

One intuitive picture I have for what very rapid ML capability gains might look like is a massive multimodal deep learning model that uses some form of online federated learning to continually learn from many devices simultaneously, and which is deployed to hundreds of millions or billions of users. For example, imagine a multimodal Google or Facebook chatbot with, say, 10 trillion parameters that could interact with billions of users simultaneously and improve its weights from every interaction. My impression is that we basically have the tech for this today, or will in the very near future. Now add in some RL to actively optimize some company-relevant goals (ad revenue, reported user satisfaction, etc.) and, intuitively at least, that seems to me very close to the kind of scary AGI we've been talking about. But that seems like it could easily be 2-5 years in the future rather than 10-50.
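
For concreteness, here is a minimal toy sketch of the kind of FedAvg-style online update loop I have in mind. Everything in it is illustrative (the `Client` class, `fedavg_round`, the fake gradients are my own placeholder names); it is not any company's deployed system, and it omits the client sampling, compression, secure aggregation, and privacy machinery a real federated learner would need. The RL-on-company-goals piece would sit on top as a reward-weighted objective and is not shown.

```python
# Toy sketch of a FedAvg-style online update loop; all names are illustrative.
import numpy as np

class Client:
    """Simulates one device computing a local weight update from its interactions."""
    def __init__(self, rng):
        self.rng = rng

    def local_update(self, weights, lr=0.01):
        # Stand-in for a few steps of local SGD on the user's interaction data.
        fake_grad = self.rng.normal(size=weights.shape)
        return weights - lr * fake_grad, 1.0  # (new weights, local example count)

def fedavg_round(global_weights, clients):
    """One synchronous round: average client deltas, weighted by data size."""
    deltas, counts = [], []
    for c in clients:
        new_w, n = c.local_update(global_weights)
        deltas.append(new_w - global_weights)
        counts.append(n)
    avg_delta = np.average(np.stack(deltas), axis=0, weights=np.array(counts))
    return global_weights + avg_delta

rng = np.random.default_rng(0)
weights = np.zeros(8)                         # toy stand-in for the model
clients = [Client(rng) for _ in range(100)]   # stand-in for millions of devices
for _ in range(10):                           # "online": the rounds never stop
    weights = fedavg_round(weights, clients)
```

The point is just the loop structure: the global weights never stop updating, and every round folds in deltas from however many devices participated.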

Did I misunderstand something? How close to scary-type AGI would you expect this kind of model to become within a few months after deployment?

Answers

answer by Nathan Helm-Burger · 2022-08-28T00:54:27.206Z · LW(p) · GW(p)

Yes, I think the question is more about what we expect such a model to be critically lacking in, which might make the difference in whether it is actively dangerous. Some people have been discussing ideas of things we should check for to determine if a model is sorta safe vs. critically dangerous. For instance, its ability to:

- deceive
- self-improve
- have situational awareness / episodic memory (as discussed in neuroscience)
- have agentic goals
- do long-term strategic planning (vs. being more safely myopic)
- do active search and experimentation to disambiguate between competing hypotheses
- jump to useful novel insights from a number of subtle hints in the available evidence
- self-assess (Did I succeed at the recent task I tried, or fail? Can I proceed to the next step in my plan, or do I need to try again, or perhaps create a whole new plan? Did I fail several times in a row using a particular strategy, implying I should try a different approach?)

I'm sure there's more to add to this list. I don't think current models are at literally zero on all of these. I think coming up with evaluations to measure models vs. humans on these tasks seems hard but important (see the sketch below). I think this list is probably incomplete, but sufficient if a model were super-humanly skilled at all these things simultaneously. What do you think? Am I missing something? Including something unnecessary?
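
As a very rough sketch of what I mean by a checklist-style evaluation (the structure and names below are placeholders I'm making up, not an established benchmark, and each scoring function is an open research problem):

```python
# Hypothetical checklist structure; placeholder scorers, not real evaluations.
from typing import Callable, Dict

def placeholder_eval(transcript: str) -> float:
    """Stand-in scorer; a real eval would run task-specific probes."""
    return 0.0

CAPABILITY_CHECKLIST: Dict[str, Callable[[str], float]] = {
    "deception": placeholder_eval,
    "self_improvement": placeholder_eval,
    "situational_awareness": placeholder_eval,
    "agentic_goals": placeholder_eval,
    "long_term_planning": placeholder_eval,
    "active_experimentation": placeholder_eval,
    "novel_insight": placeholder_eval,
    "self_assessment": placeholder_eval,
}

def run_checklist(transcript: str) -> Dict[str, float]:
    # Score a model transcript against every capability in the checklist.
    return {name: fn(transcript) for name, fn in CAPABILITY_CHECKLIST.items()}
```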

I'm currently thinking about this paper and wondering how we can come up with better evaluations of real-world generality: https://www.nature.com/articles/s41598-021-01997-7

answer by deepthoughtlife · 2022-08-28T00:31:44.711Z · LW(p) · GW(p)

You're assuming it would make sense to have a globally learning model, one that is constantly still training, when that drastically increases the cost of running the model compared to present approaches. Cost is already prohibitive, and reaching that many parameters any time soon would be exorbitant (though that will probably happen eventually). Plus, the sheer amount of data necessary for a model that large is enormous, and you aren't getting much data per interaction. Note that Chinchilla recently showed that lack of data is a much bigger issue right now for models than lack of parameters, so labs probably won't focus on parameter counts for a while.
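
As a rough back-of-the-envelope on the data point, using the ~20 training tokens per parameter heuristic commonly taken from the Chinchilla paper (Hoffmann et al. 2022):

```python
# Rough estimate only; 20 tokens/parameter is a commonly cited Chinchilla
# rule of thumb for compute-optimal training, not an exact requirement.
params = 10e12           # the 10-trillion-parameter model from the question
tokens_per_param = 20    # approximate Chinchilla compute-optimal ratio
print(f"~{params * tokens_per_param:.0e} training tokens")  # ~2e+14, i.e. ~200 trillion
```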

Additionally, there are many fundamental issues we haven't yet solved for DL-based AI. Even if it were a huge advancement over present models, which I don't believe it would be at that size, it would still have massive weaknesses around memory and planning, and would largely lack any agency. That's not scary. It could be used for ill purposes, but not at a human (or above) level.

I'm skeptical of near-term AI because we are not close. (And the results of scaling are sublinear in many ways; I believe that mathematically it's logarithmic, though how that transfers to actual results can be hard to guess in advance.)

comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-08-28T00:59:49.631Z · LW(p) · GW(p)

I agree that current models seem to be missing some critical pieces (thank goodness!). I think perhaps you might be overestimating how hard it will be to add in those missing pieces if the capabilities research community focuses its primary attention on them. My guess is it'd be more like 5-10 years than 20-30 years.

Replies from: deepthoughtlife
comment by deepthoughtlife · 2022-08-29T14:01:01.049Z · LW(p) · GW(p)

I was replying to someone asking why it isn't 2-5 years; I wasn't making an actual timeline. In another post elsewhere on the site, I mention that they could give memory to a system now and it would be able to write a novel.

Without doing so, we obviously can't tell how much planning they would be capable of if we did. But current models don't make choices, and thus can only be scary through whatever people use them for, and their capabilities are quite limited.

I do believe that there is nothing inherently stopping capabilities researchers from switching over to more agentic approaches with memory and the ability to plan, but that would be much harder than the current plan of just throwing money at the problem (increasing compute and data).

Getting to particularly capable and/or worrisome levels will require paradigm shifts (I do have some ideas as to ones that might work), and those are hard to predict in advance, but they tend to take a while. Thus, I am a short-term skeptic of AI capabilities and danger.
