Google announces Pathways: new-generation multitask AI architecture

post by Ozyrus · 2021-10-29T11:55:21.797Z · 1 comment

This is a link post for https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/


Instead, we’d like to train one model that can not only handle many separate tasks, but also draw upon and combine its existing skills to learn new tasks faster and more effectively. That way what a model learns by training on one task – say, learning how aerial images can predict the elevation of a landscape – could help it learn another task – say, predicting how flood waters will flow through that terrain.

Seems like a model capable of generalisation.
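
For intuition, here is a minimal sketch of the usual way a single network is made to share what it learns across tasks: a common trunk feeds several task-specific heads, so features learned for one task (say, elevation from aerial images) are available to another (say, flood-flow prediction). To be clear, this is my own toy illustration, not anything Google has published; the task names and dimensions are made up.

```python
import torch
import torch.nn as nn

class SharedTrunkMultitask(nn.Module):
    """Toy multitask model: one shared trunk, one small head per task.

    Purely illustrative: this is not the Pathways architecture (which Google
    has not published); task names and sizes here are made up.
    """

    def __init__(self, in_dim=512, hidden=256, task_out_dims=None):
        super().__init__()
        task_out_dims = task_out_dims or {"elevation": 1, "flood_flow": 1}
        # Shared trunk: features learned here are reused by every task head.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One lightweight head per task.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, out) for name, out in task_out_dims.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.trunk(x))


model = SharedTrunkMultitask()
aerial_features = torch.randn(8, 512)            # stand-in for image features
elevation = model(aerial_features, "elevation")  # both tasks reuse the same trunk
flood = model(aerial_features, "flood_flow")
```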

Pathways could enable multimodal models that encompass vision, auditory, and language understanding simultaneously. So whether the model is processing the word “leopard,” the sound of someone saying “leopard,” or a video of a leopard running, the same response is activated internally: the concept of a leopard. The result is a model that’s more insightful and less prone to mistakes and biases.

And a multimodal one, too.
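
Again purely as my own illustration (Google hasn't released details, so the encoders and dimensions below are assumptions): the standard way to get that "same internal response" across modalities is to give each modality its own encoder projecting into a shared embedding space, trained so that matching text, audio, and video land close together, roughly the CLIP-style contrastive recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalEmbedder(nn.Module):
    """Toy multimodal model: one encoder per modality, one shared embedding space.

    Illustrative only, not Google's actual design. Real encoders would be
    transformers/CNNs rather than single linear layers.
    """

    def __init__(self, text_dim=300, audio_dim=128, video_dim=1024, embed_dim=256):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "text": nn.Linear(text_dim, embed_dim),
            "audio": nn.Linear(audio_dim, embed_dim),
            "video": nn.Linear(video_dim, embed_dim),
        })

    def embed(self, x, modality):
        # Normalise so similarity in the shared space is just a dot product.
        return F.normalize(self.encoders[modality](x), dim=-1)


model = MultimodalEmbedder()
text_leopard = model.embed(torch.randn(1, 300), "text")
audio_leopard = model.embed(torch.randn(1, 128), "audio")
# After contrastive training, embeddings of "leopard" from different modalities
# should land near each other, giving the "same internal response" behaviour.
similarity = (text_leopard * audio_leopard).sum(dim=-1)
```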

Anyone got more info or a demo of this? They claim a lot but have nothing to show yet; it's not clear to me why they would release such an abstract announcement. More to come, I guess?

That’s why we’re building Pathways. Pathways will enable a single AI system to generalize across thousands or millions of tasks, to understand different types of data, and to do so with remarkable efficiency – advancing us from the era of single-purpose models that merely recognize patterns to one in which more general-purpose intelligent systems reflect a deeper understanding of our world and can adapt to new needs.

Seems like they have an architecture, but they have yet to build on it. And they don't share any details of the architecture; maybe they consider it a memetic hazard?

This all sounds very concerning and checks many of the boxes of potential true AGI for me.

What are your thoughts?

1 comment


comment by Logan Zoellner (logan-zoellner) · 2021-10-29T12:54:29.449Z

Sounds like they're planning to build a multimodal transformer. Which isn't surprising, given that Facebook and OpenAI are working on this as well. Think of this as Google's version of GPT-4.

I'm firmly in the "GPT-N is not AGI" camp, but opinions vary regarding this particular point.