Comments

Comment by glazgogabgolab on Empowerment is (almost) All We Need · 2022-10-24T06:27:19.710Z · LW · GW
Comment by glazgogabgolab on Deepmind's Gato: Generalist Agent · 2022-05-13T08:04:03.872Z · LW · GW

there was a result (from Pieter Abbeel's lab?) a couple of years ago that showed that pretraining a model on language would lead to improved sample efficiency in some nominally-totally-unrelated RL task

Pretrained Transformers as Universal Computation Engines
From the abstract:

We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning – in particular [...] a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction
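For concreteness, here's a rough sketch of the frozen-pretrained-transformer recipe the abstract is describing: keep the language-pretrained self-attention and feedforward blocks frozen, and train only small new input/output layers on a downstream (non-language) sequence classification task. The backbone choice (GPT-2 via Hugging Face), the decision to leave layer norms trainable, and the toy dimensions are my own illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a "frozen pretrained transformer": a language-pretrained GPT-2
# whose attention/feedforward blocks are frozen, with only small new input/output
# layers (and, as an assumption here, the layer norms) left trainable.
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        hidden = self.backbone.config.n_embd  # 768 for gpt2

        # Freeze the pretrained transformer blocks (self-attention + feedforward).
        for param in self.backbone.parameters():
            param.requires_grad = False

        # Leave layer-norm parameters trainable -- an assumption about what
        # "minimal finetuning" includes; check the paper for the exact settings.
        for name, param in self.backbone.named_parameters():
            if "ln" in name:
                param.requires_grad = True

        # New, trainable input projection and classification head.
        self.input_proj = nn.Linear(input_dim, hidden)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim), e.g. flattened image patches or bit vectors.
        embeddings = self.input_proj(x)
        hidden_states = self.backbone(inputs_embeds=embeddings).last_hidden_state
        return self.classifier(hidden_states[:, -1])  # classify from the final token


# Usage sketch: only input_proj, classifier, and the layer norms receive gradients.
model = FrozenPretrainedTransformer(input_dim=16, num_classes=2)
logits = model(torch.randn(4, 32, 16))
```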

Comment by glazgogabgolab on Lies Told To Children · 2022-04-18T08:43:28.034Z · LW · GW

Given your perspective, you may enjoy Lies Told To Children: Pinocchio, which I found posted here.

Personally I think I'd be fine with the bargain, but having read that alternative continuation, I think I better understand how you feel.

Comment by glazgogabgolab on [Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now? · 2022-01-28T23:35:21.233Z · LW · GW

Oops, strangely enough I just wasn't thinking about that possibility. It's obvious now, but I assumed that SL vs RL would be a minor consideration, despite the many words you've already written on reward.

Comment by glazgogabgolab on [Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now? · 2022-01-28T00:58:38.142Z · LW · GW

Hey Steve, I might be wrong here, but I don't think Jon's question was specifically about what architectures you'd be talking about. I think he was asking more about how to classify something as brain-like AGI for the purposes of your upcoming series.

The way I read your answer makes it sound like the safety considerations you'll be discussing depend more on whether the NTM is trained via SL or RL than on whether it neatly contains all your (soon-to-be-elucidated) brain-like-AGI properties.

Though that might actually have been what you meant, so I probably should have asked for clarification before presumptively answering Jon for you.

Comment by glazgogabgolab on [Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now? · 2022-01-28T00:53:05.675Z · LW · GW

If I'm reading your question right, I think the answer is:

I’m going to make a bunch of claims about the algorithms underlying human intelligence, and then talk about safely using algorithms with those properties. If our future AGI algorithms have those properties, then this series will be useful, and I would be inclined to call such an algorithm "brain-like".

I.e., the distinction depends on whether or not a given architecture has the properties Steve will mention later, which, given Steve's work, are probably the key properties of "a learned population of Compositional Generative Models + a largely hardcoded Steering Subsystem".

Comment by glazgogabgolab on How I'm thinking about GPT-N · 2022-01-18T00:31:49.187Z · LW · GW

Regarding "posts making a bearish case" against GPT-N, there's Steve Byrnes', Can you get AGI from a transformer.

I was just in the middle of writing a draft revisiting some of his arguments, but in the meantime, one claim that might be of particular interest to you is: "...[GPT-N type models] cannot take you more than a couple steps of inferential distance away from the span of concepts frequently used by humans in the training data."