Teaser: Hard-coding Transformer Models

post by MadHatter · 2021-12-12T22:04:53.092Z · LW · GW · 19 comments

Transformer models are incredibly powerful for natural language tasks (and they are starting to find uses in many other fields of machine learning). Unfortunately, it is nigh-impossible to interpret what goes on inside them. OR IS IT???

In this post, I am trying to gauge potential community interest in a strand of research that I have been doing in my spare time off and on for the past year and a half (roughly).

I have found that I can, with a fair amount of effort, hard-code the weights of a transformer model to perform some very crude versions of linguistic tasks. So far I have achieved English-to-French translation (on a toy corpus of about 150 sentences), text classification (deciding whether a sentence is grammatical, on a toy corpus of a couple hundred sentences), and sentiment analysis (again on a limited corpus). These results are obviously not impressive compared to the state of the machine learning field, but I am pretty sure they can all be drastically scaled up with the investment of some time and energy. Unfortunately, I have a fairly demanding day job, and haven't found that time and energy yet.

All of this is done by inspection (no gradient descent!). The process is a lot like programming, although, at least for me right now, it is harder. I am fairly certain that better tools and better notation can be developed to make it easier. It is also almost certainly possible to combine hard-coding with gradient descent to scale these methods up in a less labor-intensive way.
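
To give a concrete flavor of what "hard-coding by inspection" can look like, here is a minimal sketch: a single attention head whose weights are set entirely by hand (no training) to do crude bag-of-words sentiment analysis. The vocabulary, the two-dimensional embedding, and the weight matrices below are all invented for this illustration; they are not the actual construction from my repo.

```python
import numpy as np

# Hand-set token embeddings: [is_content_word, sentiment].
# Everything here (vocabulary, dimensions, matrices) is a toy
# illustration, not the construction from the repo.
VOCAB = {
    "the":   np.array([0.0,  0.0]),
    "movie": np.array([0.0,  0.0]),
    "was":   np.array([0.0,  0.0]),
    "great": np.array([1.0,  1.0]),
    "awful": np.array([1.0, -1.0]),
}

SHARP = 5.0  # scales attention logits so content words dominate
W_Q = np.array([[SHARP, 0.0], [0.0, 0.0]])  # queries: look for content words
W_K = np.array([[1.0,   0.0], [0.0, 0.0]])  # keys: advertise content-word-ness
W_V = np.array([[0.0,   0.0], [0.0, 1.0]])  # values: pass sentiment through

def classify(sentence: str) -> str:
    X = np.stack([VOCAB[w] for w in sentence.split()])
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    logits = Q @ K.T / np.sqrt(2)                        # scaled dot-product
    attn = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    pooled = (attn @ V).mean(axis=0)                     # mean-pool positions
    return "positive" if pooled[1] > 0 else "negative"

print(classify("the movie was great"))  # -> positive
print(classify("the movie was awful"))  # -> negative
```

In this toy head, the query/key weights make content words attend to one another while the value weights simply pass each token's hand-assigned sentiment through; the "readout" is just the sign of the pooled sentiment coordinate. The real tasks above require many more heads and layers, but the workflow, choosing each weight to play a legible role, is the same.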

I think that these ideas could prove useful in alignment research: if we understand how a language model works in excruciating detail, it seems drastically more likely that we will be able to reason about and predict various misunderstandings rooted in the ambiguity of language. Given that language is (arguably) a fully general means of interacting with an artificial intelligence, it seems plausible to me that this work is on the critical path to alignment.

19 comments

Comments sorted by top scores.

comment by evhub · 2021-12-13T08:05:17.872Z · LW(p) · GW(p)

Unfortunately, I have a fairly demanding day job, and haven't found the time and energy yet.

Have you considered applying for a grant from the Long-Term Future Fund [EA · GW] to buy out your day job so you can spend all your time working on this? As a fund manager for the LTFF, this is definitely the sort of thing we're often happy to fund, and I think that the research you're describing sounds pretty exciting.

Replies from: habryka4, MadHatter
comment by habryka (habryka4) · 2021-12-13T08:25:37.378Z · LW(p) · GW(p)

Yeah, I also thought it was pretty interesting. I only thought about it for a few minutes, but it seems promising enough to be worth a shot, IMO.

comment by MadHatter · 2021-12-13T12:41:49.226Z · LW(p) · GW(p)

I have definitely not thought about that before. Feedback from people I have shown this work to has ranged from (literally) "you are a madman" to "that looks cool" (and then never engaging with it).

Replies from: Vivek
comment by Vivek Hebbar (Vivek) · 2022-02-24T01:36:31.115Z · LW(p) · GW(p)

Any update on this (applying for funding)?

comment by mtaran · 2021-12-12T23:46:36.500Z · LW(p) · GW(p)

Sounds intriguing! You have a GitHub link? :)

Replies from: MadHatter
comment by MadHatter · 2021-12-12T23:59:49.646Z · LW(p) · GW(p)

It's very, very rough, but: https://github.com/epurdy/hand

Replies from: mtaran
comment by mtaran · 2021-12-13T00:48:17.860Z · LW(p) · GW(p)

I'll make sure to run it when I get to a laptop. But if you ever get a chance to set the distill.pub article up to run on Heroku or something, that'll make this an order of magnitude more accessible.

Replies from: igor-ostrovsky
comment by Igor Ostrovsky (igor-ostrovsky) · 2021-12-13T20:59:01.647Z · LW(p) · GW(p)

I (not the OP) put it up here for now: https://igor0.github.io/hand/distill/

I'll take it down if MadHatter asks me or once there is an official site.

Replies from: MadHatter
comment by MadHatter · 2021-12-13T21:24:44.159Z · LW(p) · GW(p)

Thanks for throwing it up there!!!

comment by gwern · 2021-12-13T02:06:33.975Z · LW(p) · GW(p)

Any relation to RASP?

Replies from: gwern, Kenoubi, rudi-c, MadHatter
comment by Kenoubi · 2021-12-13T17:08:08.598Z · LW(p) · GW(p)

Thank you for sharing this. I know it's probably not why you posted it, but reading this paper was extremely helpful to me in understanding what Transformers are actually doing in the first place.

comment by Rudi C (rudi-c) · 2021-12-13T19:13:10.668Z · LW(p) · GW(p)

(Unrelated.) Have you considered putting an RSS feed of your Twitter account in its bio? That way people can follow you without you needing to approve them, and since it's read-only, your burden won't increase.

(Not to mention that RSS is a much better medium than Twitter in the first place.)

Replies from: gwern
comment by gwern · 2021-12-13T22:56:41.596Z · LW(p) · GW(p)

I don't think Twitter allows such RSS feeds.

comment by MadHatter · 2021-12-13T02:15:42.726Z · LW(p) · GW(p)

It's a pretty similar style of work, but I haven't communicated at all with those authors and I started my work before they published.

comment by Jsevillamol · 2021-12-13T00:04:49.107Z · LW(p) · GW(p)

I think this is very impressive, and that we could learn a lot from this kind of effort.

Can you tell us more about your "training" process and the capabilities you can achieve, with examples?

comment by Rohin Shah (rohinmshah) · 2021-12-16T18:37:34.718Z · LW(p) · GW(p)

Very cool!

A note of caution: when I hand-coded the weights of a neural network (in my case, to solve a gridworld RL problem), I was able to encode the optimal policy, but the algorithm later learned by gradient descent was very different. This was partly because I only required myself to produce the right action, so I often had the (equivalent of) Q-values for different actions be very close to each other. The trained network ended up with Q-values that were further apart, which the loss function incentivized even though it made no difference to the optimal policy.
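
A minimal numeric illustration of that gap (hypothetical numbers, with a softmax/cross-entropy loss standing in for whatever objective is actually used; this is not the actual gridworld setup):

```python
import numpy as np

# Two Q-value vectors that induce the same greedy policy but look very
# different to a loss. The numbers are invented for illustration.
q_hand    = np.array([0.51, 0.49])  # hand-coded: barely separated
q_trained = np.array([0.90, 0.10])  # what training tends to produce

assert q_hand.argmax() == q_trained.argmax()  # same optimal action

def xent_loss(q: np.ndarray, target: int = 0) -> float:
    """Cross-entropy of a softmax over Q-values against the optimal action."""
    p = np.exp(q) / np.exp(q).sum()
    return float(-np.log(p[target]))

# The barely-separated values incur a much higher loss, so gradient
# descent keeps pushing them apart even after the greedy policy is
# already correct.
print(xent_loss(q_hand))     # ~0.68
print(xent_loss(q_trained))  # ~0.37
```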

So to the extent you're trying to learn what a neural net trained by gradient descent would do, I'd recommend that you spend some time looking at the trained neural net to see whether it is using a similar sort of algorithm as the one you're implementing.

Replies from: MadHatter
comment by MadHatter · 2021-12-19T17:44:34.921Z · LW(p) · GW(p)

Agree with this.

comment by Igor Ostrovsky (igor-ostrovsky) · 2021-12-13T20:54:59.717Z · LW(p) · GW(p)

Building up toy transformer models by hand that work ... that's super interesting, both for interpretability and for education.

I put up the site [here](https://igor0.github.io/hand/distill/) for now. MadHatter, let me know if you want me to take it down.