Exploring how OthelloGPT computes its world model

post by JMaar (jim-maar) · 2025-02-02T21:29:09.433Z · LW · GW · 0 comments

Contents

  Summary
  What’s OthelloGPT
  Small findings / Prerequisites
    Mine-Heads and Yours-Heads
    Attention Patterns are almost constant (across inputs)
  Visualising the Board State Accuracy over Every Layer and Sequence Position
    The Flipped probe
  The Previous Color Circuit
    Example of the Previous Color Circuit in Action
    Quantifying the Previous Color Circuit
    Attention Heads Perform the Previous Color Circuit on Different Regions of the Board
  A Flipping Circuit Hypothesis
    Summary
    Classifying Monosemantic Neurons
    Testing the Flipping Circuit Hypothesis
    Conclusion
    Next Steps
  An Unexpected Finding
  Contact

I completed this project for my bachelor's thesis and am writing it up now, 2-3 months later. I think I found some interesting results that are worth sharing here. This post might be especially interesting for people trying to reverse-engineer OthelloGPT in the future.

Summary

What’s OthelloGPT

Small findings / Prerequisites

The other sections build on top of this one.

Mine-Heads and Yours-Heads

Average attention paid to positions an even number of steps away minus attention paid to positions an odd number of steps away, for each attention head and layer. Mine-Heads are shown in blue and Yours-Heads in red. "Last", "First", and other types of heads are also visible; L4H5, for example, is a "Last" Head.

Most attention heads are Mine- or Yours-Heads.
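The even-minus-odd score behind the figure can be computed directly from a head's attention pattern. A minimal sketch (hypothetical helper, not code from the thesis), assuming a lower-triangular `(seq, seq)` pattern for one head:

```python
import numpy as np

def even_minus_odd_attention(pattern: np.ndarray) -> float:
    """Average attention to source positions an even number of steps
    before the destination, minus attention to odd-offset positions.
    Positive scores suggest a Mine-Head, negative scores a Yours-Head,
    since players alternate moves in Othello."""
    seq = pattern.shape[0]
    even_total, odd_total = 0.0, 0.0
    even_count, odd_count = 0, 0
    for dst in range(seq):
        for src in range(dst + 1):  # causal mask: src <= dst
            if (dst - src) % 2 == 0:
                even_total += pattern[dst, src]
                even_count += 1
            else:
                odd_total += pattern[dst, src]
                odd_count += 1
    return even_total / even_count - odd_total / max(odd_count, 1)
```

A head that puts all of its attention on even offsets scores positively, one that attends only to odd offsets scores negatively.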

Attention Patterns are almost constant (across inputs)

Attention Pattern for Layer 3, Head 0, Position 2
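One simple way to quantify "almost constant across inputs" is to collect a head's attention pattern over a batch of games and measure how much each attention weight varies. A sketch under that assumption (not the exact metric used in the thesis):

```python
import numpy as np

def pattern_variability(patterns: np.ndarray) -> float:
    """Mean standard deviation of attention weights across inputs.
    `patterns` has shape (batch, seq, seq): one attention pattern per
    input game for a fixed head. A value near zero means the head's
    pattern barely depends on the input."""
    return float(patterns.std(axis=0).mean())
```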

Visualising the Board State Accuracy over Every Layer and Sequence Position

Accuracy of the Linear Probe Across Layers and Sequence Positions (ignoring empty tiles)
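Probe accuracy at a given (layer, position) is just the fraction of tiles where the linear probe's argmax class matches the ground-truth board. A hypothetical sketch, assuming residuals of shape `(batch, d_model)` and a probe mapping to `{empty, mine, yours}` logits per tile (the caller can mask out empty tiles, as in the figure above):

```python
import numpy as np

def probe_accuracy(resid: np.ndarray, probe: np.ndarray,
                   labels: np.ndarray) -> float:
    """Accuracy of a linear board-state probe at one (layer, position).
    resid:  (batch, d_model) residual stream activations
    probe:  (d_model, n_tiles, 3) residual -> class logits per tile
    labels: (batch, n_tiles) integer board states."""
    logits = np.einsum("bd,dtc->btc", resid, probe)
    preds = logits.argmax(-1)
    return float((preds == labels).mean())
```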

The Flipped probe

The Previous Color Circuit

Overview of the actions performed by the OV circuits of the attention heads involved in the previous color circuit.
Cosine similarities of different features after the OV circuit of Mine-/Yours-Heads
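The cosine-similarity comparison can be sketched as follows: push a probe direction through a head's OV circuit and compare the result against other probe directions. The helper names and shapes below are assumptions for illustration, not code from the thesis:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature directions."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ov_feature_map(w_v: np.ndarray, w_o: np.ndarray,
                   feature: np.ndarray) -> np.ndarray:
    """Pass a probe direction through a head's OV circuit.
    w_v: (d_model, d_head), w_o: (d_head, d_model), feature: (d_model,).
    Comparing the output against other probe directions shows which
    feature the head writes when it attends to a position carrying
    `feature` -- e.g. whether "mine" at an earlier position is mapped
    toward "yours" at the current one."""
    return feature @ w_v @ w_o
```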

Example of the Previous Color Circuit in Action

Example showcasing the Previous Color Circuit. Columns represent Layer and next Transformer Module (Attn/MLP). Rows represent sequence Position. Tiles use a blue/red color scale, with blue representing black and red representing white. Tiles flipped in the board representation are marked with a white/black rectangle.
Direct logit attribution to "D3 is mine" at layer 1, position 19 for each attention head and sequence position.
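Direct attribution of a head to a probe direction, split by source position, decomposes the head's output into per-source terms and dots each with the probe direction. A hypothetical sketch (shapes and names are my assumptions):

```python
import numpy as np

def head_dla_to_probe(values: np.ndarray, pattern: np.ndarray,
                      w_o: np.ndarray, probe_dir: np.ndarray,
                      dst: int) -> np.ndarray:
    """Attribution of one attention head to a probe direction
    (e.g. "D3 is mine"), split by source position.
    values:    (seq, d_head) value vectors at each source position
    pattern:   (seq, seq) attention pattern
    w_o:       (d_head, d_model) output projection
    probe_dir: (d_model,) probe direction
    dst:       destination position
    Returns a (seq,) vector of attributions per source position."""
    per_src = pattern[dst][:, None] * values   # (seq, d_head)
    return per_src @ w_o @ probe_dir           # (seq,)
```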

Quantifying the Previous Color Circuit

Average accuracy of the previous color circuit for layer 2 over all sequence positions, split into tiles on the rim and tiles in the middle of the board
Average accuracy of the previous color circuit at sequence position 15 over all layers, split into tiles on the rim and tiles in the middle of the board

Attention Heads Perform the Previous Color Circuit on Different Regions of the Board

Contribution of each Attention Head to the "Tile is Yours" - "Tile is Mine" direction, when the head pays attention to a previous sequence position where the tile was flipped

A Flipping Circuit Hypothesis

Summary

Classifying Monosemantic Neurons

Direct Logit Attributions of Neurons in Layer 1 to "D3 is Flipped"
Neuron weights of L1N1411 projected to different linear probes. The top row displays input weight projections, while the bottom row shows output weight projections

Let 𝑅 denote the set of rules. Each rule 𝑟 ∈ 𝑅 is defined by a target tile 𝑡, a direction 𝐹, and a number 𝑛.

Example of a rule (t="C2 Yours", F=UP-LEFT, n=1)

A rule 𝑟(𝑥) evaluates to true for a residual stream 𝑥 when 𝑛 "mine" tiles need to be flipped in the specified direction before reaching tile 𝑡, which is yours.
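Such a rule can be made concrete on an explicit board representation. The sketch below is my interpretation, not code from the thesis: it assumes an 8x8 board of `{"empty", "mine", "yours"}` strings, and evaluates the rule by walking from a start tile in direction F, requiring n "mine" tiles (the ones that would be flipped) followed by the target tile t, which must be "yours":

```python
from dataclasses import dataclass

# Hypothetical direction encoding -- the post does not spell this out.
DIRECTIONS = {"UP-LEFT": (-1, -1), "UP": (-1, 0), "UP-RIGHT": (-1, 1),
              "LEFT": (0, -1), "RIGHT": (0, 1),
              "DOWN-LEFT": (1, -1), "DOWN": (1, 0), "DOWN-RIGHT": (1, 1)}

@dataclass(frozen=True)
class Rule:
    t: tuple  # target tile (row, col) that must be "yours"
    F: str    # direction to walk in
    n: int    # number of "mine" tiles before reaching t

def rule_active(rule: Rule, board, start) -> bool:
    """Walk from `start` in direction F: the first n tiles must all be
    "mine", and the next tile must be rule.t and hold "yours"."""
    dr, dc = DIRECTIONS[rule.F]
    r, c = start
    for _ in range(rule.n):
        r, c = r + dr, c + dc
        if not (0 <= r < 8 and 0 <= c < 8) or board[r][c] != "mine":
            return False
    r, c = r + dr, c + dc
    return (0 <= r < 8 and 0 <= c < 8
            and (r, c) == rule.t and board[r][c] == "yours")
```

With the example rule (t="C2 Yours", F=UP-LEFT, n=1), the rule fires from a start tile two diagonal steps down-right of C2 when the intermediate tile is "mine" and C2 is "yours".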

Histogram, for each layer, with the number of neurons on the x-axis and the number of rules with that many corresponding neurons on the y-axis. The y-axis is log-scaled; bin size of 3.

Testing the Flipping Circuit Hypothesis

Accuracy of the flipping circuit across layers. Green represents the baseline, red the standard setup, and blue a setup in which the activations of the rule-neurons are additionally approximated by their average activation on positive samples (where the rule is active). Solid lines indicate accuracy for tiles in the board's center, while dashed lines represent accuracy for tiles on the board's rim.
Accuracy Comparison of Flipping Circuit Variants Against the Baseline
Average Number of Neurons in Flipping Circuit per Layer
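The "average activation on positive samples" variant from the figure above is a form of mean ablation. A hypothetical sketch of that intervention (names and shapes are my assumptions):

```python
import numpy as np

def mean_ablate_rule_neurons(acts: np.ndarray, neuron_idx,
                             positive_mask: np.ndarray) -> np.ndarray:
    """Replace the activations of rule-neurons with their average
    activation on positive samples (where the rule is active).
    acts:          (batch, n_neurons) MLP activations
    neuron_idx:    indices of the rule-neurons
    positive_mask: (batch,) bool, True where the rule is active."""
    out = acts.copy()
    means = acts[positive_mask][:, neuron_idx].mean(axis=0)
    out[:, neuron_idx] = means  # broadcast mean over the whole batch
    return out
```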

Conclusion

Next Steps

An Unexpected Finding

Contact

 

  1. ^

    I edited the Flipped direction to be orthogonal to the Yours direction. The effect of the Yours/Mine direction is stronger on the rim of the board, but I don't have a visualization on hand.
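    Making one probe direction orthogonal to another amounts to removing its projection onto the other direction. A minimal sketch of that edit (hypothetical helper names):

    ```python
    import numpy as np

    def orthogonalize(flipped_dir: np.ndarray,
                      yours_dir: np.ndarray) -> np.ndarray:
        """Remove the component of the Flipped probe direction that
        lies along the Yours direction, so the two probes measure
        independent features."""
        yours_unit = yours_dir / np.linalg.norm(yours_dir)
        return flipped_dir - (flipped_dir @ yours_unit) * yours_unit
    ```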

  2. ^

    0.17 is roughly the magnitude of the minimum of the GELU function. So a mean activation difference larger than this suggests that the neuron has a positive activation when the rule is true and a negative activation otherwise.
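    The GELU minimum referenced here can be checked numerically: the exact GELU, x·Φ(x) with Φ the standard normal CDF, dips to about −0.17 near x ≈ −0.75.

    ```python
    import math

    def gelu(x: float) -> float:
        """Exact GELU: x * Phi(x), with Phi the standard normal CDF."""
        return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

    # Grid search for the minimum over the negative axis.
    xs = [i / 1000.0 for i in range(-3000, 1)]
    min_val = min(gelu(x) for x in xs)  # about -0.17
    ```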
