Posts

SAEBench: A Comprehensive Benchmark for Sparse Autoencoders 2024-12-11T06:30:37.076Z
Evaluating Sparse Autoencoders with Board Game Models 2024-08-02T19:50:21.525Z
Using an LLM perplexity filter to detect weight exfiltration 2024-07-21T18:18:05.612Z
OthelloGPT learned a bag of heuristics 2024-07-02T09:12:56.377Z
An Intuitive Explanation of Sparse Autoencoders for Mechanistic Interpretability of LLMs 2024-06-25T15:57:16.872Z
A Chess-GPT Linear Emergent World Representation 2024-02-08T04:25:15.222Z

Comments

Comment by Adam Karvonen (karvonenadam) on Sam Marks's Shortform · 2024-12-17T18:41:14.771Z · LW · GW

The forward hook for our best performing approach is here. As Sam mentioned, this hasn’t been deployed to production. We left it as a case study because Benchify is currently prioritizing other parts of their stack unrelated to ML.

For this demonstration, we added a forward hook to a HuggingFace Transformers model for simplicity, rather than incorporating it into a production inference stack.
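For readers unfamiliar with forward hooks, here is a minimal, hypothetical PyTorch sketch of the general pattern (a hook that projects a chosen direction out of a layer's output); it is illustrative only, not the actual hook from the case study:

```python
import torch
import torch.nn as nn

def make_ablation_hook(direction: torch.Tensor):
    """Return a forward hook that projects `direction` out of a layer's output.

    Hypothetical example: the direction and layer below are stand-ins,
    not the steering vector or model from the case study.
    """
    direction = direction / direction.norm()
    def hook(module, inputs, output):
        # Remove the component of the activation along `direction`.
        # Returning a tensor from a forward hook replaces the layer's output.
        coeff = output @ direction               # (batch,) projection coefficients
        return output - coeff.unsqueeze(-1) * direction
    return hook

layer = nn.Linear(4, 4, bias=False)              # toy stand-in for a transformer layer
direction = torch.tensor([1.0, 0.0, 0.0, 0.0])
handle = layer.register_forward_hook(make_ablation_hook(direction))

x = torch.randn(2, 4)
out = layer(x)
print(torch.allclose(out @ direction, torch.zeros(2), atol=1e-5))  # prints: True
handle.remove()
```

With a HuggingFace Transformers model, the same `register_forward_hook` call would be made on one of the model's transformer layers instead of this toy `nn.Linear`.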

Comment by Adam Karvonen (karvonenadam) on Sam Marks's Shortform · 2024-12-17T18:31:07.795Z · LW · GW

Rejection sampling is a strong baseline that we hadn’t considered, and it’s definitely worth trying out—I suspect it will perform well here. Currently, our focus is on identifying additional in-the-wild tasks, particularly from other companies, as many of Benchify’s challenges involve sensitive details about their internal tooling that they prefer to keep private. We’re especially interested in tasks where it’s not possible to automatically measure success or failure via string matching, as this is where techniques like model steering are most likely to be the most practical.

I also agree with Sam that rejection sampling would likely need to operate on entire blocks rather than individual lines. By the time an LLM generates a line containing a regular expression, it’s often already committed to that path—for example, it might have skipped importing required modules or creating the necessary variables to pursue an alternative solution.

Comment by Adam Karvonen (karvonenadam) on Using an LLM perplexity filter to detect weight exfiltration · 2024-07-22T23:54:37.311Z · LW · GW

I agree. In particular, the defender has a huge advantage due to the sheer scale of the model weights. All the defense has to do is raise the bar high enough that an adversary can't exfiltrate the weights within the lifetime of the model.

Even if the adversary gains access to the weak model, the filter still reduces the density of information that can be encoded, as I discuss here. I haven't done the experiments, but I would guess the reduction is significantly larger if the adversary doesn't have access to the weak model.

Various ways of changing the weak model weights or retraining the weak model with new data could also increase the size of the defender's advantage.

Comment by Adam Karvonen (karvonenadam) on Preventing model exfiltration with upload limits · 2024-07-22T23:18:57.106Z · LW · GW

I think it does stack if the perplexity filter is applied before the compression step. This relies on the underlying assumption that it's easier to secure the internet-connection choke point than the entire data center.

I will try to continue the discussion here for simplicity.

Comment by Adam Karvonen (karvonenadam) on Using an LLM perplexity filter to detect weight exfiltration · 2024-07-22T23:16:22.694Z · LW · GW

The perplexity filter works best if the attacker doesn't have complete control of the entire data center, but instead has limited access to some machines. The underlying assumption is that it's easier to secure the internet-connection choke point than everything going on inside the data center.

This can be stacked with compression by applying the perplexity filter before the compression step.

Even assuming the attacker has access to the perplexity filter model, the filter would still reduce the possible information density, since the attacker would have to use some sort of arithmetic encoding scheme that produces low-perplexity text. I had coworkers working on exactly this problem of encoding information into natural language using arithmetic encoding and GPT-2 small. There's a major trade-off between the perplexity of the generated language and the density of the encoded information, although I don't have numbers available to quantify it.

Comment by Adam Karvonen (karvonenadam) on Preventing model exfiltration with upload limits · 2024-07-22T20:44:09.892Z · LW · GW

Another potential option (that could stack with text compression) is to filter outgoing data. In particular, if we only allow outgoing data to be language tokens, and we filter the outgoing data to have low perplexity as measured by a small, secret language model, then it could become very difficult to encode terabytes of model weights in a way that has low perplexity.

I discuss this idea more here.
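As a rough sketch of the filtering logic, here is a toy version in Python. The unigram model is a stand-in for the small, secret language model, and the threshold is made up for illustration; a real filter would use an actual LM and tune the cutoff on normal traffic:

```python
import math
from collections import Counter

def perplexity(tokens, logprob):
    """Per-token perplexity of a token sequence under a log-prob model."""
    avg_nll = -sum(logprob(t) for t in tokens) / len(tokens)
    return math.exp(avg_nll)

# Toy stand-in for the "small, secret language model": a smoothed unigram
# model fit on a tiny reference corpus.
corpus = "the cat sat on the mat and the dog sat on the log".split()
counts = Counter(corpus)
total = sum(counts.values())

def unigram_logprob(token, alpha=1.0, vocab_size=1000):
    # Add-alpha smoothing so unseen tokens get small but nonzero probability.
    return math.log((counts[token] + alpha) / (total + alpha * vocab_size))

THRESHOLD = 500.0  # hypothetical cutoff, chosen for this toy model

def allow_outgoing(tokens):
    return perplexity(tokens, unigram_logprob) < THRESHOLD

print(allow_outgoing("the cat sat on the mat".split()))  # natural text: True
print(allow_outgoing("x9 qz 7f vv kk 0m".split()))       # weight-like noise: False
```

Natural-language traffic passes, while high-entropy data resembling encoded weights gets flagged, which is the asymmetry the proposal relies on.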

Comment by Adam Karvonen (karvonenadam) on Using an LLM perplexity filter to detect weight exfiltration · 2024-07-22T02:52:09.780Z · LW · GW

Thanks for this comment, by the way! I added a paragraph to the beginning to make this post more clear.

Comment by Adam Karvonen (karvonenadam) on Using an LLM perplexity filter to detect weight exfiltration · 2024-07-22T02:07:17.870Z · LW · GW

The purpose of this proposal is to prevent anyone from transferring model weights out of a data center. If someone wants to steal the weights and give them to China or another adversary, the weights have to leave either physically (a hard drive out the front door) or through the internet connection. If the facility has good physical security, then the weights have to leave through the internet connection.

If we also take steps to secure the internet connection, such as treating all outgoing data as language tokens and using a perplexity filter, then the model weights can be reasonably secure.

We don't even have to filter all outgoing data. If there were 1 gigabyte of unfiltered bandwidth per day, it would take 2,000 days to transfer GPT-4's 2 terabytes of weights out (although compression schemes could reduce this).
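As a quick sanity check on the arithmetic (using the weight-size figure from the comment):

```python
weights_bytes = 2e12       # ~2 TB of model weights (figure from the comment)
daily_leak_bytes = 1e9     # 1 GB/day of unfiltered bandwidth
days = weights_bytes / daily_leak_bytes
print(days)                # prints: 2000.0 (about 5.5 years)
# A hypothetical 10x compression scheme shrinks this proportionally:
print(days / 10)           # prints: 200.0
```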

Comment by Adam Karvonen (karvonenadam) on OthelloGPT learned a bag of heuristics · 2024-07-06T00:26:43.261Z · LW · GW

I would guess that it would learn an exact algorithm rather than heuristics. The challenging part for OthelloGPT is that the naive algorithm to calculate board state from input tokens requires up to 60 sequential steps, and it only has 8 layers to calculate the board state and convert this to a probability distribution over legal moves.
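To make the sequential-dependence point concrete, here is a sketch of the naive algorithm: the board after move t can only be computed from the board after move t-1, so a 60-move game implies up to 60 dependent update steps, which an 8-layer network cannot execute one-per-layer.

```python
# Naive sequential board-state computation for Othello.
def initial_board():
    board = [["."] * 8 for _ in range(8)]
    board[3][3], board[4][4] = "W", "W"   # standard 4-piece center setup
    board[3][4], board[4][3] = "B", "B"
    return board

DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def apply_move(board, row, col, player):
    """Place a piece and flip opponent pieces bracketed in any direction."""
    opponent = "W" if player == "B" else "B"
    board[row][col] = player
    for dr, dc in DIRS:
        flips, r, c = [], row + dr, col + dc
        while 0 <= r < 8 and 0 <= c < 8 and board[r][c] == opponent:
            flips.append((r, c))
            r, c = r + dr, c + dc
        if flips and 0 <= r < 8 and 0 <= c < 8 and board[r][c] == player:
            for fr, fc in flips:
                board[fr][fc] = player

def board_after(moves):
    board = initial_board()
    player = "B"  # Black moves first
    for row, col in moves:   # inherently sequential: step t needs step t-1
        apply_move(board, row, col, player)
        player = "W" if player == "B" else "B"
    return board

board = board_after([(2, 3)])  # Black's opening move flips the W piece at (3, 3)
print(board[3][3])             # prints: B
```

Each iteration of the loop in `board_after` reads the board produced by the previous iteration, which is exactly the dependency chain that is hard to parallelize across a fixed number of transformer layers.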

Comment by Adam Karvonen (karvonenadam) on OthelloGPT learned a bag of heuristics · 2024-07-06T00:24:14.289Z · LW · GW

I think it's pretty plausible that this is true, and that OthelloGPT is already doing something that's somewhat close to optimal within the constraints of its architecture. I have also spent time thinking about the optimal algorithm for next move prediction within the constraints of the OthelloGPT architecture, and "a bag of heuristics that promote / suppress information with attention to aggregate information across moves" seems like a very reasonable approach.

Comment by Adam Karvonen (karvonenadam) on OthelloGPT learned a bag of heuristics · 2024-07-06T00:20:08.196Z · LW · GW

In Othello, pieces must be played next to existing pieces, and the game is initialized with 4 pieces in the center. Thus, it's impossible for the top left corner to be played within the first 5 moves, and extremely unlikely in the early portion of a randomly generated game.

Comment by Adam Karvonen (karvonenadam) on A Chess-GPT Linear Emergent World Representation · 2024-04-01T00:23:02.479Z · LW · GW

I had the following results:

Stockfish level 2 vs Stockfish level 0, 0.01 seconds per move, 5k games:

0 random moves: win rate 81.2%
20 random moves: win rate 81.2%
40 random moves: win rate 77.9%

The 95% confidence interval is about ±1%.

Stockfish level 15 vs Stockfish level 9, 0.01 seconds per move, 5k games:

0 random moves: win rate 65.5%
20 random moves: win rate 72.8%
40 random moves: win rate 67.5%

Once again, the 95% confidence interval is about ±1%.

At 120 seconds per move, both of these level differences correspond to ~300 Elo: https://github.com/official-stockfish/Stockfish/commit/a08b8d4

These games used 0.01 seconds per move, and it appears that less search time lowers the effective Elo difference for level 15 vs level 9. A 65% win rate corresponds to a ~100 Elo difference, while an 81% win rate corresponds to a 250-300 Elo difference.
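These win-rate-to-Elo conversions follow the standard logistic Elo model and can be checked directly:

```python
import math

def winrate_to_elo(p):
    """Elo difference implied by an expected score p, from the standard
    logistic model: p = 1 / (1 + 10 ** (-elo_diff / 400))."""
    return -400 * math.log10(1 / p - 1)

print(round(winrate_to_elo(0.655)))  # ~111 Elo (level 15 vs level 9)
print(round(winrate_to_elo(0.812)))  # ~254 Elo (level 2 vs level 0)
```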

Honestly, I'm not too sure what to make of the results. One confounding variable is that in every case the higher-level player is White, and starting a game from a random position may favor the first player to move. Level 2 vs level 0 seems most applicable to the Chess-GPT setting.

Comment by Adam Karvonen (karvonenadam) on A Chess-GPT Linear Emergent World Representation · 2024-03-31T17:25:07.903Z · LW · GW

Both are great points, especially #1. I'll run some experiments and report back.

Comment by Adam Karvonen (karvonenadam) on A Chess-GPT Linear Emergent World Representation · 2024-02-09T02:56:02.272Z · LW · GW

That's an interesting idea, I may test that out at some point. I'm assuming the softmax would be for kings / queens, where there is typically only one on the board, rather than for e.g. blank squares or pawns?
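For illustration, here is a sketch (with hypothetical names and random data) of what softmaxing a linear probe's logits across the 64 squares might look like for a piece type with exactly one copy on the board:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 512  # hypothetical residual-stream width

residual = rng.normal(size=d_model)        # stand-in model activation at one position
W_probe = rng.normal(size=(64, d_model))   # hypothetical linear probe: one row per square

logits = W_probe @ residual                # (64,) one logit per board square
probs = np.exp(logits - logits.max())      # numerically stable softmax...
probs /= probs.sum()                       # ...over squares, not over piece classes

# Because there is exactly one king of a given color, the prediction is the
# single square with the highest probability.
predicted_square = int(probs.argmax())
print(predicted_square)
```

The design point is that the softmax encodes the "exactly one on the board" constraint, which independent per-square classifiers cannot express.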

Comment by Adam Karvonen (karvonenadam) on A Chess-GPT Linear Emergent World Representation · 2024-02-09T02:54:58.206Z · LW · GW

The engine trained on the all-Stockfish dataset played at a level 100-200 Elo higher in my tests, with a couple of caveats. First, I benchmarked the LLMs against Stockfish, so an all-Stockfish dataset seems helpful on this particular benchmark. Second, the Stockfish-trained LLM probably had an advantage in robustness, because I included a small percentage of Stockfish vs. random-move-generator games in the Stockfish dataset in the hope of improving that ability.

Unfortunately, I haven't done an in-depth qualitative assessment of their abilities, so I can't give a more detailed answer.

Comment by Adam Karvonen (karvonenadam) on A Chess-GPT Linear Emergent World Representation · 2024-02-09T02:48:34.791Z · LW · GW

Yes. In this recent OpenAI superalignment paper, they say that GPT-4's training dataset included a collection of chess games filtered for players with Elo greater than 1800. Given gpt-3.5-turbo-instruct's ability, I'm guessing its dataset included a similar collection.