Comments

Comment by tjbai on What the cost difference in processing input vs. output tokens with LLMs? · 2024-08-08T17:08:46.838Z

Output tokens certainly do not scale linearly in cost, even with a KV cache. The cache means you don't need to recompute the key/value vectors for each of the previous tokens, but you still need to compute n query-key dot products for the (n+1)-st token, so per-token cost grows with context length and total generation cost grows quadratically.
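To make the counting concrete, here is a minimal single-head NumPy sketch of one decoding step against a KV cache; the function name and shapes are purely illustrative, there is no batching or multi-head logic, and the new token's own key/value are omitted so it attends only over the n cached positions.

```python
import numpy as np

def decode_step(q_new, k_cache, v_cache):
    """One attention step for a new token against a KV cache (illustrative).

    q_new:   (d,)   query vector for the (n+1)-st token
    k_cache: (n, d) cached key vectors for the previous n tokens
    v_cache: (n, d) cached value vectors for the previous n tokens
    """
    d = q_new.shape[-1]
    # n query-key dot products: per-token cost grows linearly with the
    # context length n, so generating T output tokens is O(T^2) attention
    # work in total, even though nothing in the cache is recomputed.
    scores = k_cache @ q_new / np.sqrt(d)      # (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over the n positions
    return weights @ v_cache                   # (d,) attention output

# Toy usage with a 5-token cache and an 8-dimensional model.
rng = np.random.default_rng(0)
n, d = 5, 8
out = decode_step(rng.normal(size=d), rng.normal(size=(n, d)), rng.normal(size=(n, d)))
```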

Comment by tjbai on An issue with training schemers with supervised fine-tuning · 2024-06-28T21:28:03.461Z

One more note: "Solving the problem in theory" is also equivalent to the [forward training algorithm](https://www.cs.cmu.edu/~sross1/publications/Ross-AIStats10-paper.pdf), which preceded DAgger and comes from the same authors.

I do think there are some interesting ideas to consider in the alignment setting. For example, the chunk size k corresponds to the number of roll-out steps in imitation learning (IL). "Chunking" the roll-out into a fixed window is a common optimization when the task has a long time horizon and the expert is expensive to query. On the other hand, longer roll-outs provide stronger guarantees on how well the learned policy matches the expert.

Classically, this is a simple tradeoff between performance and speed. But, as you mention, k must also be kept intentionally small so that the AI cannot detect that it is being trained on human generations. How does one choose the chunk size to both achieve strong alignment and avoid discrimination? Dynamic roll-out strategies have been proposed in the IL literature, though I'm not very familiar with them.
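To make the tradeoff concrete, here is a rough sketch of what a chunked, DAgger-style loop could look like. Everything here is an assumption for illustration: `env`, `policy`, `expert`, and `train` are hypothetical stand-ins (a gym-like `env.step` returning `(next_state, done)`, an `expert` that labels visited states, and a `train` routine that refits the policy on the aggregated dataset), and the expert-mixing schedule from the original DAgger paper is omitted.

```python
def chunked_dagger(env, policy, expert, chunk_size_k, n_iters, train):
    """Illustrative DAgger-style loop with fixed-size roll-out chunks.

    Hypothetical interfaces (not from the post or any specific library):
    - policy(state) and expert(state) each return an action
    - env.reset() returns an initial state; env.step(action) returns (next_state, done)
    - train(policy, dataset) refits the policy on (state, expert_action) pairs
    """
    dataset = []
    for _ in range(n_iters):
        state = env.reset()
        # Roll the learner out for only k steps: a smaller k limits how far the
        # trajectory can drift from human-like states within a chunk, while a
        # larger k gives better coverage of the states the final policy will visit.
        for _ in range(chunk_size_k):
            action = policy(state)                   # learner picks the action
            dataset.append((state, expert(state)))   # expert labels the visited state
            state, done = env.step(action)
            if done:
                break
        policy = train(policy, dataset)              # retrain on the aggregated data
    return policy
```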

Comment by tjbai on An issue with training schemers with supervised fine-tuning · 2024-06-28T20:10:19.353Z

It's not clear to me that you do get stronger guarantees, because the setting and method are so similar to those of classical imitation learning. In both cases, we seek to learn a policy that is aligned with the expert (the human). Supervised fine-tuning (behavioral cloning) is problematic because of distribution shift: the learned policy accumulates error (at a rate quadratic in the horizon!) and visits states it did not see in training.

You say this failure mode is dangerous because of scheming AI, and I say it's dangerous because the policy is OOD, but it appears you agree that the AI only "recognizes" it's not in training because of distribution shift: "Halfway through the generation, the AI could detect those imitation mistakes..." To me, the differing justifications for why the AI performs poorly/dangerously are a matter of interpretation, not a fundamental difference.

I also don't think it's fair to describe DAgger as just "correcting errors that humans can recognize," because it actually provides formal bounds on error accumulation (sketched below), which would appear to limit the failure mode you describe here. Admittedly, I'm very new to safety research as a whole, but this feels a bit like a reframing of an old problem.
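For reference, the bounds I have in mind are roughly the following (in the style of the papers linked in this thread; J is a policy's expected cost over a horizon of T steps, ε is the per-step imitation error, and constants and no-regret terms are swept into the O(·)):

```latex
% Behavioral cloning / supervised fine-tuning: errors compound over the horizon.
J(\hat{\pi}_{\mathrm{BC}}) \;\le\; J(\pi^{*}) + O(\varepsilon T^{2})

% DAgger: training on states the learner itself visits keeps the growth linear.
J(\hat{\pi}_{\mathrm{DAgger}}) \;\le\; J(\pi^{*}) + O(\varepsilon T)
```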

Comment by tjbai on An issue with training schemers with supervised fine-tuning · 2024-06-27T16:41:10.659Z

How does this differ from DAgger (https://arxiv.org/abs/1011.0686)?