Would it be useful to collect the contexts where various LLMs think the same?

post by Martin Vlach (martin-vlach) · 2023-08-24T22:01:50.426Z · LW · GW · No comments

This is a question post.


My initial idea was: let's see where a small, interpretable model makes the same inference as a huge, dangerous model, and focus on those cases in the small model to help explain the bigger one. Quite likely I am wrong, but with a tiny chance for good impact, I have set up a repository.
I would love your feedback on this direction before I actually start generating the pairs/sets of contexts plus the LMs that match on each context.
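As a minimal sketch of what "generating the pairs" could look like: given two models exposed as next-token predictor functions, collect the contexts on which both make the same prediction. The predictor functions below are toy stand-ins (my own illustration, not from the post or its repository); in practice they would wrap a small and a large LM.

```python
def collect_agreements(contexts, predict_small, predict_large):
    """Return the contexts on which both models make the same prediction."""
    matches = []
    for ctx in contexts:
        if predict_small(ctx) == predict_large(ctx):
            matches.append(ctx)
    return matches

# Toy predictors standing in for real models:
small = lambda ctx: ctx.split()[-1]          # echoes the last word verbatim
large = lambda ctx: ctx.split()[-1].lower()  # agrees only when it is lowercase

contexts = ["the cat sat", "The Cat SAT", "a dog ran"]
print(collect_agreements(contexts, small, large))
# → ['the cat sat', 'a dog ran']
```

The interesting output is the agreement set itself: those are the contexts where one might hope the small model's (more interpretable) internals explain the large model's behaviour.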

Answers

answer by [deleted] · 2023-08-25T04:49:12.140Z · LW(p) · GW(p)

(I haven't done any interpretability research, and I'm just trying to think about this idea logically.) This seems like a good idea to me! It's possible that the same neural patterns in the small model occur in the larger ones to generate those outputs. If this is only true some of the time, but sometimes the large model uses a different process (e.g., "simulating the underlying real-world process which led to that output"), then that could also be interesting.

No comments
