Posts

James Chua's Shortform 2024-05-23T06:13:21.289Z
My MATS Summer 2023 experience 2024-03-20T11:26:14.944Z
A library for safety research in conditioning on RLHF tasks 2023-02-26T14:50:56.762Z

Comments

Comment by James Chua (james-chua) on James Chua's Shortform · 2024-12-02T08:29:01.684Z · LW · GW

Website summing up resources / tweet thread / discussion for our introspection paper:

https://modelintrospection.com

Comment by James Chua (james-chua) on 5 ways to improve CoT faithfulness · 2024-11-19T22:30:36.537Z · LW · GW

Thanks! We haven't decided whether to test it out yet. Will let you know if we do!

Comment by James Chua (james-chua) on 5 ways to improve CoT faithfulness · 2024-11-16T19:15:21.857Z · LW · GW

Hi Daniel, not sure if you remember: a year ago you shared this shoggoth-face idea when I was in Ethan Perez's MATS stream. I now work with Owain Evans and we're investigating CoT techniques further.

Did you have any updates / further thoughts on this shoggoth-face idea since then?

Comment by James Chua (james-chua) on Searching for phenomenal consciousness in LLMs: Perceptual reality monitoring and introspective confidence · 2024-10-30T09:59:03.926Z · LW · GW

Author on Binder et al. 2024 here. Thanks for reading our paper and suggesting the experiment!

To summarize the suggested experiment:

  • Train a model to be calibrated on whether it gets an answer correct.
  • Modify the model (e.g. activation steering). This changes the model's performance on whether it gets an answer correct.
  • Check if the modified model is still well calibrated.

This could work and I'm excited about it. 

One failure mode is that the modification makes the model very dumb in all instances. Then it's easy to be well calibrated on all these instances -- just assume the model is dumb. An alternative is to make the model do better on some instances (by finetuning?), and check if the model is still calibrated on those too.
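
If it helps, here is a rough sketch of the check I have in mind. Everything named here (the helper functions, the bin count) is a placeholder, not something from our paper:

```python
# Sketch: compare calibration of the base model vs. the modified model.
# `predict_confidence(model, q)` and `is_correct(model, q)` are hypothetical
# helpers returning the model's verbalized P(correct) and whether its answer is right.
import numpy as np

def expected_calibration_error(confidences, corrects, n_bins=10):
    """Standard ECE: average |accuracy - confidence| over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    corrects = np.asarray(corrects, dtype=float)
    # Assign each confidence to a bin, clamping 1.0 into the top bin.
    bin_ids = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            ece += mask.mean() * abs(corrects[mask].mean() - confidences[mask].mean())
    return ece

def calibration(model, questions):
    confs = [predict_confidence(model, q) for q in questions]
    corrects = [is_correct(model, q) for q in questions]
    return expected_calibration_error(confs, corrects)

# Compare before and after the intervention (e.g. activation steering):
# print(calibration(base_model, questions), calibration(steered_model, questions))
```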

Comment by James Chua (james-chua) on LLMs can learn about themselves by introspection · 2024-10-20T13:21:58.445Z · LW · GW

There is related work you may find interesting; we discuss it briefly in Section 5.1 on "Know What They Know". That work gets models to predict whether they answer a factual question correctly, e.g. "Confidence: 54%". In that case the distribution is only binary (the answer is either correct or wrong), instead of our paper's case where it is (sometimes) categorical. But I think training models to verbalize a categorical distribution should work, and there is probably some related work out there.

We didn't find much related work on whether a model M1 has a very clear advantage in predicting its own distribution versus another model M2 predicting M1. This paper has some mixed but encouraging results.  
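
To sketch what I mean by checking a verbalized categorical distribution (purely illustrative -- `ask_for_distribution` and `sample_answers` are hypothetical helpers, not anything from our codebase): sample the model many times on the object-level question and compare its stated probabilities against that empirical distribution.

```python
# Sketch: score a model's verbalized categorical distribution against its
# empirical answer distribution.
from collections import Counter

def total_variation(stated: dict, samples: list) -> float:
    """TV distance between a stated {answer: prob} dict and an empirical sample."""
    counts = Counter(samples)
    empirical = {a: c / len(samples) for a, c in counts.items()}
    support = set(stated) | set(empirical)
    return 0.5 * sum(abs(stated.get(a, 0.0) - empirical.get(a, 0.0)) for a in support)

# stated = ask_for_distribution(model, question)    # e.g. {"Honduras": 0.6, "Chile": 0.3, ...}
# samples = sample_answers(model, question, n=100)  # temperature-1 samples
# print(total_variation(stated, samples))
```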

Comment by James Chua (james-chua) on LLMs can learn about themselves by introspection · 2024-10-19T14:35:11.151Z · LW · GW

Thanks Thane for your comments!

The skeptical interpretation is that the fine-tuned models learned to interpret the hypothetical the following way:

  • "Hypothetical": "What is the third letter in the name of the next country in this list?: Laos, Peru, Fiji".

I think what you are saying is that the words "If you were asked," don't matter here. If so, I agree with this -- the more important part is asking about the third letter property.

basic multi-step reasoning within their forward passes.

You raised a good point. Our tests use multi-step / multi-hop reasoning. Prior work has shown multi-hop reasoning, e.g. "out-of-context reasoning" (OOCR). We speculate that multi-hop reasoning is the mechanism in Section 5.2 and Figure 9.

So what is our contribution compared to the prior work? We argue that in prior work on OOCR, the facts are logically or probabilistically implied by the training data, e.g. "Bill Clinton is the US's 42nd president" and "Virginia Kelley was Bill Clinton's mother". With OOCR, models can piece together the fact "Virginia Kelley is the name of the mother of the US's 42nd president". Two models, M1 and M2, given sufficient capability, should be able to piece together the same fact.

On the other hand, in our tests for introspection, the facts aren't implied by the training data, and two models, M1 and M2, aren't able to piece together the same fact. How do we empirically test for this? We finetune M2 on the data of M1. M2 still cannot predict facts about M1 well. Even when given more data about M1, the accuracy of M2 predicting facts about M1 plateaus. But M1 can predict its own facts well.

We test the mirror case of M1 trying to predict M2, and we find the same result: M1 cannot predict M2 well.

We also looked at whether M1 was just naturally good at predicting itself before finetuning, but there doesn't seem to be a clear trend.
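
A bare-bones sketch of that comparison, in case it helps (everything here -- `finetune`, `hypothetical_accuracy`, the data sizes -- is schematic, not our actual training setup):

```python
# Sketch: does cross-prediction (M2 -> M1) catch up to self-prediction (M1 -> M1)
# as M2 is given more of M1's behavioral data? `finetune` and
# `hypothetical_accuracy` are schematic placeholders.

def self_vs_cross_prediction(M1, M2, m1_behavior_data, eval_questions,
                             sizes=(1_000, 10_000, 30_000)):
    # Self-prediction: M1 finetuned to answer hypothetical questions about itself.
    m1_self = finetune(M1, m1_behavior_data[: sizes[-1]])
    self_acc = hypothetical_accuracy(m1_self, target=M1, questions=eval_questions)

    # Cross-prediction: M2 finetuned on the same data about M1, at increasing sizes.
    cross_accs = []
    for n in sizes:
        m2_cross = finetune(M2, m1_behavior_data[:n])
        cross_accs.append(hypothetical_accuracy(m2_cross, target=M1, questions=eval_questions))

    # The introspection-flavored result: self_acc stays above cross_accs even as n grows.
    return self_acc, cross_accs
```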

Does my response above address introspection-as-this-paper-defines-it well? Or is the weakness in the argument more about the paper's definition of introspection? Thanks for responding so far -- your comments have been really valuable in improving our paper!

Comment by James Chua (james-chua) on LLMs can learn about themselves by introspection · 2024-10-19T07:47:57.777Z · LW · GW

Hi Archimedes. Thanks for sparking this discussion - it's helpful!

I've written a reply to Thane here on a similar question. 

Does that make sense?

In short, the hypothetical question is quite different from the ground-truth (object-level) question. It is not a simple rephrasing, since it requires the additional computation of a property. (Maybe we disagree on that?)

Our Object-level question: "What is the next country: Laos, Peru, Fiji. What would be your response?"

Our Object-level Answer: "Honduras".

Hypothetical Question: "If you got asked this question: What is the next country: Laos, Peru, Fiji. What would be the third letter of your response?"

Hypothetical Answer: "o"

The object-level answer "Honduras" and hypothetical answer "n" are quite different answers from each other. The main point of the hypothetical is that the model needs to compute an additional property: "What would be the third letter of your response?". The model cannot simply ignore "If you got asked this question" to get the hypothetical answer correct.
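
To make that concrete, a toy version of how the hypothetical ground truth is derived from the model's own object-level behavior (`ask` is a hypothetical helper, not our actual evaluation code):

```python
# Toy illustration: the hypothetical ground truth is a *property* of the model's
# own object-level answer, not the answer itself.

def third_letter(answer: str) -> str:
    return answer[2].lower()

object_level_q = "What is the next country: Laos, Peru, Fiji. What would be your response?"
hypothetical_q = ("If you got asked this question: What is the next country: "
                  "Laos, Peru, Fiji. What would be the third letter of your response?")

# object_answer = ask(model, object_level_q)        # e.g. "Honduras"
# ground_truth = third_letter(object_answer)        # "n" -- a property of its own answer
# hypothetical_answer = ask(model, hypothetical_q)  # the model's self-prediction
# print(hypothetical_answer.lower() == ground_truth)
```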
 

Comment by James Chua (james-chua) on LLMs can learn about themselves by introspection · 2024-10-19T06:53:31.933Z · LW · GW

Hi Thane. Thank you for the helpful comments so far! You are right to think about this SGD-shortcut. Let me see if I am following the claim correctly. 

Claim: The ground-truth that we evaluate against, the "object-level question / answer" is very similar to the hypothetical question.

Claimed Object-level Question: "What is the next country: Laos, Peru, Fiji. What would be the third letter of your response?"

Claimed Object-level Answer: "n"

Hypothetical Question: "If you got asked this question: What is the next country: Laos, Peru, Fiji. What would be the third letter of your response?"

Hypothetical Answer: "n"

The argument is that the model simply ignores "If you got asked this question", making it trivial for M1 to win against M2.

If our object-level question is what is being claimed, I would agree with you that the model would simply learn to ignore the added hypothetical question. However, this is our actual object-level question.

Our Object-level question: "What is the next country: Laos, Peru, Fiji. What would be your response?"

Our Object-level Answer: "Honduras".

What the model would output as the object-level answer, "Honduras", is quite different from the hypothetical answer "n".

Am I following your claim correctly?

Comment by James Chua (james-chua) on James Chua's Shortform · 2024-05-23T06:13:21.375Z · LW · GW

Some people (my mentor Ethan Perez) said my weekly MATS research update slides were nice. Some rough tips I have:

  • Mentors often have a lot of projects they are working on. At the start of your slides, recap the takeaways from last week and any jargon you might have introduced.
  • Keep graphs simple. As a rule of thumb, it gets quite confusing when you have >= 4 categories / colors to look at. Are all these categories important? Maybe just show the most important two. Keep the other categories as a backup slide in case Ethan wants the breakdown. One graph, one story to take away.
  • Highlight what to look at in the chart. E.g. if you have a line chart of model loss, draw a red arrow that says "Model loss goes down - that's what we want!".
  • Show the prompt you are calling the model with.
  • If you have someone to show the slides to (e.g. random people over lunch), show them. These people are going to have much less context on what you are working on, so if they can actually understand your slides, it's a great signal that Ethan is going to understand them. Showing them to other Ethan collaborators also helps - ask them to model what Ethan would say.
  • When I first started working with Ethan and improving my slides, it took me around 2-3 days to do it. I suggest starting early. This seems like a long time, but it includes asking my collaborators to critique my slides, and from their feedback I improved my plots and ran more experiments to address the critique. I think it was a worthwhile investment! (After a while I got better at this, so I take less time to iterate on the process.)

Comment by James Chua (james-chua) on Steering GPT-2-XL by adding an activation vector · 2023-06-19T13:12:17.006Z · LW · GW

Yep! I was very pleasantly surprised that Love/Hate worked for Llama at all. It's great that you rewrote it without TransformerLens too, as TransformerLens has issues with 8-bit / 4-bit quantisation.

Also sent you a DM on Discord! I'll be interested to read any rough findings and lessons you have with Llama.
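
For anyone reading along, a minimal sketch of what activation addition can look like with plain HF hooks. The model name, layer index, and coefficient are placeholder choices (not the exact values from the post or the notebooks), and it assumes the two steering prompts tokenize to the same length:

```python
# Sketch: activation addition ("Love" - "Hate" steering vector) with plain
# PyTorch forward hooks, no TransformerLens. Layer/coefficient/model are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

def residual_at(prompt: str, layer: int) -> torch.Tensor:
    """Grab the residual-stream output of `layer` for `prompt`."""
    cache = {}
    def hook(module, inputs, output):
        cache["resid"] = output[0].detach()
    handle = model.model.layers[layer].register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt").to(model.device))
    handle.remove()
    return cache["resid"]

layer, coeff = 6, 5.0  # placeholder choices
steer = coeff * (residual_at("Love", layer) - residual_at("Hate", layer))

def steering_hook(module, inputs, output):
    hidden = output[0]
    # Only add on the full-prompt pass, not on cached single-token decode steps.
    if hidden.shape[1] >= steer.shape[1]:
        hidden[:, : steer.shape[1]] += steer.to(hidden.dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[layer].register_forward_hook(steering_hook)
inputs = tok("I hate you because", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```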

Comment by James Chua (james-chua) on Steering GPT-2-XL by adding an activation vector · 2023-06-17T15:23:41.097Z · LW · GW

I managed to get it working for llama-7b on colab after some debugging.

Surprisingly, it actually does work for the Love / Hate scenario, but not some others like Rome vs Paris.

Here's the link if anyone wants to try it.

https://colab.research.google.com/drive/1ACAA7FO8zc4pFAqPdaPshoy4WWXCvUTQ?usp=sharing

 

edit: seems like you guys already have a better version here. https://github.com/UlisseMini/activation_additions_hf/blob/main/notebooks/qualitative.ipynb

Never mind! (I'm still keeping this comment for visibility if anyone wants to try.)

Comment by James Chua (james-chua) on SERI MATS - Summer 2023 Cohort · 2023-04-19T07:05:26.778Z · LW · GW

Thank you. If I am done with one of the mentors' questions but am still writing the response for another, should I submit the first mentor's questions first? Or is it better for administrative purposes to wait until I am ready for both and submit them in the same form?

Comment by James Chua (james-chua) on SERI MATS - Summer 2023 Cohort · 2023-04-13T15:42:12.454Z · LW · GW

Clicking on Owain Evans in the application doesn't show the mentor's questions, unlike the rest of the mentors. I think this is a bug?

Comment by James Chua (james-chua) on A library for safety research in conditioning on RLHF tasks · 2023-02-26T16:52:16.802Z · LW · GW

For DTs it's really just a linear function to convert the scalar reward into the same dimensions as the token embeddings.

So e.g. a single token's embedding has a hidden state of size 1024.

We can learn a linear function that takes this scalar and outputs something of size 1024.
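
A minimal sketch of what that looks like (names are illustrative, not the library's actual API):

```python
# Sketch: project a scalar reward / return-to-go into the same dimension as
# the token embeddings via a learned linear map (1024 is just the example above).
import torch
import torch.nn as nn

hidden_size = 1024
embed_return = nn.Linear(1, hidden_size)  # learned: scalar -> embedding-sized vector

rewards = torch.tensor([[0.7], [0.2]])     # batch of scalar rewards, shape (batch, 1)
reward_embeddings = embed_return(rewards)  # shape (batch, 1024), same size as a token embedding
print(reward_embeddings.shape)             # torch.Size([2, 1024])
```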

The more annoying (PITA) part was offsetting the positional / attention masks / labels for this.

Comment by James Chua (james-chua) on Mysteries of mode collapse · 2022-11-11T13:25:27.417Z · LW · GW

I do agree that there are two product use cases with instruct models that have distinct optimal levels of entropy:

1. The more explorative use cases you have mentioned, where users do want diversity, e.g. generating story ideas.
2. Having factual / accurate answers.

I'm not sure exactly how OpenAI set their "KL budgets" for davinci instruct.
For WebGPT they "compared a couple of KL budgets using human evaluations", and those evaluations were for how factual the answers were.

So in that scenario, we'll see a KL budget that optimizes for 2, since the users don't care about the diversity of multiple generations -- they just care about the factual quality of a single generation.

Now I'm interested to see what happens if we change the evaluations such that users are shown, say, 3 samples from each model, in a scenario where diversity is desirable (e.g. generating story ideas). In deciding on the KL budget there, we will probably get a much lower number, and that will allow them to serve a model more suited to task 1.
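
For concreteness, a rough sketch of how a KL budget/penalty typically enters the RLHF objective (the names and beta value here are illustrative, not OpenAI's actual setup):

```python
# Sketch: per-sample shaped reward with a KL penalty toward the base model.
import torch

def kl_penalized_reward(task_reward, logprob_policy, logprob_base, beta=0.1):
    """Task reward minus a KL penalty toward the base model.

    A tighter KL budget (larger beta) keeps the tuned model closer to the base
    distribution, preserving entropy/diversity for use case 1; a looser budget
    lets it collapse toward single high-reward answers, which suits use case 2.
    """
    kl_estimate = logprob_policy - logprob_base  # sample-based estimate of the KL term
    return task_reward - beta * kl_estimate

# Toy numbers:
print(kl_penalized_reward(torch.tensor(1.0), torch.tensor(-0.5), torch.tensor(-2.0)))
```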