Posts

Litigate-for-Impact: Preparing Legal Action against an AGI Frontier Lab Leader 2024-12-07T21:42:29.038Z
Bridging the VLM and mech interp communities for multimodal interpretability 2024-10-28T14:41:41.969Z
Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent 2024-07-18T17:02:06.179Z
Laying the Foundations for Vision and Multimodal Mechanistic Interpretability & Open Problems 2024-03-13T17:09:17.027Z

Comments

Comment by Sonia Joseph (redhat) on Litigate-for-Impact: Preparing Legal Action against an AGI Frontier Lab Leader · 2024-12-07T23:01:16.306Z · LW · GW

Ok, thank you for your openness. I find that in-person conversations about sensitive matters like these are easier, as tone, facial expression, and body language matter a great deal here. It is possible that my past comments on EA that you refer to came off as more hostile than intended due to the text-based medium.

Fwiw, the contents of this original post actually have nothing to do with EA itself, or the past articles that mentioned me.

Comment by Sonia Joseph (redhat) on Litigate-for-Impact: Preparing Legal Action against an AGI Frontier Lab Leader · 2024-12-07T22:25:00.602Z · LW · GW

Hi habryka,


Thank you for your comment. It contains a few assumptions that are not quite true. I am not sure that the comment section here is the best place to address them, and in-person diplomacy may be wise. I would be down to get coffee the next time we are in the same city and discuss in more detail.

Comment by Sonia Joseph (redhat) on Litigate-for-Impact: Preparing Legal Action against an AGI Frontier Lab Leader · 2024-12-07T21:45:44.614Z · LW · GW

Apologies, the post is still getting approved by the EA forum as I've never posted there under this account.

Comment by Sonia Joseph (redhat) on Bridging the VLM and mech interp communities for multimodal interpretability · 2024-10-29T13:49:54.611Z · LW · GW

Good note, thank you.

Comment by Sonia Joseph (redhat) on Laying the Foundations for Vision and Multimodal Mechanistic Interpretability & Open Problems · 2024-03-28T23:44:09.116Z · LW · GW

Thanks for your comment. Some follow-up thoughts, especially regarding your second point:

There is sometimes an implicit zeitgeist in the mech interp community that other modalities will simply be an extension or subcase of language. 

I want to flip the frame, and consider the case where other modalities may actually be a more general case for mech interp than language. As a loose analogy, the relationship between language mech interp and multimodal mech interp may be like the relationship between algebra and abstract algebra. I have two points here.

Alien modalities and alien world models

The reason I'm personally so excited by non-language mech interp comes from the philosophy of language (Chomsky/Wittgenstein). I've been having intuitions similar to your second point. Language is an abstraction layer on top of perception. It is largely optimized by culture, social norms, and language games. Modern English is not the only way to discretize reality, just the way our current culture happens to discretize it.

To present my point in a more sci-fi way, non-language mech interp may be more general because now we must develop machinery to deal with alien modalities. And I suspect many of these AI models will have very alien world models! Looking at the animal world, animals communicate through all sorts of modalities: bees seeing in ultraviolet light, turtles navigating by magnetic fields, birds predicting weather changes through barometric pressure sensing, aquatic animals sensing dissolved gases in the water, etc. Various AGIs may have sensors that take in all sorts of “alien” data that human language may not be equipped for. I am imagining a scenario in which a superintelligence discretizes the world in seemingly arbitrary ways, or perhaps following a hidden logic based on its objective function.

Language is already optimized by humans to modularize reality in a nice, clean way. Perception already filtered through language is by definition human-interpretable, so the deck is largely stacked in our favor. You allude to this with your point about photographers, dancers, etc. developing their own language to describe subtle patterns in perception that the average human does not have words for. Wine connoisseurs develop vocabulary to discretize complex wine-tasting percepts into words like “bouquet” and “mouth-feel.” Make-up artists coin new vocabulary around contouring, highlighting, cutting the crease, etc. to describe subtle artistry that may be imperceptible to the average human.

I can imagine a hypothetical sci-fi scenario where the only jobs available involve apprenticing yourself to a foundation model at a young age, for life: deeply understanding its world model, and communicating its unique and alien world model to the very human realm of your local community (maybe through developing a jargon or dialect, or even through some kind of art, like poetry or dance, communication forms humans currently use to bypass the limitations of language).

Self-supervised vision models like DINO are free of a lot of human biases but may not have as interpretable a world model as CLIP, which is co-optimized with language. I believe DINO's lack of language bias is either a safety issue or a superpower, depending on the context (a safety issue in that we may not understand this “alien” world model, but a superpower in that DINO may be freer of human biases that are, in many contexts, unwanted!).

As a toy example, in this post, the above vision transformer classifies the children playing with the lion as “abaya.” This is an ethnically biased classification, but the ViT has only 1k ImageNet concepts to choose from. The limits of its dictionary are quite literally the limits of its world (in a Wittgensteinian sense)! But there are so many other concepts we could create to describe the image!
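
To make the dictionary limit concrete, here is a minimal sketch of inspecting a 1k-class ViT's top predictions. I'm using torchvision's pretrained ViT-B/16 as a stand-in (not necessarily the exact model from the post), and the image path is an illustrative assumption:

```python
# Minimal sketch: a classifier with a fixed 1,000-class "dictionary" can
# only answer within that dictionary, whatever the image contains.
# torchvision's ViT-B/16 stands in for the post's model; the image file
# is a hypothetical placeholder.
import torch
from PIL import Image
from torchvision.models import vit_b_16, ViT_B_16_Weights

weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights).eval()
preprocess = weights.transforms()

image = preprocess(Image.open("children_with_lion.jpg")).unsqueeze(0)  # hypothetical file
with torch.no_grad():
    probs = model(image).softmax(dim=-1)

top = probs.topk(5, dim=-1)
for p, idx in zip(top.values[0], top.indices[0]):
    # Every answer is forced into one of the 1,000 ImageNet categories.
    print(f"{weights.meta['categories'][idx]}: {p.item():.3f}")
```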

Text-perception manifolds

Earlier, I mentioned that English is currently the way our culture happens to discretize reality, and there may be other coherent ways to discretize the same reality. 

Consider the scene of a fruit bowl on a table. You can start asking questions such as: How many ways are there to carve up this scene into language? How many ways can we describe this fruit bowl in English? In all human languages, including languages that don't have the concepts of fruit or bowls? In all possible languages? (which takes us to Chomsky). These questions have a real-analysis flavor to them, in that you're mapping continuous perception to discrete language (yes, perception is represented discretely on a computer, but there may be advantages to framing it continuously). This manifold may be very useful in navigating alignment problems.
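
As a toy probe of this many-descriptions-of-one-scene idea, one could score alternative carvings of the same image with CLIP. The model name is real; the image path and captions are illustrative assumptions:

```python
# Toy probe: several very different linguistic carvings of one scene can
# all score highly under CLIP, i.e. the text-to-perception mapping is
# many-to-one. Image path and captions are illustrative.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = [
    "a bowl of fruit on a table",
    "apples and bananas in a ceramic dish",
    "a still life arrangement",
    "round colorful objects in a concave container",  # avoids fruit/bowl concepts
]
image = Image.open("fruit_bowl.jpg")  # hypothetical image

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    sims = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for caption, score in zip(captions, sims):
    print(f"{score:.3f}  {caption}")
```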

For example, there was a certain diffusion model that would always generate salads in conjunction with women due to a spurious correlation. One question I'm deeply interested in: is there a way to represent the model's text-to-perception world model as a manifold, and then modify that manifold to decorrelate women and salads?
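
One naive first step is sketched below. This is my toy framing in CLIP text-embedding space, not a full manifold treatment and not an actual intervention on a diffusion model: take the “salad” direction and project it out of the “woman” embedding.

```python
# Hedged sketch: decorrelation-by-projection in CLIP text space. A real
# edit of a diffusion model's world model would be far more involved;
# this only illustrates the geometric idea.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(text: str) -> torch.Tensor:
    inputs = processor(text=[text], return_tensors="pt", padding=True)
    with torch.no_grad():
        e = model.get_text_features(**inputs)[0]
    return e / e.norm()

woman, salad = embed("a photo of a woman"), embed("a photo of a salad")

# Orthogonalize: subtract the projection of "woman" onto the "salad" direction.
decorrelated = woman - (woman @ salad) * salad
decorrelated = decorrelated / decorrelated.norm()

print("similarity before:", (woman @ salad).item())
print("similarity after: ", (decorrelated @ salad).item())  # ~0 by construction
```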

A text-image manifold formalization could further answer useful questions about granularity. For example, consider a man holding an object, where “object” can map to anything from a teddy bear to a gun. By representing as a manifold the mapping between the text/semantics of the word “object” and the perceptual space of teddy bears, guns, and other pixel blobs that humans might label as objects, we could capture the model's language-to-perception world model in a formal mathematical structure.
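
A crude way to see the granularity problem in embedding space (again a toy framing, not the formalization itself): the coarse phrase should sit near both very different refinements. Model name is real; prompts are illustrative.

```python
# Toy granularity check: one coarse description ("an object") is similar
# to perceptually distant, safety-relevant refinements ("a teddy bear",
# "a gun") at once.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(text: str) -> torch.Tensor:
    inputs = processor(text=[text], return_tensors="pt", padding=True)
    with torch.no_grad():
        e = model.get_text_features(**inputs)[0]
    return e / e.norm()

coarse = embed("a man holding an object")
for fine in ["a man holding a teddy bear", "a man holding a gun"]:
    print(fine, "->", (coarse @ embed(fine)).item())
```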


The above two points are currently just intuitions pending formalization. I have a draft post on why I’m so drawn to non-language interp for these reasons, which I can share soon.

Comment by Sonia Joseph (redhat) on Influence functions - why, what and how · 2024-03-27T05:34:11.105Z · LW · GW

Thank you for this. How would you think about the pros and cons of influence functions versus activation patching or direct logit attribution for localizing a behavior in the model?
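
For concreteness, here's roughly the kind of localization experiment I have in mind on the patching side of that comparison. This is a TransformerLens sketch with the standard IOI prompts, not code from the post:

```python
# Rough activation-patching sketch: cache a clean run, then overwrite the
# residual stream at the final position on a corrupted run, layer by
# layer, and see where the clean behavior is restored.
import transformer_lens.utils as utils
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

clean = model.to_tokens("When John and Mary went to the store, John gave a drink to")
corrupt = model.to_tokens("When John and Mary went to the store, Mary gave a drink to")
mary, john = model.to_single_token(" Mary"), model.to_single_token(" John")

_, clean_cache = model.run_with_cache(clean)

def patch_final_pos(resid, hook):
    # Swap in the clean residual stream at the last token position only.
    resid[:, -1, :] = clean_cache[hook.name][:, -1, :]
    return resid

for layer in range(model.cfg.n_layers):
    name = utils.get_act_name("resid_post", layer)
    logits = model.run_with_hooks(corrupt, fwd_hooks=[(name, patch_final_pos)])
    # Logit difference toward the clean answer (" Mary") after patching.
    print(layer, (logits[0, -1, mary] - logits[0, -1, john]).item())
```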

Comment by Sonia Joseph (redhat) on Laying the Foundations for Vision and Multimodal Mechanistic Interpretability & Open Problems · 2024-03-14T17:01:19.886Z · LW · GW

Right now, there's a lot to exploit with CLIP and ViTs, so that will be the focus for a while. We may expand to Flamingo or other models if there is demand.

Other modalities would be fascinating. I imagine they have their own idiosyncrasies. I would be interested in audio in the future but not at the expense of first exploiting vision. 

Ideally, yes; a unified interp framework for any modality is the north star. I do think this will be a community effort. Language interp built on findings from many different groups and institutions; vision and other modalities are simply not in the same place yet.

Comment by Sonia Joseph (redhat) on Laying the Foundations for Vision and Multimodal Mechanistic Interpretability & Open Problems · 2024-03-14T16:55:54.233Z · LW · GW

It was surprising to me too. It is possible that the layers do not have aligned basis vectors. That's why corroborating the results with a tuned lens is a smart next step, as they may currently be misleading.
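
For reference, the basis-alignment assumption is easiest to see in a logit-lens sketch. Here it is on a TransformerLens language model (the ViT version in the post is analogous in spirit): every layer's residual stream is decoded through the final unembedding, which is exactly the assumption a tuned lens relaxes by learning a per-layer translator.

```python
# Logit-lens sketch: decode each layer's residual stream through the
# *final* LayerNorm and unembedding. This implicitly assumes all layers
# write in the unembedding's basis -- the assumption a tuned lens drops.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
tokens = model.to_tokens("The Eiffel Tower is in the city of")

with torch.no_grad():
    _, cache = model.run_with_cache(tokens)
    for layer in range(model.cfg.n_layers):
        resid = cache["resid_post", layer][:, -1, :]  # final position
        logits = model.ln_final(resid) @ model.W_U    # decode prematurely
        print(layer, repr(model.to_string(logits.argmax(dim=-1))))
```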

Comment by Sonia Joseph (redhat) on Laying the Foundations for Vision and Multimodal Mechanistic Interpretability & Open Problems · 2024-03-14T16:35:46.875Z · LW · GW

Noted, and thank you for flagging. I mostly agree and do not have much to add (we seem to be mostly in agreement that diverse, blue-sky research is good), other than that this may shape the way I present this project going forward.

Comment by Sonia Joseph (redhat) on Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research] · 2023-07-12T20:22:50.179Z · LW · GW

Thank you for this write-up! 

I am wondering how to relate causal scrubbing to @Arthur Conmy's ACDC method. 

It seems that causal scrubbing becomes relevant when one is testing a relatively specific hypothesis (e.g. induction heads), while ACDC can work from just a dataset, metric, and behavior? If so, would it be accurate to say that ACDC is the more general pass, earlier in the workflow, used to develop your hypothesis, which causal scrubbing can then validate? I'm curious about the trade-offs in types of insight, resources, computational complexity, and positioning in one's mech interp workflow, and about the circumstances in which one would use each.