Image generation and alignment

post by rpglover64 (alex-rozenshteyn) · 2023-01-05T16:05:44.231Z · LW · GW · 2 comments

This is a question post.


There isn't a lot of talk about image models (e.g. DALL-E and Stable Diffusion) on LW in the context of alignment, especially compared to LLMs. Why is that? Some hypotheses:

Answers

answer by Ilio · 2023-01-07T19:13:42.290Z · LW(p) · GW(p)

Following Scott Aaronson, we might say the answer depends on whether we're talking about the reform or the orthodox vision of alignment. Adversarial images and racial bias are definitely real concerns for automated vision, and hence for reform alignment. But many animal species have mastered vision, movement, or olfaction better than humans as a species, for hundreds of millions of years, without producing anything that could challenge the competitive advantage of human language. So I guess that for orthodox alignment, vision models look much less scary than language models.

I'm curious whether those at ease with either the orthodox or the reform label would corroborate these predictions about their views.

2 comments

Comments sorted by top scores.

comment by the gears to ascension (lahwran) · 2023-01-05T20:59:46.866Z · LW(p) · GW(p)

idk if this is The Reason or anything, but one factor might be that current image models use a heavily convolutional architecture and are, as a result, quite a bit weaker. Transformers are involved, but not as heavily as in current language models.

Replies from: alex-rozenshteyn
comment by rpglover64 (alex-rozenshteyn) · 2023-01-06T01:33:57.131Z · LW(p) · GW(p)

You're saying that transformers are key to alignment research?

I would imagine that latent space exploration and explanation are a useful part of interpretability, and that developing techniques that work for both language and images improves the chance that those techniques will generalize to new neural architectures.
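
To make "latent space exploration" concrete, here is a minimal, hypothetical sketch of one common technique: spherical interpolation (slerp) between two latent tensors, which an interpretability workflow might decode with an image model to see how concepts blend along the path. The tensor shape below is only an assumption loosely modeled on Stable Diffusion's latent grid, and the random tensors stand in for real latents; this is not any particular library's API.

```python
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float, eps: float = 1e-7) -> torch.Tensor:
    """Spherical linear interpolation between two latent tensors.

    Interpolating on the hypersphere (rather than linearly) keeps the
    intermediate latents at a norm a decoder typically expects, which
    tends to give more coherent in-between images.
    """
    v0_flat, v1_flat = v0.flatten(), v1.flatten()
    # Cosine of the angle between the two latents.
    dot = torch.dot(v0_flat, v1_flat) / (v0_flat.norm() * v1_flat.norm() + eps)
    theta = torch.acos(dot.clamp(-1.0, 1.0))
    if theta.abs() < eps:
        # Vectors are nearly parallel: fall back to linear interpolation.
        return (1 - t) * v0 + t * v1
    sin_theta = torch.sin(theta)
    return (torch.sin((1 - t) * theta) / sin_theta) * v0 + (torch.sin(t * theta) / sin_theta) * v1

# Hypothetical usage: sweep between two stand-in "latents" and collect the path.
# In practice one would decode each step with the image model's decoder and
# inspect how the generated image changes.
z0 = torch.randn(4, 64, 64)
z1 = torch.randn(4, 64, 64)
path = [slerp(z0, z1, t.item()) for t in torch.linspace(0, 1, 8)]
```

The same interpolate-and-inspect pattern applies to language-model latents, which is one reason techniques developed for images might transfer.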