[Linkpost] Applicability of scaling laws to vision encoding models

post by Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2023-08-05T11:10:35.599Z · LW · GW · 2 comments

This is a linkpost for https://arxiv.org/abs/2308.00678.

In this paper, we investigated how to build a high-performance vision encoding model to predict brain activity as part of our participation in the Algonauts Project 2023 Challenge. The challenge provided brain activity recorded by functional MRI (fMRI) while participants viewed images. Several vision models with parameter sizes ranging from 86M to 4.3B were used to build predictive models. To build highly accurate models, we focused our analysis on two main questions: (1) How does the sample size of the fMRI training set affect prediction accuracy? (2) How does prediction accuracy across the visual cortex vary with the parameter size of the vision models? The results show that as the sample size used during training increases, prediction accuracy improves according to a scaling law. Similarly, as the parameter size of the vision models increases, prediction accuracy improves according to a scaling law. These results suggest that increasing the sample size of the fMRI training set and the parameter size of vision models may yield more accurate visual encoding models of the brain and lead to a better understanding of visual neuroscience.
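The scaling-law fits described in the abstract can be sketched numerically. A minimal illustration, on synthetic data rather than the paper's: assume prediction accuracy approaches an asymptote `a` as a power law in sample size `N`, i.e. `accuracy = a - b * N**(-alpha)`. With `a` treated as known, `log(a - accuracy)` is linear in `log(N)`, so the exponent can be recovered with an ordinary least-squares fit in log space. All names and values here are hypothetical, not taken from the paper.

```python
import numpy as np

# Synthetic "prediction accuracy" following an assumed power-law form:
# accuracy = a - b * N**(-alpha). These parameters are illustrative only.
sample_sizes = np.array([1000, 2000, 4000, 8000, 16000], dtype=float)
a_true, b_true, alpha_true = 0.5, 2.0, 0.4
accuracy = a_true - b_true * sample_sizes ** (-alpha_true)

# With the asymptote a assumed known, the gap (a - accuracy) is a pure
# power law, so its log is linear in log(N); fit a line to recover
# the exponent (slope) and coefficient (intercept).
log_n = np.log(sample_sizes)
log_gap = np.log(a_true - accuracy)
slope, intercept = np.polyfit(log_n, log_gap, 1)

alpha_hat = -slope          # recovered scaling exponent
b_hat = np.exp(intercept)   # recovered coefficient
print(alpha_hat, b_hat)     # recovers alpha_true and b_true on noiseless data
```

In practice the asymptote `a` is unknown and the data are noisy, so a three-parameter nonlinear fit (e.g. `scipy.optimize.curve_fit`) is the more common route; the log-space version above just makes the power-law structure explicit. The same form applies when `N` is the vision model's parameter count instead of the fMRI sample size.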

2 comments


comment by Capybasilisk · 2023-08-06T15:48:05.683Z · LW(p) · GW(p)

It’s occurred to me that training a future, powerful AI on your brainwave patterns might be the best way for it to build a model of you and your preferences. It seems that it’s incredibly hard, if not impossible, to communicate all your preferences and values in words or code, not least because most of these are unknown to you on a conscious level.

Of course, there might be some extreme negatives to the AI having an internal model of you, but I can’t see a way around it if we’re to achieve “do what I want, not what I literally asked for”.

Replies from: gwern
comment by gwern · 2023-08-06T22:20:29.834Z · LW(p) · GW(p)

This is a paradigm I've dubbed "brain imitation learning" (links).