MinusGix's Shortform

post by MinusGix · 2022-08-29T12:37:21.216Z

comment by MinusGix · 2022-08-29T12:37:21.676Z

One minor thing I've noticed when thinking about interpretability is the distinction between in-distribution, out-of-distribution, and what I call out-of-representation data. I would assume this has been observed elsewhere, but I haven't seen it mentioned before.
In-distribution inputs could be considered inputs with the same 'structure' as what the neural network was trained on; out-of-distribution inputs are exotic ones, like an adversarially noisy image of a panda, or a picture of a building given to an animal-recognizer NN.
Out-of-representation data arises when a neural network takes inputs in a certain form/encoding that restricts the representable values. The network can theoretically accept anything outside that encoding; it just should never receive such inputs in practice.
The most obvious example would be a NN trained to classify RGB pixels from images. Each pixel value is normalized to a fixed range, say [0, 1]. Out-of-representation here would be giving it a very 'fake' input with values far outside that range. Every image you give to the NN, whether noisy garbage or a typical photo, is properly normalized within that range. However, with direct access to the neural network's inputs, you can give it out-of-representation values that aren't properly encoded at all.
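A minimal sketch of what that looks like, assuming a PyTorch-style setup; `PixelClassifier` and its layer sizes are hypothetical stand-ins, not anything from an actual trained model:

```python
# Minimal sketch: probing a classifier with out-of-representation inputs.
# `PixelClassifier` is a hypothetical stand-in for a network trained on
# normalized RGB pixel values.
import torch
import torch.nn as nn

class PixelClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 32),   # one RGB pixel in, normalized to [0, 1]
            nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = PixelClassifier()

# In-representation: every channel lies in the normalized range.
in_rep = torch.rand(4, 3)            # values in [0, 1)

# Out-of-representation: syntactically valid tensors that the preprocessing
# pipeline would never produce -- negative or huge channel values.
out_rep = torch.tensor([[-5.0, 100.0, 0.5],
                        [1e6, -1e6, 2.0]])

with torch.no_grad():
    print(model(in_rep))   # behavior the training distribution constrains
    print(model(out_rep))  # behavior nothing in training ever constrained
```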
I think this has some benefits for certain types of interpretability (though it is probably already paid attention to?), in that you can constrain the possible inputs when you analyze the network. If you know the inputs to the network are always bounded in a certain range, or even just share a property like being positive, then you can constrain the intermediate neuron outputs. This would potentially help in ignoring out-of-representation behavior, such as a neuron only being a good approximation of a sine wave for in-representation inputs.
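As a rough illustration of the kind of constraint this enables, here is a sketch of interval bound propagation through a couple of layers, assuming the input encoding guarantees values in [0, 1]; the helper `linear_interval` and the layer sizes are made up for the example:

```python
# Sketch: if the encoding guarantees inputs in [0, 1], we can bound every
# intermediate activation and ignore whatever the network does on
# out-of-representation inputs.
import torch
import torch.nn as nn

def linear_interval(layer: nn.Linear, lo: torch.Tensor, hi: torch.Tensor):
    """Propagate elementwise bounds [lo, hi] through a linear layer."""
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    out_center = layer(center)
    out_radius = radius @ layer.weight.abs().T
    return out_center - out_radius, out_center + out_radius

layer1 = nn.Linear(3, 32)
layer2 = nn.Linear(32, 10)

# Bounds guaranteed by the input encoding: normalized pixels in [0, 1].
lo, hi = torch.zeros(3), torch.ones(3)

lo, hi = linear_interval(layer1, lo, hi)
lo, hi = lo.clamp(min=0), hi.clamp(min=0)   # ReLU is monotone, so bounds pass through
lo, hi = linear_interval(layer2, lo, hi)

# Any per-neuron analysis only needs to hold within [lo, hi]; activations
# outside these bounds can only come from out-of-representation inputs.
print(lo, hi)
```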