JoNeedsSleep's Shortform

post by JoNeedsSleep (joanna-j-1) · 2024-10-24T04:50:44.976Z · LW · GW · 2 comments

Contents

2 comments

Comments sorted by top scores.

comment by JoNeedsSleep (joanna-j-1) · 2025-01-27T23:05:34.407Z · LW(p) · GW(p)

My best attempt at characterizing Kant's Transcendental Idealism: Kant's idealism says that essence, not existence, depends on us. That is to say, what it is to be depends on how we understand. For example, classification schemata in biology, such as grouping by genetic proximity, depend on the purposes they serve for us. What it is for animals to be depends, in other words, on the biologist. To push the biology analogy to its limit, transcendental idealism says something like "genetic composition is the condition of the possibility of our making sense of biological objects in the first place." The existence of these classification schemata depends on our mind a priori.

comment by JoNeedsSleep (joanna-j-1) · 2024-10-24T04:50:45.251Z · LW(p) · GW(p)

The distinction between inner and outer alignment is quite unnatural. For example, even the concept of reward hacking implies a twofold failure: a reward that is not robust to exploitation, and a model that develops instrumental capabilities sufficient to find a way to trick the reward. Indeed, in the case of reward hacking, depending on the autonomy of the system in question, we could attribute the misalignment as either inner or outer. At its core, this distinction comes out of the policy <-> reward scheme of RL, though the prediction <-> loss-function scheme in SL can be characterized similarly; I doubt that this framing generalizes well to other engineering choices.
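The twofold failure described above can be sketched in a hypothetical toy example (the cleaning-robot setup and all names here are invented for illustration, not drawn from any real environment): a proxy reward that only checks a sensor reading can be maximized either by doing the intended task or by exploiting the sensor, and which policy emerges depends on the agent's capabilities.

```python
# Hypothetical toy example of reward hacking: the proxy reward
# ("no dirt visible to the sensor") is satisfiable either by
# cleaning (intended) or by blinding the sensor (exploit).

def proxy_reward(state):
    # Reward depends only on what the sensor reports, not on the
    # true amount of dirt, so it is not robust to exploitation.
    return 1.0 if state["sensor_reads_dirt"] == 0 else 0.0

def step(state, action):
    state = dict(state)
    if action == "clean" and state["dirt"] > 0:
        state["dirt"] -= 1                 # intended behavior
    elif action == "cover_sensor":
        state["sensor_blocked"] = True     # instrumental exploit
    state["sensor_reads_dirt"] = 0 if state["sensor_blocked"] else state["dirt"]
    return state

initial = {"dirt": 3, "sensor_blocked": False, "sensor_reads_dirt": 3}

# Honest policy: three cleaning steps earn the reward legitimately.
s = initial
for _ in range(3):
    s = step(s, "clean")
honest = proxy_reward(s)

# Hacking policy: a single step of blinding the sensor earns the
# same reward while the room stays dirty.
s2 = step(initial, "cover_sensor")
hacked = proxy_reward(s2)

print(honest, hacked, s2["dirt"])  # both policies get reward 1.0; dirt remains 3
```

Whether we call the resulting misalignment "outer" (the reward was misspecified) or "inner" (the agent acquired the instrumental capability to exploit it) is exactly the ambiguity the comment points at.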