Posts

An information-theoretic study of lying in LLMs 2024-08-02T10:06:39.312Z
Degeneracies are sticky for SGD 2024-06-16T21:19:53.362Z
Understanding mesa-optimization using toy models 2023-05-07T17:00:52.620Z
Metalignment: Deconfusing metaethics for AI alignment. 2019-08-23T10:25:38.756Z

Comments

Comment by Guillaume Corlouer (Tancrede) on Towards Measures of Optimisation · 2023-05-16T23:02:34.179Z · LW · GW

Right, I got confused because I thought your problem was about trying to define a measure of optimisation power - for example, one analogous to the Yudkowsky measure - that also refers to a utility function while being invariant under scaling and translation. But this is different from asking

"what fraction of the default expected utility comes from outcomes at least as good as this one?’"

Comment by Guillaume Corlouer (Tancrede) on Towards Measures of Optimisation · 2023-05-16T18:52:14.552Z · LW · GW

What about the optimisation power of $x'$ as a measure based on the outcomes whose utility is at least the utility of $x'$?

Let $W$ be the set of outcomes with utility at least $u(x')$ according to the utility function $u$:

$$W = \{x : u(x) \geq u(x')\}$$

The set $W$ is invariant under translation and positive rescaling of the utility function $u$, and we define the optimisation power of the outcome $x'$ according to the utility function $u$ as:

$$\mathrm{OP}_u(x') = -\log_2 \mathbb{P}(W)$$

where $\mathbb{P}$ is the default probability measure over outcomes. This does not suffer from comparing against a worst case, and it seems to satisfy the same intuition as the original OP definition while referring to some utility function.

This is in fact the same measure as the original optimisation power measure, with the order over outcomes given by the utility function.
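As a minimal sketch of what this looks like in practice (the utility function and the default outcome distribution below are placeholders I made up for illustration), one can estimate $\mathbb{P}(W)$ by sampling from the default distribution and counting the outcomes that do at least as well as $x'$:

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(x):
    # Placeholder utility function over scalar outcomes.
    return -np.abs(x - 3.0)

def optimisation_power(x_prime, samples):
    # OP_u(x') = -log2 P(W), with W the set of outcomes whose utility is at
    # least u(x'), estimated from samples of the default distribution.
    p = np.mean(utility(samples) >= utility(x_prime))
    return -np.log2(p) if p > 0 else np.inf

# Default (unoptimised) outcome distribution: a standard normal, chosen arbitrarily.
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)
print(optimisation_power(2.0, samples))  # P(W) = P(2 <= X <= 4) ~ 0.023, so OP ~ 5.5 bits
```

The invariance is visible here: replacing `utility` by, say, `2 * utility + 7` leaves the indicator `utility(samples) >= utility(x_prime)`, and hence the estimate, unchanged.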

Comment by Guillaume Corlouer (Tancrede) on "Brain enthusiasts" in AI Safety · 2022-06-21T20:21:53.235Z · LW · GW

Nice recommendations! In addition to brain enthusiasts being useful for empirical work, there are also theoretical tools from systems neuroscience that could be useful for AI safety. One area in particular is interpretability: if we want to model a network at various levels of "emergence", recent developments in information decomposition and multivariate information theory, which move beyond pairwise interactions in a neural network, might be very useful. Also see recent publications on modelling synergistic information and on dynamical independence, which could perhaps automate the discovery of macro variables; these are also well worth exploring for studying higher levels of large ML models. This would require both empirical and theoretical work: once the various measures of information decomposition are clearer, one would need to estimate and test them empirically, and use them in actual ML systems for interpretability if they turn out to be meaningful.
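As a rough sketch of the kind of measure I have in mind (the variables and the binarisation are made up for illustration; a proper partial information decomposition would need a dedicated library), here is a plug-in estimate of the co-information of three discretised activations, where negative values are one crude signature of synergy that pairwise statistics miss:

```python
import numpy as np

def entropy(*variables):
    # Plug-in entropy (in bits) of the joint distribution of the given 1-D arrays.
    joint = np.stack(variables, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def co_information(x, y, z):
    # I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y) - H(X,Z) - H(Y,Z) + H(X,Y,Z);
    # with this sign convention, negative values indicate synergy-dominated interactions.
    return (entropy(x) + entropy(y) + entropy(z)
            - entropy(x, y) - entropy(x, z) - entropy(y, z)
            + entropy(x, y, z))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10_000)   # binarised activation of "unit" 1
y = rng.integers(0, 2, size=10_000)   # binarised activation of "unit" 2
z = x ^ y                             # XOR: purely synergistic third variable
print(co_information(x, y, z))
```

For the XOR example the co-information comes out close to -1 bit: the third variable is independent of each input separately but fully determined by the pair, which is exactly the kind of structure a pairwise analysis would miss.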

Comment by Guillaume Corlouer (Tancrede) on Metalignment: Deconfusing metaethics for AI alignment. · 2019-08-31T13:57:42.043Z · LW · GW

Thanks for all the useful links! I'm also always happy to receive more feedback.

I agree that the sense in which I use metaethics in this post is different from what academic philosophers usually call metaethics. I have the impression that metaethics, in the academic sense, and metaphilosophy are somehow related: studying what morality itself is, how to select ethical theories, and what the process behind ethical reasoning is do not seem independent. For example, if moral nihilism is more plausible, then it seems less likely that there is some meaningful feedback loop for selecting ethical theories, or that there is such a meaningful thing as a ‘good’ ethical theory (at least in an observer-independent way). If moral emotivism is more plausible, then maybe reflecting on ethics is more like rationalising emotions, e.g. typically expressing in a sophisticated way something that fundamentally just means ‘boo suffering’. In that case, having a better understanding of metaethics in the academic sense seems to shed some light on the process that generates ethical theories, at least in humans.

Comment by Guillaume Corlouer (Tancrede) on Metalignment: Deconfusing metaethics for AI alignment. · 2019-08-31T13:33:52.142Z · LW · GW

Sure, I'm happy to read/discuss your ideas about this topic.

Comment by Guillaume Corlouer (Tancrede) on Metalignment: Deconfusing metaethics for AI alignment. · 2019-08-23T18:36:22.642Z · LW · GW

I am not sure what computer-aided analysis means here, but one possibility could be to have formal ethical theories and prove theorems inside their formal framework. This raises questions about the sort of formal framework one could use to 'prove theorems' about ethics in a meaningful way.
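As a toy illustration of what 'proving a theorem inside a formal ethical theory' could even look like (this is a made-up total-utilitarian ordering, nothing like a serious ethical theory), one could write something like the following in a proof assistant such as Lean:

```lean
-- Toy "ethical theory": outcomes carry a (natural-number) utility, and an
-- outcome is at least as good as another when its utility is at least as large.
structure Outcome where
  utility : Nat

def atLeastAsGood (a b : Outcome) : Prop :=
  b.utility ≤ a.utility

-- A trivial "theorem" of the theory: the betterness relation is transitive.
theorem atLeastAsGood_trans (a b c : Outcome)
    (hab : atLeastAsGood a b) (hbc : atLeastAsGood b c) :
    atLeastAsGood a c :=
  Nat.le_trans hbc hab
```

The hard part is obviously not this kind of bookkeeping, but choosing a formal framework and axioms whose theorems say anything ethically meaningful.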