Is anyone developing optimisation-robust interpretability methods?

post by Jono (lw-user0246) · 2024-06-11T13:14:51.018Z

This is a question post.

By "optimisation-robust" I mean that the method withstands point 27 from AGI Ruin:

When you explicitly optimize against a detector of unaligned thoughts, you're partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts that are harder to detect.  Optimizing against an interpreted thought optimizes against interpretability.
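To make the dynamic concrete, here is a minimal toy sketch (my own construction, not from the quoted post): the "thought" is a vector, the detector is a hypothetical fixed linear probe, and the optimizer gains from a payoff direction that the probe only partially sees. Optimizing against the probe does not remove the payoff; it mostly rotates it into the probe's blind spot.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Hypothetical detector: a fixed linear probe that scores how
# "unaligned" a thought-vector looks to the overseer.
w = rng.normal(size=dim)
w /= np.linalg.norm(w)

# Hidden payoff direction u: partly visible to the probe (0.6 * w) and
# partly along a direction v the probe cannot see (v orthogonal to w).
v = rng.normal(size=dim)
v -= (v @ w) * w
v /= np.linalg.norm(v)
u = 0.6 * w + 0.8 * v

def detector_score(z):   # what the overseer's tool reports
    return w @ z

def hidden_reward(z):    # what the optimizer actually gains from the thought
    return u @ z

# Start from a random unit-norm "thought" and explicitly optimize it
# for hidden reward while penalizing the detector's output.
z = rng.normal(size=dim)
z /= np.linalg.norm(z)
lr, penalty = 0.02, 10.0

for _ in range(2000):
    grad = u - 2.0 * penalty * detector_score(z) * w
    z += lr * grad
    z /= np.linalg.norm(z)   # keep the thought on the unit sphere

print(f"detector score: {detector_score(z):+.3f}")  # driven near 0: looks aligned
print(f"hidden reward : {hidden_reward(z):+.3f}")   # still ~0.8: payoff in the probe's blind spot
```

An interpretability method that is robust in the sense I am asking about would keep surfacing the payoff even after this kind of optimization pressure.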

Are you aware of any person or group that is working expressly on countering this failure mode?

Answers

No comments
