The First Filter

post by adamShimi, Gabriel Alfour (gabriel-alfour-1) · 2022-11-26T19:37:04.607Z

Consistently optimizing for solving alignment (or any other difficult problem) is incredibly hard.

The first and most obvious obstacle is that you need to actually care about alignment and feel responsible for solving it. You cannot just ignore it or pass the buck; you need to aim for it.

If you care, you now have to go beyond the traditions you were raised in. Be willing to reach past the tools you were given, and to use them in inappropriate and weird ways. This is where most people who care about alignment tend to fail: they tackle it like a normal problem from a classical field of science rather than the incredibly hard and epistemologically fraught problem that it is.

If you manage to transcend your methodological upbringing, you might come up with a different, fitter approach to the problem: your own weird inside view. Yet beware becoming a slave to your own insight, a prisoner of your own frame; it's far too easy to never look back and simply settle into your new tradition.

If you cross all these obstacles, then whatever you do, even if it is not enough, you will be one of the few who adapt, who update, who course-correct again and again. Whatever the critics say, you'll actually be doing your best.

This is the first filter. This is the first hard and crucial step toward solving alignment: actually optimizing for solving the problem.

When we criticize each other in good faith about our approaches to alignment, we are acknowledging that we are not wedded to any approach or tradition. That we’re both optimizing to solve the problem. This is a mutual acknowledgement that we have both passed the first filter.

Such criticism should thus be taken as a strong compliment: your interlocutor recognizes that you are actually trying to solve alignment and are open to changing your ways.

5 comments

comment by Shmi (shminux) · 2022-11-26T21:08:07.421Z

Well written. Do you have a few examples of pivoting when it becomes apparent that the daily grind no longer optimizes for solving the problem?

Replies from: mruwnik, adamShimi
comment by mruwnik · 2022-11-27T00:06:30.052Z

Or also how to notice it?

Replies from: shminux
comment by Shmi (shminux) · 2022-11-27T03:31:26.643Z

Good point, noticing is always how one starts.

comment by adamShimi · 2022-11-29T16:32:42.377Z

In a limited context, the first example that comes to mind is high performers in competitive sports and games, because if they truly only give a shit about winning (and the best generally do), they will throw away their legacy approaches when they find a new one, however much it pains them.

comment by Coafos (CoafOS) · 2022-11-27T02:19:15.529Z

That's the second filter, because "optimizing" involves two parts: having a goal and maximising (or minimising) it.

First, one has to acknowledge that solving alignment is a goal. Many people do not recognize that it's a problem, because they think smart robots will learn what love means and won't hurt us.

What you talked about in your post comes after this. When someone is walking towards the goalpost of alignment, they should realize that there might be multiple routes there and they should choose the quickest one, because only winning matters.