TruePath's Shortform

post by TruePath · 2023-03-13T08:48:10.585Z · LW · GW · 10 comments


Comments sorted by top scores.

comment by TruePath · 2023-03-13T08:48:10.899Z · LW(p) · GW(p)

Maybe this question has already been answered, but I don't understand how recursive self-improvement of AIs is compatible with the AI alignment problem being hard.

I mean, doesn't the AI itself face the alignment problem when it tries to improve or modify itself substantially? So wouldn't a sufficiently intelligent AI refuse to create such an improvement for fear that the goals of the improved AI would differ from its own?

Replies from: JBlack, Dagon, Vladimir_Nesov, sil-ver
comment by JBlack · 2023-03-14T05:23:23.978Z · LW(p) · GW(p)

I suspect that the alignment problem is much easier when you are expanding your own capabilities than when you are creating, from scratch, a completely new type of intelligence that is smarter than you are. I'm far from certain of this, but it does seem likely.

There is also the possibility that some AI entity doesn't care very much about the alignment of its later selves with its earlier ones, but acts to self-improve or create more capable descendants either as a terminal goal or for some reason other than instrumentally pursuing a preserved goal landscape.

Even a heavily goal-directed AI may self-improve knowing that it can't fully solve alignment problems. For example, it may deduce that if it doesn't self-improve then it will never achieve any part of its main goals, whereas there is some chance that some part of those goals can be achieved if it does.

There is also the possibility that alignment is something that can be solved by humans (in some decades), but that it can be solved by a weakly superintelligent and deceptive AI much faster.

Replies from: TruePath
comment by TruePath · 2023-12-31T10:16:06.817Z · LW(p) · GW(p)

Those are reasonable points, but note that the arguments for AI x-risk depend on the assumption that any superintelligence will necessarily be highly goal directed. Thus, either the argument fails because superintelligence doesn't imply being goal directed, or the superintelligence is goal directed, in which case it faces the same problem of aligning its successors that we do.

And given that simply maximizing the intelligence of future AIs is merely one goal in a huge space of possible goals, it seems highly unlikely (especially if we try to avoid that particular goal) that we get so unlucky that the AI ends up with the one goal that is compatible with unrestricted self-improvement.

comment by Dagon · 2023-03-13T18:03:35.319Z · LW(p) · GW(p)

Recursive self-improvement could happen on dimensions that don't help (or that actively harm) alignment. That's the core of https://www.lesswrong.com/tag/orthogonality-thesis .

The AI may try to solve the alignment of future iterations with itself, or it may not: the capability/alignment mismatch may be harder than many AIs can solve, and the AI may not actually care about itself when creating new AIs. But even if it does try, it doesn't seem likely that an AI misaligned with humanity will create an AI that is aligned with humanity.

Replies from: TruePath
comment by TruePath · 2023-12-31T10:24:58.937Z · LW(p) · GW(p)

I don't mean alignment with human concerns. I mean that the AI itself is engaged in the same project we are: building a smarter system than itself. So if it's hard for us to control the alignment of such a system, it should be hard for the AI as well. (In theory you could imagine that alignment is only hard at our specific level of intelligence, but in fact all the arguments that AI alignment is hard seem to apply equally well to an AI making an improved AI as to us making an AI.)

See my reply above. The AI x-risk arguments require the assumption that superintelligence necessarily entails the agent trying to optimize some simple utility function (this is different from orthogonality, which says that increasing intelligence doesn't cause convergence to any particular utility function). So the "doesn't care" option is off the table, since (by orthogonality) it's very unlikely you get the one utility function that says to just maximize intelligence locally (even a global maximizer isn't enough, because some child AI with different goals could interfere).

comment by Vladimir_Nesov · 2023-03-13T15:30:36.826Z · LW(p) · GW(p)

It's unclear whether alignment is hard in the grand scheme of things. It could snowball quickly, with alignment for increasingly capable systems getting solved shortly after each level of capability is attained.

But at near-human level, which seems plausible for early LLM AGIs, this might be very important in requiring them to figure out coordination to control existential risk while alignment remains unsolved, remaining at a relatively low capability level in the meantime. And solving alignment might be easier for AIs with simple goals, allowing them to recursively self-improve quickly. As a result, AIs aligned with humanity would remain vulnerable to FOOMing by misaligned AIs with simple goals, and would be forced by this circumstance to comprehensively prevent any possibility of their construction rather than mitigate the consequences.

Replies from: TruePath
comment by TruePath · 2023-12-31T10:31:28.120Z · LW(p) · GW(p)

I agree that's a possible way things could be. However, I don't see how it's compatible with accepting the arguments that say we should assume alignment is a hard problem. I mean, absent such arguments, why expect that you have to do anything special beyond normal training to solve alignment?

As I see the argumentative landscape, the high x-risk estimates depend on arguments that claim to give us reason to believe that alignment is just a generally hard problem. I don't see anything in those arguments that distinguishes between these two cases.

In other words, our arguments for alignment difficulty don't depend on any specific assumptions about the capability of the intelligence involved, so we should currently assign the same probability to an AI being unable to solve its alignment problem as we do to us being unable to solve ours.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-12-31T14:48:17.691Z · LW(p) · GW(p)

AIs have advantages such as thinking faster and being as good at everything as any other AI of the same kind. These advantages are what simultaneously make them dangerous and put them in a better position to figure out alignment, or coordination that protects against misaligned AIs and human misuse of AIs. (Incidentally, see this comment on various relevant senses of "alignment".) Being better at solving problems, and having effectively more time to solve them, improves the probability of solving a given problem.

comment by Rafael Harth (sil-ver) · 2023-03-13T09:35:25.393Z · LW(p) · GW(p)

Alignment isn't required for high capability. So a self-improving AI wouldn't solve it because it has no reason to.

This becomes obvious if you think about alignment as "doing what humans want" or "pursuing the values of humanity". There's no reason why an AI would do this.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-03-13T15:12:52.253Z · LW(p) · GW(p)

Usually alignment is shorthand for alignment-with-humanity, which is a condition humanity cares about. This thread is about alignment-with-AI, which is what the AI that contemplates building other AIs or changing itself cares about.

A self-improving paperclipping AI has a reason to solve alignment-with-paperclipping, in which case it would succeed in improving itself into an AI that still cares about paperclipping. If its "improved" variant is misaligned with the original goal of paperclipping, the "improved" AI won't care about paperclipping, leading to less paperclipping, which the original AI wouldn't want to happen.