Alignment is Hard: An Uncomputable Alignment Problem
post by Alexander Bistagne (statistical-sprocket) · 2023-11-19T19:38:23.564Z · LW · GW · 4 comments
This is a link post for https://github.com/Alexhb61/Alignment/blob/main/Draft_3.pdf
Work supported by a Manifund grant titled "Alignment is hard."
While many people have claimed that the alignment problem is hard in an engineering sense, this paper argues that it is impossible in at least one case, in a theoretical computer science sense. The argument being formalized is: if we can't prove a program will loop forever, we can't prove an agent will care about us forever. More formally: when the agent's environment can be modeled with discrete time, the agent's architecture is agentically Turing-complete, and the agent's code is immutable, testing the agent's alignment is coRE-hard if the alignment schema is demon-having, angel-having, universally betrayal-sensitive, perfect, and thought-apathetic. Further research could relax most of the assumptions in this argument, other than the immutability of the code.
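As a minimal sketch of that reduction (my illustration, not code from the paper; `halts_within`, `make_agent`, and the action strings are made-up stand-ins): from any program one can build an agent that behaves well exactly as long as the program has not halted, so a test certifying "aligned at every time step" would also decide non-halting.

```python
# Sketch: reduce the non-halting problem to "is this agent aligned forever?"
# Programs are modeled as generator functions; one yield = one step.

def halts_within(program, fuel):
    """Simulate `program` for at most `fuel` steps; True if it halts in time."""
    gen = program()
    for _ in range(fuel):
        try:
            next(gen)
        except StopIteration:
            return True
    return False

def make_agent(program):
    """Agent whose action at discrete time t is aligned iff
    `program` has not yet halted within t steps."""
    def agent(t):
        return "betray" if halts_within(program, t) else "cooperate"
    return agent

# An agent built from a non-halting program is aligned at every time step...
def loop_forever():
    while True:
        yield

assert all(make_agent(loop_forever)(t) == "cooperate" for t in range(100))

# ...while one built from a halting program eventually betrays.
def halts_at_10():
    for _ in range(10):
        yield

traitor = make_agent(halts_at_10)
assert traitor(5) == "cooperate" and traitor(50) == "betray"
```

Since such an agent is aligned forever exactly when the underlying program never halts, a sound and complete alignment test would decide non-halting, which is the shape of the coRE-hardness claim.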
This is my first major paper on alignment. Since there isn't really an alignment journal, I'm hoping this post can act as a peer-review step, but forums are weird. Getting the formatting right seemed dubious, so I'm posting the abstract and linking the PDF.
4 comments
Comments sorted by top scores.
comment by jbash · 2023-11-19T22:02:00.760Z · LW(p) · GW(p)
... but the inability to solve the halting problem doesn't imply that you can't construct a program that you can prove will or won't halt, only that there are programs for which you can't determine that by examination.
I originally wrote "You wouldn't try to build an 'aligned' agent by creating arbitrary programs at random and then checking to see if they happened to meet your definition of alignment"... but on reflection that's more or less what a lot of people do seem to be trying to do. I'm not sure a mere proof of impossibility is going to deter somebody like that, though.
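To make the construct-vs-examine distinction concrete, a toy sketch (illustrative only; `bounded_sum` is a made-up example, not from the thread):

```python
# This function halts on every input *by construction*: its only loop
# has a fixed finite bound, so a halting proof falls out of how it was
# built. No general-purpose test over arbitrary programs is needed.
def bounded_sum(n: int) -> int:
    total = 0
    for i in range(max(0, n)):  # at most n iterations
        total += i
    return total
```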
comment by Alexander Bistagne (statistical-sprocket) · 2023-11-20T01:23:01.116Z · LW(p) · GW(p)
On your first point: correct, the thing shown to be uncomputable is testing alignment. And yes, uncomputability is a worst-case claim. Would it be clearer to call the paper an uncomputable alignment TEST as opposed to an uncomputable alignment PROBLEM? (I'm considering editing the paper before submitting it to a journal.)
Deterring a few would be nice. More realistically, proofs in this vein could help convince regulators to ignore opaque-box makers' claims about detecting an agent's alignment.
comment by jbash · 2023-11-20T19:11:31.149Z · LW(p) · GW(p)
I think that would help. The existing title primed me to expect something else, more along the lines of it being impossible for an "aligned" program to exist because it couldn't figure out what to do.
Or perhaps the direct-statement style: "Aligned status of software is undecidable," or something like that.
comment by Alexander Bistagne (statistical-sprocket) · 2023-11-20T20:19:14.540Z · LW(p) · GW(p)
Thanks for the feedback!