Comments
Hi, any idea how this would compare to just replacing the loss with a smoothed loss function? Something like a smooth penalty summed across the sparse representation.
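For concreteness, a minimal sketch of the kind of smoothed sparsity term I have in mind (the log(1 + x²/ε) form and the variable names are just illustrative, not a specific proposal):

```python
import numpy as np

def smoothed_sparsity_loss(sparse_codes: np.ndarray, eps: float = 1e-3) -> float:
    """Smooth stand-in for an L1-style sparsity penalty, summed across the sparse representation.

    log(1 + x^2 / eps) is one possible smooth surrogate; the exact form here is
    illustrative only -- the point is that it is differentiable at 0, unlike |x|.
    """
    return float(np.sum(np.log1p(sparse_codes ** 2 / eps)))

# Hypothetical usage: sparse_codes would be the activations of the sparse layer.
codes = np.array([0.0, 0.02, 1.3, 0.0, -0.7])
print(smoothed_sparsity_loss(codes))
```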
Thanks for the link!
But actually, what I had in mind is something simpler that would not necessarily need such tools to be feasible: basically taking the main argument of each approach, as expressed in natural language, without worrying too much about all the baggage at finer levels of abstraction. But I guess this is not quite what the article is about...
I agree that the mismatch between “assumptions” and “real world” makes getting formal certificates of real-world alignment largely intractable.
For example, if you make a broad assumption like “pure self-supervised learning does not exhibit strategic behaviour” (suitably formalised), that is almost certainly not justifiable in the real world, but it would be a good starting point for reasoning about other alignment schemes.
My point is, the list of assumptions you have to make for each alignment approach could be an interesting metric to track. You end up with a table where alignment approaches are the rows and the necessary assumptions are the columns. Alignment approaches are then ranked by how grounded their necessary subset of assumptions is (in aggregate), and progress is made by incrementally improving the proofs so that broad assumptions are replaced with more grounded ones.
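As a toy illustration of that table and ranking (the approach names, assumptions, and groundedness scores below are all invented for the example):

```python
# Toy sketch of the "approaches x assumptions" table. All names and
# groundedness scores are hypothetical, purely to illustrate the ranking idea.
groundedness = {
    "SSL is non-strategic": 0.2,            # broad assumption, weakly grounded
    "reward model is well-calibrated": 0.4,
    "human feedback is honest": 0.6,
}

approaches = {
    "approach A": ["SSL is non-strategic", "human feedback is honest"],
    "approach B": ["reward model is well-calibrated", "human feedback is honest"],
}

# Rank approaches by the groundedness of their weakest required assumption
# (one possible aggregate; a sum or product would also work).
ranking = sorted(
    approaches,
    key=lambda a: min(groundedness[x] for x in approaches[a]),
    reverse=True,
)
print(ranking)  # ['approach B', 'approach A']
```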
Will read the link.
Not an expert, but what if we took all our current proposed solutions to alignment, and the assumptions they implicitly make, and tried to formalise them using some type of proof assistant? Could that at least be a useful pedagogical tool for understanding the current research landscape?
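A minimal sketch of what I'm imagining, in Lean (the types and predicates here are opaque placeholders, not real formalisations of these notions):

```lean
-- Toy sketch only: `Model`, `selfSupervised`, `strategic`, and `aligned` are
-- opaque placeholders, not actual definitions.
axiom Model : Type
axiom selfSupervised : Model → Prop
axiom strategic : Model → Prop
axiom aligned : Model → Prop

-- A broad assumption, stated explicitly so it shows up as a "column" in the table.
axiom ssl_not_strategic : ∀ m : Model, selfSupervised m → ¬ strategic m

-- Another assumption a given approach might need.
axiom nonstrategic_aligned : ∀ m : Model, ¬ strategic m → aligned m

-- The "main argument" of a hypothetical approach, derived from the assumptions above.
theorem ssl_aligned (m : Model) (h : selfSupervised m) : aligned m :=
  nonstrategic_aligned m (ssl_not_strategic m h)
```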