Comments

Comment by fvncc on Improving Dictionary Learning with Gated Sparse Autoencoders · 2024-04-26T00:47:49.550Z · LW · GW

Hi, any idea how this would compare to just replacing the loss with a smoothed loss function? Something like (summed across the sparse representation).
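For concreteness, here is a minimal sketch of what one such smoothed sparsity penalty could look like (the tanh-based smoothing, the `eps` scale, and the PyTorch setup are all illustrative assumptions, not a specific proposal):

```python
import torch

def smoothed_sparsity_loss(f: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Smooth surrogate for a sparsity penalty on SAE feature activations.

    tanh(|f| / eps) saturates at 1 for clearly-active units, so the sum
    behaves like a differentiable approximation to a count of active
    features, summed across the sparse representation.
    """
    return torch.tanh(f.abs() / eps).sum()

# Hypothetical usage inside an SAE training step:
# loss = reconstruction_loss + sparsity_coeff * smoothed_sparsity_loss(feature_acts)
```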

Comment by fvncc on Speedrunning 4 mistakes you make when your alignment strategy is based on formal proof · 2023-02-17T01:12:17.759Z · LW · GW

Thanks for the link!

But what I actually had in mind is something simpler, which would not necessarily need such tools to be feasible: something akin to taking the main argument of each approach, as expressed in natural language, without worrying too much about all the baggage at finer levels of abstraction. But I guess that is not quite what the article is about.

Comment by fvncc on Speedrunning 4 mistakes you make when your alignment strategy is based on formal proof · 2023-02-16T23:31:50.824Z · LW · GW

I agree that the mismatch between “assumptions” and “real world” makes getting formal certificates of real-world alignment largely intractable.

E.g. if you make a broad assumption like “pure self-supervised learning does not exhibit strategic behaviour” (suitably formalised), that is almost certainly not justifiable in the real world, but it would be a good starting point for reasoning about other alignment schemes.

My point is that the list of assumptions you have to make for each alignment approach could be an interesting metric to track. You end up with a table where alignment approaches are the rows and the necessary assumptions are the columns. Alignment approaches are then ranked by how grounded their necessary assumptions are (in aggregate), and progress is made by incrementally improving the proofs in ways that replace broad assumptions with more grounded ones.
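As a minimal sketch of that bookkeeping (every approach name, assumption name, and groundedness score below is a hypothetical placeholder, and the aggregation rule is just one possible choice):

```python
# Groundedness scores per assumption (0 = broad/unjustified, 1 = well grounded).
# All values here are illustrative placeholders.
groundedness = {
    "ssl_not_strategic": 0.2,   # broad assumption about self-supervised learning
    "oversight_scales": 0.5,
    "honest_reporting": 0.4,
}

# Rows of the table: each approach and the assumptions it needs.
approaches = {
    "approach_A": ["ssl_not_strategic", "oversight_scales"],
    "approach_B": ["honest_reporting"],
}

# Rank approaches by how grounded their assumptions are in aggregate.
# Taking the minimum means one weak assumption dominates the score.
ranking = sorted(
    approaches,
    key=lambda a: min(groundedness[x] for x in approaches[a]),
    reverse=True,
)
print(ranking)
```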

Will read the link.

Comment by fvncc on Speedrunning 4 mistakes you make when your alignment strategy is based on formal proof · 2023-02-16T05:19:38.641Z · LW · GW

Not an expert, but what if we took all our current proposed solutions to alignment, and the assumptions they implicitly make, and tried to formalise them using some type of proof assistant? Could that at least be a useful pedagogical tool for understanding the current research landscape?
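As a toy illustration, here is a Lean 4 sketch of what stating one approach's argument in a proof assistant might look like (every name is hypothetical, and both the broad assumption and the implication are deliberately left as explicit axioms rather than proved):

```lean
-- Abstract stand-ins for the informal concepts (all hypothetical).
axiom TrainingScheme : Type
axiom exhibitsStrategicBehaviour : TrainingScheme → Prop
axiom isAligned : TrainingScheme → Prop
axiom pureSSL : TrainingScheme

-- A broad assumption, stated explicitly rather than proved.
axiom ssl_not_strategic : ¬ exhibitsStrategicBehaviour pureSSL

-- The approach's main argument, itself taken as an assumption here.
axiom no_strategy_implies_aligned :
    ∀ s : TrainingScheme, ¬ exhibitsStrategicBehaviour s → isAligned s

-- The approach's headline claim now follows from the stated assumptions.
theorem ssl_aligned : isAligned pureSSL :=
  no_strategy_implies_aligned pureSSL ssl_not_strategic

-- Lists exactly which axioms the claim rests on.
#print axioms ssl_aligned
```

Since Lean tracks axiom dependencies, `#print axioms` would recover the per-approach assumption set automatically, which is one way the assumptions-as-columns table above could be populated.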