comment by jacob_cannell · 2022-11-12T18:52:47.114Z · LW(p) · GW(p)
so, my claim is that we should find the fact that the problem and the solution have a lot of research in common (studying powerful AI) to be a weird, interesting fact, and that we should generally assume the research involved in the problem won't particularly be helpful to the research involved in the solution; for example, if FAS is built in a way that is pretty different from current AI or XAI.
This essay seemed straightforward until this last paragraph, because the claim that the "problem and solution have a lot of research in common" seems to directly contradict the claim that "research that is involved in the problem won't particularly be helpful to research that is involved in the solution".
I think I can predict what you intended based on "FAS is built in a way that is pretty different from current AI or XAI", but that's a low-probability miracle at this point. The difference between FAS and XAI is just one of utility functions; both will need to be built on powerful, efficient approximations of Bayesian model-based planning agents, whose structure is dictated by physics and ends up looking like advanced neural nets, as that is simply what efficient approximate Bayesian inference over circuit space entails.
comment by Tamsin Leake (carado-1) · 2022-11-13T11:00:12.261Z · LW(p) · GW(p)
(i kinda made a mess out of that last paragraph; i've edited in a much more readable version of it)
This essay seemed straightforward until this last paragraph, because the claim that the "problem and solution have a lot of research in common" seems to directly contradict the claim that "research that is involved in the problem won't particularly be helpful to research that is involved in the solution".
okay, yeah, i didn't explain that super well. what i was trying to say was: we have found them to have a lot in common so far, in retrospect, but we shouldn't have expected that in advance and we shouldn't necessarily expect it in the future.
that's a low probability miracle at this point
this isn't to say that they'll have nothing in common; hopefully we can reuse current ML tech for at least parts of FAS. but i think that low-probability miracle is still our best bet, and a bottleneck to saving the world: i think other solutions either need to get there too eventually, or are even harder to turn into FAS. (i say that with not-super-strong confidence, however)