Posts

Comments

Comment by ersatz (Raphaël Lévy) on Scott Aaronson on "Reform AI Alignment" · 2022-11-22T17:32:41.838Z · LW · GW
Comment by ersatz (Raphaël Lévy) on I Converted Book I of The Sequences Into A Zoomer-Readable Format · 2022-11-14T16:54:36.806Z · LW · GW

You should probably use Google Neural2 voices, which are far better.
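For context, a minimal sketch of what using a Neural2 voice could look like with the google-cloud-texttospeech Python client; the voice name, input text, and output filename are placeholders rather than details from the comment:

```python
# Minimal sketch: synthesize speech with a Google Cloud Neural2 voice using
# the google-cloud-texttospeech client. Voice name and text are placeholders.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="The map is not the territory."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Neural2-C",  # one of the Neural2 voice names
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

# Write the synthesized audio to disk.
with open("output.mp3", "wb") as out:
    out.write(response.audio_content)
```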

Comment by ersatz (Raphaël Lévy) on New book on s-risks · 2022-10-30T15:45:17.878Z · LW · GW

I just bought a copy. Thank you very much for writing this book, Tobias.

Comment by ersatz (Raphaël Lévy) on Dan Luu on Futurist Predictions · 2022-09-15T15:10:49.490Z · LW · GW

An interesting section from the appendices is a criticism of Ajeya Cotra's “Forecasting Transformative AI with Biological Anchors” (the sensitivity figures it quotes are tabulated in the short sketch after the excerpt):

If you do a sensitivity analysis on the most important variable (how much Moore's law will improve FLOPS/$), the output behavior doesn't make any sense, e.g., Moore's law running out of steam after "conventional" improvements give us a 144x improvement would give us a 34% chance of transformative AI (TAI) by 2100, a 144*6x increase gives a 52% chance, and a 144*600x increase gives a 66% chance (and with the predicted 60000x improvement, there's a 78% chance), so the model is, at best, highly flawed unless you believe that going from a 144x improvement to a 144*6x improvement in computer cost gives almost as much increase in the probability of TAI as a 144*6x to 144*60000x improvement in computer cost.

The part about all of this that makes this fundamentally the same thing that the futurists here did is that the estimate of the FLOPS/$ which is instrumental for this prediction is pulled from thin air by someone who is not a deep expert in semiconductors, computer architecture, or a related field that might inform this estimate.

[...]

If you say that, based on your intuition, you think there's some significant probability of TAI by 2100; 10% or 50% or 80% or whatever number you want, I'd say that sounds plausible but wouldn't place any particular faith in the estimate. But if you take a model that produces nonsense results and then pick an arbitrary input to the model that you have no good intuition about to arrive at an 80% chance, you've basically picked a random number that happens to be 80%.
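A minimal sketch that just tabulates the figures quoted above; the multipliers and probabilities come from the excerpt, and the script is only meant to make the sensitivity point easier to see:

```python
# Tabulate the sensitivity figures quoted in the excerpt: each entry is the
# assumed FLOPS/$ improvement and the probability of TAI by 2100 the model
# reportedly assigns to it.
scenarios = [
    ("144x (conventional improvements only)", 144, 0.34),
    ("144*6x", 144 * 6, 0.52),
    ("144*600x", 144 * 600, 0.66),
    ("144*60000x (predicted improvement)", 144 * 60_000, 0.78),
]

# Print how much extra probability each successive jump in compute buys.
for (name_a, mult_a, p_a), (name_b, mult_b, p_b) in zip(scenarios, scenarios[1:]):
    print(
        f"{name_a} -> {name_b}: "
        f"{mult_b / mult_a:,.0f}x more FLOPS/$, "
        f"+{round((p_b - p_a) * 100)} percentage points of P(TAI by 2100)"
    )
```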

Comment by Raphaël Lévy on [deleted post] 2022-06-04T16:36:17.903Z

I'm just pasting the link in the rich text editor, but I don't know the Markdown syntax, sorry.

Comment by ersatz (Raphaël Lévy) on Worse than an unaligned AGI · 2022-06-03T22:06:53.373Z · LW · GW
Comment by Raphaël Lévy on [deleted post] 2022-06-01T21:26:50.755Z

Integrating Metaculus (as in the example in this comment) would be quite useful and is fairly straightforward.
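A hypothetical sketch of what such an integration could pull in, assuming Metaculus's public api2 endpoint; the question id is a placeholder and the response fields are assumptions, not anything specified in the comment:

```python
# Hypothetical sketch: fetch a Metaculus question and print a one-line summary.
# The question id is a placeholder; the api2 endpoint and the "title" field are
# assumptions about Metaculus's public API.
import requests

QUESTION_ID = 3479  # placeholder id used only for illustration

resp = requests.get(
    f"https://www.metaculus.com/api2/questions/{QUESTION_ID}/", timeout=10
)
resp.raise_for_status()
question = resp.json()

print(f"Metaculus #{QUESTION_ID}: {question.get('title', '<no title>')}")
print(f"https://www.metaculus.com/questions/{QUESTION_ID}/")
```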

Comment by ersatz (Raphaël Lévy) on Worse than an unaligned AGI · 2022-04-10T14:12:13.603Z · LW · GW

I think so; by definition, nothing can be worse than that.