Comments

Comment by RAB on We Need To Know About Continual Learning · 2023-05-23T20:30:46.107Z · LW · GW

Worth noting that LLMs are no longer using quadratic context window scaling. See e.g. Claude-Long; it seems they've figured out how to make it ~linear. GPT-4's 32K context window option for corporate clients suggests they're also no longer using quadratic scaling.
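To make the quadratic-vs-linear distinction concrete, here's a rough NumPy sketch. The actual architectures behind Claude-Long and GPT-4-32K aren't public, so this is only an illustration: vanilla attention materializes an (n, n) score matrix, while a kernelized "linear attention" variant (in the style of Katharopoulos et al. 2020) avoids it. The function names, feature map, and toy dimensions are made up for the example.

```python
import numpy as np

def standard_attention(Q, K, V):
    # Builds an explicit (n, n) score matrix, so memory and compute
    # grow quadratically with context length n.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])               # shape (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                     # shape (n, d)

def linear_attention(Q, K, V):
    # Kernelized attention: associativity of matmul lets us form (d, d)
    # summaries instead of an (n, n) matrix, so cost grows ~linearly in n.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))   # elu(x) + 1 feature map
    Qp, Kp = phi(Q), phi(K)                                # shape (n, d)
    KV = Kp.T @ V                                          # shape (d, d)
    Z = Kp.sum(axis=0)                                     # shape (d,)
    return (Qp @ KV) / (Qp @ Z)[:, None]                   # shape (n, d)

n, d = 4096, 64                                            # toy context length / head dim
Q, K, V = (np.random.randn(n, d) for _ in range(3))
print(standard_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```

Whether the long-context models actually use a trick like this, sparse/windowed attention, or something else entirely is not something I know; the point is just that the quadratic cost of the score matrix is avoidable.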

Comment by RAB on AI Forecasting: One Year In · 2022-07-04T18:26:10.382Z · LW · GW

Kudos for tracking the predictions, and for making the benchmark! I'd be really excited to see more benchmarks created that current AI does really badly on. Seems like a good way to understand capabilities going forward.

Comment by RAB on Where I agree and disagree with Eliezer · 2022-06-20T19:26:44.512Z · LW · GW

I really appreciate your including a number here; that's useful info. I'd love to see more of this from everyone in the future. I know it takes more time and energy, and operationalizations are hard, but I'd vastly prefer the easier versions over no versions at all, or over norms that favor only writing up airtight probabilities.

(I also feel much better on an emotional level hearing 20% from you; I would've guessed anywhere between 30% and 90%. Others in the community may feel similarly: I've talked to multiple people who were pretty down after reading Eliezer's last few posts.)