Comments

Comment by dhar174 on GPT-4 Specs: 1 Trillion Parameters? · 2023-04-08T17:24:33.336Z · LW · GW

You're missing the possibility that the model was larger during training than the models used for inference. It is now common practice to train a large model, then distill it into a series of smaller models that can be deployed depending on the task.
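
For anyone unfamiliar with the pattern, here is a minimal sketch of knowledge distillation in PyTorch. Everything in it (the layer sizes, temperature, and data) is an illustrative stand-in, not the actual GPT-4 setup:

```python
# Minimal knowledge-distillation sketch. Teacher/student sizes and data are
# placeholders for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

# A large "teacher" trained first, and a small "student" distilled from it.
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softmax temperature: softens the teacher's logits for the student

def distill_step(x):
    with torch.no_grad():            # the teacher is frozen during distillation
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened distributions; the T**2 factor rescales
    # gradients as in Hinton et al. (2015).
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random inputs standing in for real training data.
for _ in range(3):
    print(distill_step(torch.randn(32, 128)))
```

The point of the temperature is that the soft teacher distribution carries more information per example than hard labels, which is what lets the small student recover much of the large model's behavior.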

Comment by dhar174 on The idea that ChatGPT is simply “predicting” the next word is, at best, misleading · 2023-02-22T02:06:02.691Z · LW · GW

To those who believe language models have no internal representations of concepts:

I can at least partially refute the assumptions behind that view.

There is convincing evidence otherwise, demonstrated in an actual experiment with a GPT model trained on Othello move sequences:

https://thegradient.pub/othello/

The researchers' conclusion:

"Our experiment provides evidence supporting that these language models are developing world models and relying on the world model to generate sequences." )