Posts

Can any LLM be represented as an Equation? 2024-03-14T09:51:26.430Z
Can someone explain to me what went wrong with ChatGPT? 2024-02-24T11:50:14.762Z
What experiment settles the Gary Marcus vs Geoffrey Hinton debate? 2024-02-14T09:06:49.072Z
Why do we need an understanding of the real world to predict the next tokens in a body of text? 2024-02-06T14:43:50.559Z
A list of all the deadlines in Biden's Executive Order on AI 2023-11-01T17:14:31.074Z
Sydney the Bingenator Can't Think, But It Still Threatens People 2023-02-20T18:37:44.500Z

Comments

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on What experiment settles the Gary Marcus vs Geoffrey Hinton debate? · 2024-02-14T16:32:46.577Z · LW · GW

Thanks for your answer. Would it be fair to say that both of them are oversimplifying the other's position and that they are both, to some extent, right?

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on Why do we need an understanding of the real world to predict the next tokens in a body of text? · 2024-02-07T09:16:39.695Z · LW · GW

Thank you for your answers!

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on Why do we need an understanding of the real world to predict the next tokens in a body of text? · 2024-02-07T09:16:27.259Z · LW · GW

I think I understand, thank you. For reference, this is the tweet which sparked the question: https://twitter.com/RichardMCNgo/status/1735571667420598737

I was confused as to why you would necessarily need "understanding", rather than just simple next-token prediction, to do what ChatGPT does.

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on Why do we need an understanding of the real world to predict the next tokens in a body of text? · 2024-02-06T21:09:32.905Z · LW · GW

I think it does, thank you! In your model, does a squirrel perform better than ChatGPT at practical problem solving simply because it was “trained” on practical problem-solving examples, while ChatGPT performs better on language tasks because it was trained on language? Or is there something fundamentally different between them?

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on Why do we need an understanding of the real world to predict the next tokens in a body of text? · 2024-02-06T18:47:08.559Z · LW · GW

I don’t really have a coherent answer to that, but here goes (before reading the spoiler): I don’t think the model understands anything about the real world, because it has never experienced the real world. It doesn’t understand that “a pink flying sheep” is a language construct and not something that was observed in the real world.

Reading my answer back, maybe we don’t have any understanding of the real world either; we have just come up with some patterns based on the qualia (tokens) that we have experienced (been trained on). Who is to say whether those patterns map onto some deeper truth or not? Maybe there is a vantage point from which our “understanding” would look like hallucinations.

I have a vague feeling that I understand the second part of your answer, though I’m not sure. In that model of yours, are the hallucinations of ChatGPT just the result of an imperfectly trained model? And can a model ever be trained to perfectly predict text?

Thanks for the answer; it gave me some serious food for thought!

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on Why do we need an understanding of the real world to predict the next tokens in a body of text? · 2024-02-06T18:36:45.756Z · LW · GW

Okay, all of that makes sense. Could this mean that the model didn’t learn anything about the real world, but instead learned something about the patterns of words that earn it thumbs-ups from the RLHFers?

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on Why do we need an understanding of the real world to predict the next tokens in a body of text? · 2024-02-06T16:41:03.318Z · LW · GW

Thanks for the detailed answer! I think that helped.

Does the following make sense:

We use language to talk about events and objects (emotions, trees, etc.). Since those are things we have observed, our language will have some patterns that relate to the patterns of the world. However, the patterns in the language are not a perfect representation of the patterns in the world (we can talk about things falling away from our planet, or about fire that consumes heat instead of producing it). An LLM trained on text then learns the patterns of the language but not the patterns of the world. Its "world" is only language, and that's the only thing it can learn about.

Does the above sound true? What are the problems with it?

I am ignoring your point that neural networks can be trained on a host of other things, since there is little discussion around whether or not Midjourney "understands" the images it is generating. However, the same point should apply to other modalities as well.

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on ' petertodd'’s last stand: The final days of open GPT-3 research · 2024-01-23T14:38:32.553Z · LW · GW

I love the idea that petertodd and Leilan are somehow interrelated with the archetypes of the trickster and the mother goddess inside GPT's internals. I would love to see some work done on discovering other such archetypes, and the weird, seemingly random tokens that correlate with them. Things like the Sun God, a great evil snake, or a prophet seem to pop up in religions all over the place, so why not inside GPT as well?

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on A list of all the deadlines in Biden's Executive Order on AI · 2023-11-01T22:43:05.084Z · LW · GW

Glad to hear that!

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on Update on the UK AI Taskforce & upcoming AI Safety Summit · 2023-10-13T17:03:04.639Z · LW · GW

On the bright side, Connor Leahy from Conjecture is going to be at the summit, so there will be at least one strong voice for existential risk present there.

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on Bing Chat is blatantly, aggressively misaligned · 2023-02-20T23:21:35.408Z · LW · GW

For what it’s worth, it’s probably a good thing that the Bing chatbot is like that. The overall attitude towards AI for the last few months has been one of unbridled optimism, and people seeing a horribly aligned model in action might be a wake-up call for some, showing that the people deploying those models are unable to control them.

Comment by Valentin Baltadzhiev (valentin-baltadzhiev) on Sydney the Bingenator Can't Think, But It Still Threatens People · 2023-02-20T21:23:08.629Z · LW · GW

You make interesting points. What about the other examples of the ToM task (the agent writing the false label themselves, or having been told by a trusted friend what is actually in the bag)?