comment by [deleted] · 2023-01-20T00:03:50.273Z
The AI seems rather unnecessarily verbose, and most of its response seems to be word padding around the first-order logic that you defined in the paragraph that starts with "It's called Paretoism." It reminds me of corporate speak. There is really no deeper response, just dancing around the definition that "this is bad, that is good." The first-order logic you have defined roughly goes like this: an action is good if and only if, for every entity, the consequence is good; an action is bad if there exists an entity for which the consequence is bad.
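Spelled out in my own notation (not the post's), with consequence(a, e) standing for what action a does to entity e:

$$\text{Good}(a) \iff \forall e.\ \text{good}\big(\text{consequence}(a, e)\big)$$

$$\exists e.\ \text{bad}\big(\text{consequence}(a, e)\big) \implies \text{Bad}(a)$$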
This made me question whether most communication lacks substance in the first place. In most conversations, the first-order logic you would derive from the statements doesn't seem to involve many variables or operations. Take the layoff letter, for instance: you'd think the CEO is hired to be the CEO because he's good at corporate speak. If corporate speak lacks substance and is filled with word padding, then what does it mean to be good at the CEO job? Maybe behind the scenes the CEO actually has to make complex decisions based on a lot of data, and the corporate speak is just a front-facing necessity, part of how things are done.
↑ comment by Jonas Metzger (jonas-metzger) · 2023-01-20T17:09:44.976Z
Yeah, I already edited out some verbosity. ChatGPT is just trained to hedge too much currently. Should I take out more?
It seems to have distracted a bit from the purpose of the post: that we can define an unobjectionable way to aggregate utilities and have an LLM follow it, while still being useful for its owner.
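For what it's worth, the aggregation rule itself is simple. Here is a toy sketch of the Pareto check (my own illustration for this comment, not code from the post), where an action only passes if no entity ends up worse off:

```python
from typing import Dict

def pareto_acceptable(utility_change: Dict[str, float]) -> bool:
    """Toy Pareto check: an action passes only if it leaves no entity
    worse off, i.e. every utility change is non-negative."""
    return all(delta >= 0 for delta in utility_change.values())

# Helps the owner but harms a bystander -> rejected.
print(pareto_acceptable({"owner": 2.0, "bystander": -0.5}))  # False
# Helps the owner and harms no one -> acceptable.
print(pareto_acceptable({"owner": 1.0, "bystander": 0.0}))   # True
```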
↑ comment by [deleted] · 2023-01-20T17:27:13.512Z
I think the verbosity is learned through corpus curation. They've been using a casual conversational tone to train chat models for a while now; even the early AI chatbot prototypes around 15 years ago had the same kind of conversational verbosity. This is just how they want to model the AI for chat; it's mostly a user-friendly HCI thing. About five years ago there were news-article summarization systems (which I think GPT-3 and ChatGPT also draw on) that reduced big texts into entities and boiled some sentences down to their first-order logic using NLP indicators such as keywords and parts of speech. Maybe it wasn't an LLM; I didn't look into the code itself.
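Roughly the kind of pipeline I mean, sketched with spaCy (my guess at the approach; I don't know what those systems actually used):

```python
import spacy

# Rough sketch of entity/POS-based reduction, in the spirit of the older
# extractive summarizers mentioned above (not any specific system).
nlp = spacy.load("en_core_web_sm")

def reduce_to_triples(text: str):
    """Pull out named entities and crude (subject, verb, object) triples
    using part-of-speech and dependency labels."""
    doc = nlp(text)
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subj = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            obj = [c.text for c in token.children if c.dep_ in ("dobj", "obj", "attr")]
            if subj and obj:
                triples.append((subj[0], token.lemma_, obj[0]))
    return entities, triples

print(reduce_to_triples("Acme Corp laid off 500 employees in January."))
```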
I think that, in general, AGI can't take into account context that isn't parametrized: for instance, the same text under different contexts (different entities involved, different time periods and settings, characteristics of the entities themselves, etc.). That's what separates these machines from biological intelligence. If you can model everything a biological organism experiences as parameters you can feed into an AI, then you can achieve AGI; otherwise, the more of that data you are missing, the further away you are from AGI.