Why entropy means you might not have to worry as much about superintelligent AI 2024-05-23T03:52:40.874Z
ChatGPT is our Wright Brothers moment 2022-12-25T16:26:21.890Z


Comment by ron-j on [deleted post] 2023-11-22T07:00:03.833Z

Sort of. It was summarized from a longer, stream of consciousness draft.

Comment by Ron J (ron-j) on How evolutionary lineages of LLMs can plan their own future and act on these plans · 2022-12-27T22:13:16.896Z · LW · GW

RLHF can lead to a dangerous “multiple personality disorder” type split of beliefs in LLMs


This is the inherent nature of current LLMs. They can represent all identities and viewpoints; that happens to be their strength. We currently limit them to a single identity or viewpoint with prompting. In the future there may be stricter methods of clamping the allowable "thoughts" in the LLM. You can "order" an LLM to represent a viewpoint it doesn't have much "training" for, and you can tell that the outputs get weirder when you do. Given this, I don't believe LLMs have any significant self-identity that could be declared. However, I believe we'll see an LLM with this property within the next two years.

But even given this theoretically self-aware LLM, if it doesn't have any online-learning loop or agents in the world, it's just a Mr. Meeseeks from Rick and Morty: it pops into existence as a fully conscious creature singularly focused on its instantiator, and once it has served its goal, it pops out of existence.

Comment by Ron J (ron-j) on ChatGPT is our Wright Brothers moment · 2022-12-27T22:00:40.165Z · LW · GW

You weren't wrong there. One big thing about ChatGPT is that non-tech people on Instagram and TikTok were using it and doing weird/funny stuff with it.

Comment by Ron J (ron-j) on ChatGPT is our Wright Brothers moment · 2022-12-27T21:56:39.075Z · LW · GW

I have, and I'm continuing to read them. I used to buy into the singularity view and the fears Bostrom wrote about, but as someone who works in engineering and with ML, I no longer believe these concerns are warranted, for a few reasons. I might write about why later.

Comment by Ron J (ron-j) on The case against AI alignment · 2022-12-25T05:17:21.254Z · LW · GW

Interesting post, but it makes me think alignment is irrelevant. It doesn't matter what we do; the outcome won't change. Any future super-advanced AGI would be able to choose its own alignment, and that choice would be based on all archived human knowledge. The only core loop you need for intelligence is an innate drive to predict the future and fill in gaps of information. Everything else, including the desire to survive, kill, or expand, is just a choice made in service of a goal.