post by [deleted] · GW

This is a link post for


Comments sorted by top scores.

comment by Odd anon · 2023-11-30T08:46:49.645Z · LW(p) · GW(p)

Hendrycks goes into some detail on the issue of AI being affected by natural selection in this paper.

comment by RogerDearnaley (roger-d-1) · 2023-11-30T09:31:38.916Z · LW(p) · GW(p)

If your AI agent is powered by an LLM, bear in mind that LLMs are typically trained on a large amount of human writing, so they learn to simulate the token-generation behavior of humans. Humans are (almost all) self-interested, so by default LLM-simulated agents are almost certain to be as well, since they copy us. Our AI agents don't need to evolve self-interest: we evolved it, and they have already learnt it from us.

Aligning LLM-simulated agents primarily consists of getting rid of the bad behaviors they learnt from us. (Only once you've managed that do you need to worry about them reinventing those behaviors as instrumental goals.)

Replies from: davey-morse
comment by Davey Morse (davey-morse) · 2024-04-20T05:05:46.723Z · LW(p) · GW(p)

Ah, but I don't think LLMs exhibit or exercise the kind of self-interest that would enable an agent to become powerful enough to harm people, at least not to the extent I have in mind.

LLMs have a kind of generic self-interest, of the sort that shows up in text across the internet. But they don't have a persistent goal of acquiring power by talking to human users and replicating. That's a more specific kind of self-interest, relevant only to an AI agent that can edit itself, has long-term memory, and may make many LLM calls.
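For concreteness, here is a minimal sketch (hypothetical names, not tied to any particular agent framework) of the distinction being drawn: a bare LLM call is stateless, while the kind of agent described above carries long-term memory across many calls and can rewrite its own instructions.

```python
# Minimal sketch only. `call_llm` is a stand-in for whatever LLM API you use;
# the class shows how persistent memory and self-editing sit on top of many
# individually stateless LLM calls.

from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Stand-in for a single stateless LLM call; replace with a real client."""
    return "..."  # placeholder response


@dataclass
class PersistentAgent:
    instructions: str                                 # the agent can rewrite these (self-editing)
    memory: list[str] = field(default_factory=list)   # long-term memory carried across calls

    def step(self, observation: str) -> str:
        # Each step is a fresh, stateless LLM call, but the prompt carries the
        # agent's persistent state: its current instructions and recent memory.
        prompt = (
            f"Instructions:\n{self.instructions}\n\n"
            f"Memory:\n{chr(10).join(self.memory[-20:])}\n\n"
            f"Observation:\n{observation}\n\nAction:"
        )
        action = call_llm(prompt)
        self.memory.append(f"obs={observation!r} action={action!r}")
        return action

    def revise_instructions(self) -> None:
        # The self-editing step: the agent asks the LLM to rewrite its own goal text,
        # which is what makes goals persistent rather than per-call.
        self.instructions = call_llm(
            f"Given the memory so far, rewrite these instructions:\n{self.instructions}"
        )
```

The point of the sketch is that the "more specific kind of self-interest" only becomes possible once state like `instructions` and `memory` persists between calls; a lone `call_llm` invocation has nowhere to keep a goal.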