post by [deleted]

This is a link post for


comment by tailcalled · 2024-01-09T13:36:09.279Z

A paperclip maximizer is one way an AI system might act unexpectedly, but we could also imagine that AI researchers had less alarming possibilities in mind, e.g. that AIs might find unexpected shortcuts that allow goals to be achieved with less intense interventions.

Replies from: anders-lindstroem
comment by Anders Lindström (anders-lindstroem) · 2024-01-09T15:32:44.080Z

It does not have to turn into a paperclip maximizer; it could turn into a billion other things that are equally bad and behave unexpectedly. Again, to me "unexpected" means it has behaved in a way that no one had foreseen or predicted. Hence any guardrails that worked would work only by chance or luck. Either we know how the system will behave and can control it, or we don't. Any unexpected or surprising behavior COULD spell disaster. If you think that this kind of system behavior is to be expected, then in my world you are just saying that you expect disaster to happen.

Replies from: tailcalled
comment by tailcalled · 2024-01-09T15:37:55.703Z

You want the AI to do unexpected things, otherwise you might as well just encode all your expectations as GOFAI.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-01-09T18:59:35.615Z

Can you rephrase what you mean by "you want"? I'm not sure I follow what level of implication that's on, or who wants it and why.