damiensnyder's Shortform

post by damiensnyder · 2021-01-08T08:12:35.300Z · LW · GW · 1 comment

comment by damiensnyder · 2021-01-08T08:12:35.839Z · LW(p) · GW(p)

I have only a broad overview of AI, AI risk, and the surrounding topics. I've encountered the idea that a superintelligent AI picking an optimal future is likely to be lured by "siren worlds," which are bad but optimized to seem optimal. I vaguely grasped why this might happen, but I didn't give the theory much credence. (Mainly, I thought it more likely that a genuinely good world would seem optimal than that a bad one would.) However, I just discovered this comment [LW(p) · GW(p)]:

This optimistic conjecture could be tested by looking to see what image maximally triggers an ML classifier. Does the perfect cat, the most cat-like cat according to ML, actually look like a cat to us humans? If so, then by analogy the perfect utopia according to ML would also be pretty good. If not...

I have seen images optimized for "maximum possible score for cat," and they don't look like cats. Usually they look like mutant cats with three faces and five eyes. This question and the "siren world" concept seem related to each other. Is the connection between image classifiers and siren worlds:

  • evidential (i.e., image classifiers should strengthen my belief that superintelligent AI would be lured by siren worlds);
  • not evidential, but useful as an analogy, because the scenarios share similar causes;
  • useful as an analogy, though the technical concerns of the two scenarios are essentially unrelated; or
  • not applicable?
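
For concreteness, the test proposed in the quoted comment amounts to activation maximization: start from noise and follow the gradient of one class score. Below is a minimal sketch, assuming PyTorch and a pretrained torchvision classifier; the specific model, target class, and hyperparameters are illustrative choices on my part, not anything from the original discussion.

```python
import torch
import torchvision.models as models

# Pretrained ImageNet classifier; index 281 is "tabby cat" in ImageNet's labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input image should be optimized
TARGET_CLASS = 281

# Start from random noise and ascend the gradient of the target class logit.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Maximize the target logit by minimizing its negation.
    loss = -logits[0, TARGET_CLASS]
    loss.backward()
    optimizer.step()

# Without regularization (jitter, blurring, frequency penalties), the
# optimized image typically looks like the many-eyed "mutant cat"
# described above rather than a natural cat.
```

In practice, feature-visualization work only produces recognizable images by adding strong priors and regularizers on top of this loop; the raw optimum of the score is usually not the thing the score was meant to measure, which is exactly the worry the analogy points at.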