Strategic Considerations Regarding Autistic/Literal AI

post by Chris_Leong · 2022-04-06T14:57:11.494Z · LW · GW · 2 comments

Epistemic Status: Take with a grain of salt. This post was written relatively quickly and relies heavily on an analogy between AI and human behaviour. It takes a sledgehammer to the usual concerns about such analogies and tries to reason things out using them anyway. I'd encourage others to consider whether I've fallen into the trap of mistakenly anthropomorphising AI.

Update:

  1. Given the stakes of the alignment problem, I made the decision to emphasise clarity over political correctness.


comment by Garrett Baker (D0TheMath) · 2022-04-06T20:27:55.660Z · LW(p) · GW(p)

I don't think your use of "autistic" in this post was very clarifying. Do you just mean that the AI doesn't consider the context of the problem we give it in order to deduce the actual problem? If so, it's not clear to me that an AI with greater capabilities will necessarily be "less autistic".

comment by Chris_Leong · 2022-04-07T03:35:49.125Z · LW(p) · GW(p)

I meant that it takes instructions a bit too literally because it doesn't fully understand the implicit context behind them.