Yair Halberstadt's Shortform

post by Yair Halberstadt (yair-halberstadt) · 2022-11-08T19:33:53.853Z · LW · GW · 3 comments

comment by Yair Halberstadt (yair-halberstadt) · 2023-02-16T08:16:21.547Z · LW(p) · GW(p)

It seems LLMs are less likely to hallucinate answers if you end each question with 'If you don't know, say "I don't know"'.

They still hallucinate a bit, but less. Given how easy it is, I'm surprised OpenAI and Microsoft don't already do this.
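To make the trick concrete, here is a minimal sketch of wiring that suffix into an API call. It assumes the OpenAI Python client and the model name "gpt-3.5-turbo"; both are illustrative choices, since the comment doesn't specify an implementation.

```python
# Minimal sketch: append an "I don't know" escape hatch to every question.
# Assumes the OpenAI Python client (openai >= 1.0) and the model name
# "gpt-3.5-turbo"; both are illustrative, not from the original comment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUFFIX = ' If you don\'t know, say "I don\'t know".'

def ask(question: str) -> str:
    # The suffix gives the model a sanctioned way to abstain instead of guessing.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question + SUFFIX}],
    )
    return response.choices[0].message.content

# An obscure question, where abstaining beats confidently guessing.
print(ask("Who was the mayor of Ulm in 1764?"))
```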

comment by Gunnar_Zarncke · 2023-02-16T18:03:18.499Z · LW(p) · GW(p)

This has its own failure modes. What does it even mean not to know something? "I don't know" is just yet another category of possible answer.

Still a nice prompt. Also works on humans.

comment by Yair Halberstadt (yair-halberstadt) · 2022-11-08T19:33:54.135Z · LW(p) · GW(p)

Fun fact I just discovered: Asian elephants are actually more closely related to woolly mammoths than they are to African elephants!