Illusion of truth effect and Ambiguity effect: Bias in Evaluating AGI X-Risks
post by Remmelt (remmelt-ellen) · 2023-01-05T04:05:21.732Z · LW · GW · 2 comments
comment by justinpombrio · 2023-01-06T02:03:17.049Z · LW(p) · GW(p)
Meta comment: I'm going to be blunt. Most of this sequence has been fairly heavily downvoted. That reads to me as this community asking to not have more such content. You should consider not posting, or posting elsewhere, or writing many fewer posts of much higher quality (e.g. spending more time, doing more background research, asking someone to proofread). As a data point, I've only posted a couple times, and I spent at least, I dunno, 10+ hours writing each post. As an example of how this might apply to you, if you wrote this whole sequence as a single "reference on biases" and shared that, I bet it would be better received.
comment by Remmelt (remmelt-ellen) · 2023-01-05T04:12:03.681Z · LW(p) · GW(p)
Suppose the Illusion of Truth effect and the Ambiguity effect are each biasing how researchers in AI Safety evaluate one of the options below.
If you had to choose, which bias would more likely apply to which option?
- A: Aligning AGI to be safe over the long term is possible in principle.
- B: Long-term safe AGI is impossible fundamentally.