post by [deleted]

This is a link post for



comment by Seth Herd · 2025-02-08T23:04:05.337Z

Hi! I'm just commenting to explain why this post will get downvotes no matter how good it is. I personally think these are good reasons, although I have not downvoted this post myself.

  1. We on LessWrong tend to think that improvements in LLM cognition are likely to get us all killed, so articles proposing ways to make that progress faster are not popular. The site is chock-full of carefully reasoned articles on the risks of AGI. We assume that progress in AI capabilities will speed up the advent of AGI and raise the odds that we die because we haven't adequately solved the alignment problem by then. Thus, we don't typically share ideas for improving AI capabilities here. I encourage you to take the arguments for risk seriously; people who dismiss those risks almost never seem to really understand the arguments for severe risks from better-than-human AGI.

  2. We do not trust ChatGPT or other LLMs as authors. They are good at generating ideas but not at discerning which ones are really valid and valuable. We therefore worry that large amounts of LLM-authored content will be "AI slop" that confuses everyone more than it produces valuable ideas or clarifies our thinking. Using an LLM as an assistant author is tolerated (with some suspicion), while using LLMs as full co-authors is discouraged.