6 (Potential) Misconceptions about AI Intellectuals

post by ozziegooen · 2025-02-14T23:51:44.983Z · LW · GW · 2 comments

Contents

2 comments


comment by habryka (habryka4) · 2025-02-15T03:46:51.748Z · LW(p) · GW(p)

While artificial intelligence has made impressive strides in specialized domains like coding, art, and medicine, I think its potential to automate high-level strategic thinking has been surprisingly underrated. I argue that developing "AI Intellectuals" - software systems capable of sophisticated strategic analysis and judgment - represents a significant opportunity that's currently being overlooked, both by the EA/rationality communities and by the public.

FWIW, this paragraph reads as LLM-generated to me (then I stopped reading, because I have a strong prior that content which reads as that LLM-edited is almost universally low-quality).

Replies from: ozziegooen
comment by ozziegooen · 2025-02-15T05:21:45.435Z · LW(p) · GW(p)

Thanks for letting me know. 

I spent a while writing the piece, then used an LLM to edit the sections, as I flagged in the intro. 

I then spent some time re-editing it back to more of my voice, but only did so for some key parts. 

I think that overall this made it more readable, and I consider the sections to be fairly clear. But I agree that it does pattern-match to LLM output, so if you have a prior that work which sounds like that is bad, you might skip this.

I obviously find that fairly frustrating, and I don't use that strategy much myself, but I can understand it.

Bigger-picture, I assume that authors and readers could both benefit a lot from LLMs used in ways like this (cleaner writing, produced more easily), but I guess we're now at an awkward point.