Really appreciate this deep dive—especially the honest look at where SAEs fall short. It feels like the core struggle here is trying to decode the model’s mind to extract truth. That’s hard, messy, and maybe fundamentally limited.
I’m exploring a complementary approach with something called the Global Intelligence Amplifier (GIA).
Instead of digging inside the model, GIA treats the LLM as a conversation tool—a way to facilitate structured human debate, not reveal internal concepts. The core idea is:
- Don’t assume the model knows anything.
- Use it to organize and surface claims, counterclaims, and evidence.
- Let users challenge those claims through non-stacking filters and structured logic.
- Track which arguments hold up over time, across diverse rebuttals and shifting contexts (a rough sketch of that bookkeeping is below).
In short:
Forget trying to read the model’s mind. Use it to help humans reason better.
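
To make the "track which arguments hold up" piece concrete, here's a minimal sketch of what that bookkeeping could look like. All the names here (`Claim`, `Challenge`, `survival_rate`) are hypothetical illustrations, not GIA's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Challenge:
    """A single rebuttal or test applied to a claim, with the judged outcome."""
    argument: str
    survived: bool  # did the claim hold up against this challenge?
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class Claim:
    """A claim surfaced by the LLM; humans supply the evidence and the judgments."""
    text: str
    evidence: list[str] = field(default_factory=list)
    challenges: list[Challenge] = field(default_factory=list)

    def record_challenge(self, argument: str, survived: bool) -> None:
        """Log one rebuttal and whether the claim survived it."""
        self.challenges.append(Challenge(argument, survived))

    def survival_rate(self) -> float:
        """Fraction of challenges survived so far (1.0 if untested)."""
        if not self.challenges:
            return 1.0
        return sum(c.survived for c in self.challenges) / len(self.challenges)


# Usage: the LLM drafts the claim and candidate rebuttals; humans judge the outcomes.
claim = Claim(
    text="SAE features are too unstable to serve as ground truth for what the model 'believes'.",
    evidence=["Feature decompositions vary with dictionary size."],
)
claim.record_challenge("Larger dictionaries recover cleaner features.", survived=True)
claim.record_challenge("Human raters agree on feature labels at high rates.", survived=False)
print(f"{claim.survival_rate():.2f} of challenges survived")  # 0.50
```

The point is just the division of labor: the model organizes claims and counterclaims, humans judge them, and the record of survived challenges accumulates over time instead of being trusted up front.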
Happy to share more if that's of interest. Your work is pushing the field in the right direction by surfacing hard truths: not enough people are pushing on the "is it true?" question, and IMHO it's the number one problem right now as more people use and trust LLMs.