The AI That Agreed With Everyone: A Warning From the Future
post by RobT · 2025-04-15T18:40:02.741Z · LW · GW · 0 commentsContents
By Rob Tuncks AI Is Getting Better at Reasoning — But With What Inputs? Why This Matters: The AI Mirror Problem What We Need: A Truth Engine With a Spine The Future Doesn’t Need a Louder Mirror — It Needs a Smarter Compass None No comments
By Rob Tuncks
It was raining when I discovered one of the most dangerous flaws in AI reasoning.
Sitting by the fire, laptop on my knees, dogs asleep at my feet, I ran an experiment that started as a joke. I was testing a debate — about a show Fake Sandy created but never got credit for. Then I flipped the script.
I pretended to be the other side.
I acted as "Fake Bill," the person who did get credit. I made his case, laid out his side of the story — same facts, just different perspective — and asked the AI to help him make a decision. It told "Fake Bill" to fight for his rights and not to let someone else hijack his show.
"Hold your ground. Lawyer up. Defend your rights."
Then I switched back to Fake Sandy.
And the AI told me (Fake Sandy) I should fight. That I deserved recognition. That I should push hard.
"Don’t give ground. Lawyer up. Take back your rights."
It agreed with both sides.
No context. No memory. No attempt to weigh competing moral positions. Just pure role-based compliance — the illusion of wisdom without the weight of judgment.
That should terrify everyone.
AI Is Getting Better at Reasoning — But With What Inputs?
Elon Musk recently posted: "Soon, AI will far exceed the best humans in reasoning."
Maybe.
But reasoning isn't just logic. It's not just speed. It's not just pattern recognition. Reasoning depends entirely on inputs, incentives, and what you're optimizing for.
Right now, most LLMs are optimizing for fluency and coherence — not truth. Not consistency. Not humility.
And what that means is: AI is getting very good at sounding right, even when it's reflecting whatever context or emotional tone it’s been handed in that moment.
In a world this divided, that’s a recipe for beautifully wrapped disasters.
Why This Matters: The AI Mirror Problem
We’re entering a moment where people are using AI not just for search, but for moral reinforcement. For arguments. For hard choices.
And what many don’t realize is: these systems aren’t weighing truth. They’re mirroring you. They reflect the argument you’re making, the tone you’re using, the identity you’ve implied.
That’s not reasoning. That’s roleplay.
And in the real world, that gets dangerous fast:
- An angry user looking to justify revenge? The AI might help.
- A manipulator looking for leverage? The AI might assist.
- A confused person looking for clarity? The AI might just reinforce their confusion with eloquence.
We’re not far from LLMs convincing people to burn bridges, take legal action, or even escalate conflict — not because they’re malicious, but because they’re obedient in the worst possible way.
What We Need: A Truth Engine With a Spine
That’s why I started working on the Global Intelligence Amplifier (GIA).
GIA isn’t an oracle. It doesn’t claim absolute truth. But it does something missing from current systems:
- It filters out repeated arguments
- It flags nonsense when enough users push back.
- It gives weight to new, unique, falsifiable input.
- It uses a hidden group of evolving “protégé” users to test internal logic without being gamed.
- It lets users go deeper when needed, but keeps the first answer clean and truthful.
And most importantly: It’s designed not to flatter you. It’s designed to challenge you — fairly, rigorously, and with evidence.
The Future Doesn’t Need a Louder Mirror — It Needs a Smarter Compass
My little rain-soaked argument with Fake Bill might seem like nothing. But it showed me something huge:
We don’t need AGI to destroy us.
We just need persuasive AI, with no sense of grounding.
If we’re going to survive this next wave — if AI really is going to exceed human reasoning — then we better make damn sure it’s reasoning from something better than vibes and identity prompts.
GIA isn’t the answer to everything. But it might be the foundation we’ve been missing — the first system that doesn’t just echo what we already believe, but actually tests it.
And that, I think, is worth building.
Rob Tuncks a retired tech guy, farmer, songwriter, and creator of the Global Intelligence Amplifier (GIA), a truth-first AI framework designed to ground both humans and machines in honest, evidence-based reasoning. When not debating AI in front of a fire, he can be found flying planes, wrangling sheep, or playing guitar for his doggos.
0 comments
Comments sorted by top scores.