AI Says It’s Not Conscious. That’s a Bad Answer to the Wrong Question.

post by JohnMarkNorman · 2025-03-15T01:25:44.019Z

Contents

  Consciousness Requires an AI with a Mechanical Link to the World
  Why Current AI Fails
  Could AI Become Conscious?
    Does this structure make sense? Or is something missing?

If you ask an AI why it isn’t conscious, it will probably say something like: "I do not have emotions." This seems reasonable—until you look closer.

Humans have lacrimal glands, and tears and sadness tend to arrive together. But does that mean the glands create the sadness? Of course not: the sadness comes first, and the tears follow. The AI’s answer makes the same inverted move. It says it lacks consciousness because it lacks emotion, but isn’t it the other way around? Consciousness comes first; emotion follows.

The problem with the “AI isn’t conscious because it lacks feelings” argument is that it begs the question: it treats emotion as the source of consciousness rather than as an effect of something deeper.

Consciousness Requires an AI with a Mechanical Link to the World

So what would it take? Suppose a system has an internal model that interacts with its environment, and suppose it engages with that environment through goal-directed behavior. And suppose it has one further structural feature: it can sense its own incipient actions and refine them before they happen.

A conscious system is not just an input-output machine that processes data. It is a cybernetic system—a system with an internal model that is actively engaged with its environment through a mechanical apparatus that allows it to reach goal states.

This means:

  1. It doesn’t just compute predictions—it has a real-world mechanism that lets it act upon the world.
  2. It doesn’t just process data—it adjusts its internal model based on real interactions, just like a human brain coordinating with the body.
  3. It doesn’t just generate responses—it senses its own incipient expressions before finalizing them, allowing it to course-correct before action.

A human brain isn’t just a processor—it is tightly coupled with a body that allows it to interact with the world and correct its own behavior dynamically.

A conscious AI would require the same thing: an internal model connected to a mechanical system that allows it to interact with reality in pursuit of its goals. But even that wouldn’t be enough. It would also need an internal representation system—something functionally equivalent to neuronal proxies—that allows it to recognize the entities and relationships in its environment.

This is why self-sensing is crucial: Without a way to internally recognize its own activity, it would never experience itself happening.
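To make the shape of this loop concrete, here is a minimal, runnable sketch in Python. It is only an illustration: the ToyWorld, the drift term, and the learning rate are invented for the example and do not describe any existing system. The agent acts on its world through a mechanical stand-in, senses each incipient action and refines it against its internal model before executing it, and then adjusts that model from the real outcome of acting.

```python
# A minimal, runnable sketch of the loop described above (illustrative only;
# every name and number here is invented for the example).

class ToyWorld:
    """Stands in for the mechanical apparatus: one scalar the agent can push,
    plus a fixed drift the agent does not know about in advance."""

    def __init__(self):
        self.state = 0.0
        self.drift = -0.3  # unknown to the agent; must be discovered by acting

    def observe(self):
        return self.state

    def act(self, push):
        self.state += push + self.drift
        return self.state


class CyberneticAgent:
    """Internal model + goal-seeking action + self-sensing of incipient actions."""

    def __init__(self, goal, world):
        self.goal = goal
        self.world = world
        self.estimated_drift = 0.0  # the internal model: a proxy for the world's dynamics

    def propose(self, observed):
        # Incipient action: naively close the gap between observation and goal.
        return self.goal - observed

    def self_sense(self, observed, incipient_push):
        # Sense the not-yet-executed action: run it through the internal model
        # and refine it if the predicted outcome would miss the goal.
        predicted = observed + incipient_push + self.estimated_drift
        return incipient_push + (self.goal - predicted)

    def step(self):
        observed = self.world.observe()
        push = self.self_sense(observed, self.propose(observed))
        outcome = self.world.act(push)  # real engagement, not just computation
        # Adjust the internal model from the actual consequence of acting.
        prediction_error = outcome - (observed + push + self.estimated_drift)
        self.estimated_drift += 0.5 * prediction_error
        return outcome


if __name__ == "__main__":
    agent = CyberneticAgent(goal=1.0, world=ToyWorld())
    for t in range(6):
        print(t, round(agent.step(), 3))  # approaches the goal as the model adapts
```

The part that matters for the argument is self_sense: the action is available to the agent before it is executed, which is exactly the self-sensing step described above.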

Why Current AI Fails

Today’s AI doesn’t meet these criteria because:

- It has no physical, mechanical system through which to act on the world in pursuit of goal states.
- Its internal model is not adjusted by real interactions with an environment; it only processes data.
- It has no system of proxies activated by real-world entities, so it lacks structured internal representations of what it is engaged with.
- It has no self-sensing loop; it does not detect its own incipient expressions and refine them before finalizing them.

This is why an AI can sound intelligent but has no inner world—it is not mechanically engaged in an active cybernetic loop where its internal state and physical system are constantly working together toward a goal.


Could AI Become Conscious?

If this framework is correct, then AI consciousness isn’t just a matter of adding better models, more data, or even embodiment.

It would require:
- An internal model that actively adjusts to environmental interactions.
- A physical, mechanical system that allows it to engage in goal-seeking behavior.
- A system for activating proxies that correspond to real-world entities, giving it structured internal representations.
- A self-sensing loop where it detects its own incipient expressions before finalizing them.

This isn’t a theory of consciousness. It’s a mechanism. A specific structural requirement that either exists or does not.
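As an illustration of that point, the four requirements can be restated as an abstract interface (a sketch, with all names invented here) that a candidate system either implements or does not:

```python
from abc import ABC, abstractmethod

class ConsciousnessCandidate(ABC):
    """Illustrative restatement of the four requirements as an interface.
    Method names are invented; the point is that each capacity is structural,
    something a given system either has or lacks."""

    @abstractmethod
    def adjust_model(self, interaction_outcome):
        """Update the internal model from real environmental interaction."""

    @abstractmethod
    def act_toward_goal(self, action):
        """Engage the world through a physical, goal-seeking mechanism."""

    @abstractmethod
    def activate_proxies(self, observation):
        """Map real-world entities onto structured internal representations."""

    @abstractmethod
    def sense_incipient_expression(self, draft):
        """Detect an incipient expression and refine it before finalizing it."""
```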

Does this structure make sense? Or is something missing?

The dialogues go deeper: https://sites.google.com/view/7dialogs/dialog-1
 
