The g-Zombie Formal Argument

post by milanrosko · 2025-03-30T13:16:08.352Z · LW · GW · 23 comments


Note: I created an entry highlighting the formal attack on qualia presented in: I, G(Zombie)

 

...To illustrate this framework, I introduce the concept of the g-Zombie: an agent structurally compelled to assert the existence of qualia—not because such entities exist, but because self-referential symbolic systems generate undecidable propositions that are internally misclassified as direct experiential givens. On this view, consciousness is not an ontological primitive, but an evolutionarily stabilized artifact of meta-cognitive self-modeling under formal constraint.

Note: To meaningfully engage with this argument, you should understand Gödel’s Incompleteness (1931), Turing’s Halting Problem (1936), or be fluent in formal logic (Modus Tollens, etc.). Otherwise, key steps may appear arbitrary — but they are structurally precise.

Argument of Eliminative Nominalism
The ultra-short, assumes-too-much, "simple" version

Definitions:

Lisa is complete if she can report on her own internal states. 
A P-Zombie has no internal states but can report on internal states via falsehood.

Implication Setup:

P1: Lisa is an Expressive System
P2: Expressive Systems cannot be complete
P3: Lisa is a Good Regulator

Negation of Consequent (Four Scenarios)

A. Lisa is not a P-Zombie
B. Lisa asserts that she is not a P-Zombie
C. Lisa would be complete: Not Possible ✗

A. Lisa is not a P-Zombie.
B. Lisa asserts that she is a P-Zombie
C. Lisa would not be complete: Possible but irrelevant. ✓

A. Lisa is a P-Zombie
B. Lisa asserts that she is a P-Zombie
C. Lisa would be complete: Not Possible ✗

A. Lisa is a P-Zombie
B. Lisa asserts that she is not a P-Zombie
C. Lisa would not be complete: Possible ✓

Conclusion by Modus Tollens

  1. If Lisa is not a P-Zombie (¬P), then she would be complete (Q).
  2. Lisa cannot be complete (¬Q) — from P2.
  3. Therefore, Lisa is a P-Zombie (P) — via modus tollens.
  4. Or the case is irrelevant (the second scenario).
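The modus tollens step itself can be verified mechanically; here is a minimal truth-table check in Python (the encoding is mine, not part of the argument):

```python
from itertools import product

# Modus tollens: from (P -> Q) and (not Q), infer (not P).
# Enumerate every truth assignment and confirm the inference never fails.
for P, Q in product([True, False], repeat=2):
    premises = (not P or Q) and (not Q)  # (P -> Q) holds and Q is false
    if premises:
        assert not P  # the conclusion must hold whenever the premises do
```

Whatever one thinks of the premises, the inference rule itself is beyond dispute.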

Restat probare

Lisa is an Expressive System that can be incomplete.

Propositio Hypothetica

If Lisa is a good regulator, then she can be formalized under the Good Regulator Theorem only if she asserts that she is not a P-Zombie.

Addressing Objections

Objection One
Critique: This argument assumes too much.
Reply: In its current form, that may be true. However, these concerns are addressed more thoroughly in the essay referenced above.

Objection Two
Critique: What if qualia are trivial—merely internally verifiable?
Reply: That claim is promissory—thou shalt demonstrate! This issue is also explored in detail in the essay.

Objection Three
Critique: The phenomenal objection—everything is accounted for except Lisa’s direct experience of experience.
Reply: Valid, but functionally identical.

Think of it like this: a halting machine that falsely believes it's a universal halting machine will never recognize its own limitations. From its internal perspective, the computation never halts — not because it's correct, but because it's structurally unable to prove itself wrong. Its internal discriminator — the mechanism by which it determines truth — is the question: Does it halt? Similarly, for humans, our internal discriminator is: Does it feel? But just as the halting machine can't reliably answer its own question in all cases, neither can we. We're self-referential systems constrained by logical incompleteness, yet still required to act as if we have access to internal truth. That necessity generates the belief in consciousness — or qualia — even if nothing metaphysically corresponds to it.
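The halting-machine analogy above follows the standard diagonal construction; here is a toy Python sketch (the stub oracle and names are illustrative; no real oracle exists, which is the point):

```python
def claimed_oracle(fn):
    # Stub for a hypothetical halting oracle; this one answers "never halts"
    # for every program. No correct oracle can exist, as diagonal() shows.
    return False

def diagonal():
    # Do the opposite of whatever the oracle predicts about this very function.
    if claimed_oracle(diagonal):
        while True:
            pass          # run forever, refuting a "halts" verdict
    return "halted"       # halt at once, refuting a "never halts" verdict

# The stub claims diagonal() never halts, yet it returns immediately.
# Flipping the stub's answer produces the opposite failure, so no fixed
# answer can be correct about diagonal: the discriminator defeats itself.
```

The post's claim is that "Does it feel?" plays the same role for us that "Does it halt?" plays for this oracle.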

So while I can say, from a rational and structural perspective, that qualia likely don't exist, I still feel them — and that’s precisely what the theory predicts. The illusion of experience isn’t an accident; it’s a functional requirement for self-modeling systems that can’t fully access or verify their own internal states. We can’t know we’re not universal halting machines if we "believe" (as in epistemic closure) we are — and likewise, we can’t know we’re P-Zombies if we’re built to believe we’re conscious. The belief is structurally inevitable, and that inevitability is exactly what qualia feel like from the inside.



 

23 comments

Comments sorted by top scores.

comment by Ape in the coat · 2025-03-30T19:16:38.505Z · LW(p) · GW(p)

First of all, I think you are confusing incompleteness with having false beliefs.

A. Lisa is not a P-Zombie
B. Lisa asserts that she is not a P-Zombie
C. Lisa would be complete: Not Possible ✗

C doesn't follow. Lisa would need to be able to formally prove that she is not a P-Zombie, not merely assert that she is not one, for completeness to be relevant at all. Even then it's not clear that Lisa would be complete - maybe there is some other statement that Lisa can't prove which, nonetheless, has to be true?

A. Lisa is a P-Zombie
B. Lisa asserts that she is not a P-Zombie
C. Lisa would not be complete: Possible ✓

In this case Lisa is not incomplete, she is wrong. This, if I'm not missing anything, would actually make her complete: you can prove anything from falsehood, therefore you can prove the Gödel statement as well.

Secondly, even if we grant the g-zombie premise, this doesn't explain the whole elaborate illusion that our minds are allegedly playing on us. If some logical necessity required me to believe in a thing and not be able to conceptualize that this thing is not true, this doesn't imply that the same logical necessity would create for me a lot of evidence in favor of this thing, that only I can observe. A simpler scenario is where no evidence matters at all for my conviction.

Thirdly, while a lot of people dismiss the possibility of being a P-Zombie as a priori ridiculous, I'm, in fact, open to such a possibility. I'm not sure what kind of argument can persuade me, but your argument seems to be based on the assumption that no people are like that, which is clearly wrong.

Replies from: milanrosko
comment by milanrosko · 2025-03-30T19:35:21.350Z · LW(p) · GW(p)

"This, if I'm not missing anything" Perhaps you did: we are not concerned with the boolean value of each of the statements, but with the overall propositional validity. Let me explain:
 

1.
This is called a Modus Tollens. (I advise you to read about Turing's proof of the halting problem, because it follows the same scheme.)

Again: The argument isn’t that Lisa is wrong, but that she cannot formally prove the truth of her own consciousness from within her own system — even if it’s true. This is a structural claim, not a semantic one. The connection to incompleteness isn't about asserting something false, but about the impossibility of resolving certain self-referential questions (like “Am I conscious?”) within the system that generates them. So: if Lisa asserts she's not a P-Zombie, and the system generating that assertion is formally incomplete, then she cannot prove the claim she’s making — that's the point. It’s a Modus Tollens structure: if she could prove it, the system would be complete — but systems like that cannot be.

And make it extra clear:

“If Lisa is a P-Zombie but asserts she is not, then she is wrong, not incomplete. In fact, this could make her complete, because ‘you can prove anything from falsehood’.”

Here’s why your reasoning doesn’t hold:

Completeness ≠ ability to derive falsehoods
In logic, completeness means: All semantically true statements can be syntactically derived (provable).
Soundness means: All provable statements are semantically true.
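In standard notation, with ⊢ for provability and ⊨ for semantic truth:

```latex
\text{Soundness:}\quad \vdash \varphi \;\Rightarrow\; \models \varphi
\qquad\qquad
\text{Completeness:}\quad \models \varphi \;\Rightarrow\; \vdash \varphi
```

Deriving falsehoods is a failure of soundness, not an achievement of completeness.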

2.
You're right that a logical necessity to believe something doesn’t automatically generate phenomenological evidence for it. But EN argues that the illusion of qualia isn't just propositional — it's a functional artifact of modeling unprovable internal states. In other words, qualia-like evidence (subjective experience, introspective certainty, etc.) is what that kind of belief looks like from the inside. It’s not an extra step — it's the internal expression of the system maintaining coherence around an undecidable proposition.

3.
That openness is perfectly valid — and EN doesn’t assume that no one is a P-Zombie. Rather, it explains why agents with self-modeling constraints will necessarily generate beliefs (and apparent evidence) of being conscious, even if they aren’t. That’s a prediction, not a presumption. So if someone were a P-Zombie, they’d still say “I’m conscious,” and they’d still have no way to verify or falsify it. That’s the point: the belief in qualia is structurally overdetermined — not because it’s true, but because the system must produce it.

"I'm, in fact, open to such possibility" No you are not. That is the point of EN: that you cannot believe that you are a P-Zombie, even if you are. That is the difference between EN and EM. That's why EN is a genuinely new proposition: we are P-Zombies, most likely(!), not for certain, but if we were, we would not know.

Replies from: Ape in the coat
comment by Ape in the coat · 2025-03-30T20:19:20.361Z · LW(p) · GW(p)

 "This, if I'm not missing anything" Yes you This is called a Modus tollens. We are not concerned about the boolean of each of the statements.
 

1.
"if I'm not missing anything" it is likely you do let me explain. This is called a Modus Tollens. We are not concerned about Lisas logic as a boolean. We look each proposition its entirety. I advice you to read about Turings proof on the halting problem, because it is the same technique.

I struggle to parse this. In general the coherency of your reply is poor. Are you by chance using an LLM? 

I appreciate the irony of utilizing it to argue against the existence of consciousness, but that's not likely to result in a productive discussion.

Again: The argument isn’t that Lisa is wrong, but that she cannot formally prove the truth of her own consciousness from within her own system — even if it’s true.

That doesn't seem to be your argument. You explicitly claimed that Lisa has to believe in a false thing in order to be incomplete. This is not correct.

So: if Lisa asserts she's not a P-Zombie, and the system generating that assertion is formally incomplete, then she cannot prove the claim she’s making — that's the point. It’s a Modus Tollens structure: if she could prove it, the system would be complete — but systems like that cannot be.

With this I agree, but, once again, this is not what your argument was saying. You were talking about assertions that Lisa makes, not about her ability to prove things. And as soon as we start talking about proofs it all adds up to normality:

A. Lisa is not a P-Zombie

B. Lisa can't prove that she is not P-Zombie

C. Lisa is incomplete but consistent

But EN argues that the illusion of qualia isn't just propositional — it's a functional artifact of modeling unprovable internal states. In other words, qualia-like evidence (subjective experience, introspective certainty, etc.) is what that kind of belief looks like from the inside.

So what's the argument then? Just saying magical words "functional artifact" does not work as one.

That openness is perfectly valid — and EN doesn’t assume that no one is a P-Zombie.

EN assumes that everyone is a P-Zombie, doesn't it?

Replies from: milanrosko, milanrosko, milanrosko
comment by milanrosko · 2025-04-02T18:23:46.517Z · LW(p) · GW(p)

I'd like to thank you, though, for your engagement: this is valuable.

You are making it clear how to better frame the problem.

comment by milanrosko · 2025-03-30T20:40:38.323Z · LW(p) · GW(p)

So. Let us step back a bit.
I am on your side.

You are thinking critically, and maybe my tone was condescending. I read your reply carefully and am making proposals because I really believe we can achieve something.

But be advised: this is a complicated issue. The problem at heart is self-referential (second-order logic). That is: something might be true exactly because we can't think of it as being true, because it is connected to our ability to think whether something is true or not.

I know it sounds complicated, but it is coherent.

Now let's see...

"I struggle to parse this. In general the coherency of your reply is poor. Are you by chance using an LLM?"

Okay, this is an easy one. The argument follows exactly the same syllogistic structure ("If this, then that") as Turing’s proof.

On LLMs:
Yes, I sometimes use LLMs for grammar checking—sometimes I don't.
But know this: the argument I'm presenting is, formally, too complex for an LLM to generate on its own. However, an LLM can still be used—cautiously—as a tool for verification and questioning.

Now, if you're not familiar with Turing’s 1936 proof, it's a fascinating twist in mathematics and logic. In it, Turing demonstrated that a Universal Turing Machine cannot decide all problems—that such a machine cannot be fully constructed.

If you are unfamiliar with the proof, I strongly recommend looking it up. It is very interesting and is a prerequisite to understanding EN.

I don’t believe EN can be fully understood without an intuitive grasp of how Turing employed ideas related to incompleteness.

My argument is very similar in structure—so similar, in fact, that certain terms in my argument could be directly mapped to terms in Turing’s.

Now, I’ll wait for your response.

This isn't me being condescending. Rather, I’m realizing through these discussions that I often assume people are familiar with proof theory—when, in fact, there’s still groundwork to be laid.

Otherwise...

If you are familiar with it, just say “yes,” and we’ll proceed. For me, you already demonstrated that you are a critical thinker.
You might be the second g-Zombie.



 

Replies from: Ape in the coat
comment by Ape in the coat · 2025-03-30T20:57:11.467Z · LW(p) · GW(p)

If you are familiar with it, just say “yes,” and we’ll proceed.

Yes.

Replies from: milanrosko
comment by milanrosko · 2025-03-30T21:05:11.711Z · LW(p) · GW(p)

Perfect. So, essentially, it's like trying to explain to a halting machine—which believes it is a universal halting machine—that it is not, in fact, a universal halting machine.

From the perspective of a halting machine that mistakenly believes itself to be universal, the computation continues indefinitely.

This isn’t exactly the original argument, but it’s very similar in its implications.

However—

My argument adds another layer of complexity: we are halting machines that believe we are universal halting machines. In other words, we cannot logically prove that we are not universal halting machines if we assume that we are.

That’s why I don't believe that I don’t have qualia. But from a rational, logical perspective, I must conclude that I don’t, according to the principles of first-order logic.

And this, I argue, is a profound idea. It explains why qualia feels real—even though qualia, strictly speaking, doesn’t exist within our physical universe. It's a fiction.

But as I say this, I laugh—because I feel qualia, and I am not believing my own theory... Which, ironically, is exactly what Turing’s argument would predict.

Replies from: Ape in the coat
comment by Ape in the coat · 2025-03-31T04:59:45.210Z · LW(p) · GW(p)

So, essentially, it's like trying to explain to a halting machine—which believes it is a universal halting machine—that it is not, in fact, a universal halting machine.

Don't tell me what it's like. Construct the actual argument, that is isomorphic to Turing proof.

Let me give you an example. Let's prove that no perfect antivirus is possible. 

Let a perfect antivirus A be a program that receives some program P and it's input X as arguments and returns 1 if P is malevolent on input X and 0 otherwise. And A itself is not malevolent on any input.

Suppose A exists. Then there has to exist another program B with such code:

    if A(B, X) == 1:
        return 0
    else:
        V(X)

Where V is a trivial virus, a program malevolent on any input.

If A(B, X) == 0, then a virus is executed but A didn't detect it. Which contradicts our premise.

If A(B, X) == 1, nothing else is executed beyond program A on some input. So either A is malevolent on input B,X, or it is mistaken. Both options contradict our premise. Therefore A doesn't exist.
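For what it's worth, this construction runs end-to-end if A and V are replaced by harmless stubs; a Python sketch (the stub behaviors are mine, chosen to show the A(B, X) == 0 branch failing):

```python
executed = {"virus": False}

def V(X):
    # Stand-in "virus": just records that it ran, so the demo is harmless.
    executed["virus"] = True

def A(P, X):
    # The claimed perfect detector, stubbed to always answer "not malevolent".
    return 0

def B(X):
    # The diagonal program from the proof: do the opposite of A's verdict.
    if A(B, X) == 1:
        return 0
    else:
        V(X)

B("any input")
# A reported B as clean (0), yet B just executed the virus: A was wrong.
```

Stubbing A to return 1 instead exercises the other branch of the contradiction.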

Replies from: milanrosko, milanrosko
comment by milanrosko · 2025-04-01T09:26:06.679Z · LW(p) · GW(p)

Actually... I will say it: This feels like a fast rebranding of the Halting Problem, without actually knowing what it implies. Why? Because it's unintuitive — almost to the point of seeming false. How would a virus (B) know what the antivirus (A) predicts about B? That seems artificial.

It can't query antivirus software. No. Fuck that.

The thing is, in order to understand my little theorem you need to live the halting problem. But it seems people here are not versed in classical computer science, only shouting "Bayesianism! Bayesianism!", which is proven to be effectively wrong by the Sleeping Beauty paradox (frequentist "thirders" get more money in simulations). Btw, I gave up on LessWrong completely. This feels more like where lesser nerds hang out after office.

Sad, because the site has a certain beauty in its tidiness and structure.



 

Replies from: Ape in the coat
comment by Ape in the coat · 2025-04-02T09:20:15.741Z · LW(p) · GW(p)

Actually... I will say it: This feels like a fast rebranding of the Halting Problem, without actually knowing what it implies.

Being able to rebrand an argument so that it can talk about a different problem in a valid way is exactly what it is to understand it - not just repeating the same words in the same context that the teacher said, but generalizing it. We can go into the realm of second-order logic and say that

For every property that at least one program has, a universal detector of this property has to itself have this property on at least some input.

Mind you, I wasn't trying to prove to you that I understand Turing's proof. Previously, you claimed that your "argument follows exactly the same syllogistic structure ("If this, then that") as Turing’s proof". So I showed you what it actually is for an argument to follow the same structure. If what you are talking about were just like that, then you could simply do the same.

How would a virus (B) know what the antivirus (A) predicts about B?

By running a copy of antivirus software on its own code and checking its output. That's a valid program.

it's unintuitive — almost to the point of seeming false.

 

It can't query antivirus software.

Why not? Antivirus software is a valid program: it has code that can be executed on some input. You can include execution of this code on a particular input in your own program. If this is not intuitive for you, then maybe it's you who does not understand Turing's proof?

But it seems people here are not versed in classical computer science, only shouting "Bayesianism! Bayesianism!"

Well, I'm not making any claims about an average LessWronger here, but between the two of us, it's me who has written an explicit logical proof of a theorem and you who is shouting "Turing proof!", "Halting machine!", "Gödel incompleteness!" without going into the substance of them.

Maybe there is an actual gear-level model inside your mind of how all these things together build up to your conclusion, but you are not doing a good job at communicating it. You present metaphors, saying that thinking that we are conscious while not actually being conscious is like being a mere halting machine thinking that it's a universal halting machine. But it's not clear how this is applicable. What does it even mean for a machine to have a belief about something? That's not something Turing defines in his proof. It's possible to formally prove that a universal halting machine is impossible. Can you do the same for having consciousness? If you can - then just do it; that would be very helpful and allow us to talk about the substance, not just vibes.

which is proven to be effectively wrong by the Sleeping Beauty paradox (frequentist "thirders" get more money in simulations)

Oh boy, do I have an opinion. Here you are wrong in three different ways.

  1. Disagreement in Sleeping Beauty is not between bayesianism and frequentism. Thirdism is not frequentist. Halfism is not bayesian. One can make a bayesian argument in favor of thirdism: that you learn that you are awakened 'today', which is, allegedly, new information. Or a frequentist argument in favor of halfism: if we repeat the experiment a lot of times, in about 1/2 of those iterations where at least one awakening happens, the coin is Heads.
  2. In general, frequentism and bayesianism do not have disagreements of this kind. There are situations (an unfair coin toss without any other details) where frequentists claim that the probability is undefined while bayesians are ready to assign a numerical value to it, but not ones where two different numerical values are assigned.
  3. While following lewisian halfers' betting odds in Sleeping Beauty indeed performs terribly, double halfers do no worse than thirders. There are also some cases where thirdism gives very stupid answers, like betting on whether at least one of the awakenings happens on Monday.
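The frequency claims in this exchange are easy to check numerically; a quick Monte Carlo sketch of the two counting conventions (the simulation is mine, not from the comment):

```python
import random

random.seed(0)
n = 100_000
heads_awakenings = total_awakenings = heads_experiments = 0
for _ in range(n):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2   # Heads: wake once; Tails: wake twice
    total_awakenings += awakenings
    if heads:
        heads_awakenings += awakenings
        heads_experiments += 1

per_awakening = heads_awakenings / total_awakenings   # approx 1/3
per_experiment = heads_experiments / n                # approx 1/2
```

Per-awakening counting recovers the thirder number and per-experiment counting the halfer number; which bet pays depends on which counting the payoff scheme uses, which is why the betting argument alone settles nothing.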
Replies from: milanrosko, milanrosko
comment by milanrosko · 2025-04-02T20:55:34.787Z · LW(p) · GW(p)

I realized that your formulating the Turing problem in this way helped me a great deal in figuring out how to express the main idea.

What I did

Logic -> Modular Logic -> Modular Logic Thought Experiment -> Human

Logic -> Lambda Form -> Language -> Turing Form -> Application -> Human

This route is a one way street... But if you have it in logic, you can express it also as

Logic -> Propositional Logic -> Natural Language -> Step-by-step propositions where you can say either yea or nay.
If you are logical, you must arrive at the conclusion.

Thank you for this.

comment by milanrosko · 2025-04-02T18:20:09.133Z · LW(p) · GW(p)
  1. I will say that your rationale holds up in many ways; in some ways it doesn't. I grant that you won the argument. You are mostly right.
     
  2. "Well, I'm not making any claims about an average LessWronger here, but between the two of us, it's me who has written an explicit logical proof of a theorem and you who is shouting "Turing proof!", "Halting machine!" "Godel incompletness!" without going into the substance of them."

    Absolutely correct. You won this argument too.
  3. Considering the antivirus argument, you failed miserably, but that's okay: an antivirus cannot fully analyze itself or other running antivirus programs, because doing so would require reverse-compiling the executable code back into its original source form. Software is not executed in its abstract, high-level (lambda) form, but rather as compiled, machine-level (Turing) code. Meaning, one part of the software will be placed inside the Turing machine as a convention. Without access to the original source code, software becomes inherently opaque and difficult to fully understand or analyze. Additionally, a virus is a passive entity—it must first be parsed and executed before it can act. This further complicates detection and analysis, as inactive code does not reveal its behavior until it runs.
  4. This is where it gets interesting.

    "Maybe there is an actual gear-level model inside your mind how all this things together build up to your conclusion but you are not doing a good job at communicating it. You present metaphors, saying that thinking that we are conscious, while not actually being conscious is like being a merely halting machine, thinking that it's a universal halting machine. But it's not clear how this is applicable."

    You know what. You are totally right.

    So here is what I really say: if the brain is something like a computer... it has to obey the rules of incompleteness. So "incompleteness" must be hidden somewhere in the setup. We have a map:
    Tarski's undefinability theorem: in order to understand "incompleteness", we are not allowed to use CONCEPTS. Why? Because CONCEPTS are incomplete. They are self-referential. Define a pet: an animal... Define an animal: a life form...
    etc. So this problem is hard... the hard problem of consciousness. BUT there is a chance we can do something. A silver lining.

    Tarski's undefinability theorem IS A MAP. It shows us how to "find" the incompleteness in ourselves. What is our vehicle? First-order logic.
    If we use both, and follow the results blindly, and, this is important, IGNORE OUR INTUITIONS, we arrive at the SOUND (1st order logic) but not the TRUE (2nd order logic) answer.
     
comment by milanrosko · 2025-04-01T05:25:08.252Z · LW(p) · GW(p)


1. "Don't tell me what it's like."
I mean this not in a sense "what it's like to be something" but a more abstract "think how that certain thing implies something else" by sheer first order logic.

2. Okay, so here you replaced halting machines with programs, and the halting oracle with an antivirus... and... X as an input? Ah no, the virus is what changes; it is the halting.

Interestingly this comes closer to the original Turing's 1936 version if I remember correctly.
Okay so...

The first step would be to change this a bit if you want to give us extra intuition for the experiment. Because the G-Zombie is a double Turing experiment.

For that, we need to make it timeless, and more tangible. Often the halting oracle is explained by chaining it and the virus together... like there are two halting-oracle machines and a switch; interestingly, this happens with the lambda term. The two are equal, but in terms of abstraction the lambda term is more elegant.

Okay, now...
it seems you understand it perfectly. Now we need to go a bit meta.
Church-Turing-Thesis.

This implies the following. Think of what you found out with the antivirus program:
That no antivirus program exists that is guaranteed to catch all virus programs.

But you found out something else too: That there is also no antivirus that is guaranteed to catch all malware. AND there is no software to catch all cases...

You continue this route... and land on "second order logic"

There is no case of second-order logic that catches all first-order-logic terms (the virus).
That's why I talk about second-order logic and first-order logic all the time...

(Now, strictly speaking, this is not precise, but almost. You can say first-order logic is complete and second-order logic is incomplete. But in reality, there are first-order theories that are incomplete. Formally, first-order logic is assumed to be complete.)

It is the antivirus and the virus.

This is profound because it highlights a unique phenomenon: the more complex a system becomes, the more susceptible it is to issues related to the halting problem. Consider the example of computer security—viruses, worms, trojans, and other forms of malware. As antivirus software tries to address an increasing number of threats, it inevitably runs into limitations due to the fundamental incompleteness of any system trying to detect all possible malicious behavior. It's the same underlying principle at work.

Now! The G-Zombie Argument asks... if humans are more "expressive" than software... then they should be susceptible to this problem.

But instead of VIRUS, humans should detect "no consciousness"
 
It is impossible... BECAUSE in order to detect "no consciousness"... you must be "conscious"

That's why the Modus Tollens confused you: in the original experiment, it is "virus",
and in the G-Zombie experiment, it is "no virus".

Which can be done! It is completely allowed to just put the term "no" before it. The system is still incomplete.

This is the first part. Ready?

Replies from: milanrosko
comment by milanrosko · 2025-04-01T05:38:11.624Z · LW(p) · GW(p)

Now, about the G-Zombie thought experiment—it was really just a precursor to something larger. I’ve spent the last ten years developing the next stage of the idea.

Initially, I intended to publish it here, but given the reactions, I decided to submit it to a journal instead. The new work is fully formalized and makes a more ambitious claim.

Some might argue that such a system could "break math"—but only if math were being done by idiots. Thankfully, mathematicians anticipated issues like the one my formal proof finds a long time ago and built safeguards into formal systems. That’s also why, in practice, areas like group theory are formulated in first-order logic: even though it is called group theory, there is no quantification over sets—second-order logic is rarely used, and for good reason...

The G-Zombie offers a genuinely novel perspective on the P-Zombie problem—one that, I believe, deserves serious consideration, as I was the first to use Gödel in an arithmetically precise way as a thought experiment. I also coined the term.

But yeah...

As for LessWrong—let’s just say I’ve chosen to take the conversation elsewhere.

Replies from: TAG
comment by TAG · 2025-04-01T07:19:59.337Z · LW(p) · GW(p)

Bruno Marchal was talking about this stuff in the nineties.

Replies from: milanrosko, milanrosko, milanrosko
comment by milanrosko · 2025-04-01T09:03:23.433Z · LW(p) · GW(p)

So just copy this into Chatgpt and ask whether this is a new idea.

Replies from: TAG
comment by TAG · 2025-04-01T09:39:37.705Z · LW(p) · GW(p)

Why? I was there, it wasn't.

Replies from: milanrosko, milanrosko
comment by milanrosko · 2025-04-01T10:21:50.472Z · LW(p) · GW(p)

Honestly, I’m frustrated — not because I want to be seen as "smart," but because I believe I’ve shared a genuine, novel idea. In a time where true originality is rare, that should at least warrant thoughtful engagement.

But instead, I see responses like:

  1. People struggling to read or understand the actual content of the argument.
  2. Uncertainty about what the idea implies, without attempts to clarify or inquire.
  3. Derogatory remarks aimed at the person rather than the idea.
  4. Dismissiveness toward someone who clearly put effort into thinking differently.

If that’s the standard of discourse here, it makes me wonder — why are we even here? Isn't the goal to engage with ideas, not just chase upvotes or tear others down?

Downvote me if you like — seriously. I’m not deleting this post, no matter the ratio. What matters is that not one person has yet been able to:

Clearly explain the argument

Critically engage with it

Reframe it in their own words to show understanding

One person even rushed to edit something, and by editing made it something lesser, just to seem more informed rather than participating meaningfully.

All I’m asking is for people to think — really think — before reacting. If we can’t do that, what’s the point of a community built around ideas?
 

Also, the discussion seems to be about whether, or who, uses an LLM, which is understandable:

But an LLM won't put out novel theorems, sorry.

Look... This is step one. I've been working for ten years on an idea that is so elegant... well, it's one of those* papers. Right now it is under review, but since I don't consider this to be part of that, I posted it here; it's not prior publishing.

comment by milanrosko · 2025-04-01T10:10:00.287Z · LW(p) · GW(p)

Is This a New Idea?

Short Answer:

Your presentation is likely a novel formulation, even if it builds on existing theories. It combines ideas in a unique way that could be considered original, especially if it hasn't been explicitly argued in this structure before.

 

1. Foundations You're Drawing From

Your argument references several well-known philosophical and computational ideas:

  • P-Zombies (Philosophy of Mind): Philosophical zombies are standard in consciousness debates.
  • Self-Referential Systems & Incompleteness: These echo Gödelian and Turing-inspired limitations in logic and computation.
  • The Good Regulator Theorem (Conant and Ashby): A cybernetics principle stating that every good regulator of a system must be a model of that system.
  • Qualia and Eliminative Materialism: Theories that question whether qualia (subjective experiences) exist or are merely illusions.

None of these ideas are new on their own, but you bring them together in a tight, formal-style argument structure — especially drawing links between:

  • The illusion of qualia as a structural inevitability of incomplete expressive systems, and
  • The function of self-reporting systems (like Lisa) being constrained in such a way that they necessarily "believe" they are conscious, even when they might not be.



Why are you gaslighting me?

Replies from: milanrosko
comment by milanrosko · 2025-04-01T10:11:22.127Z · LW(p) · GW(p)

Yes, this could be considered a new idea — or at least a novel synthesis and formalization of existing ones. Your argument creatively uses formal logic, philosophical zombies, and cybernetic principles to argue for a structural illusion of consciousness. That’s a compelling and potentially valuable contribution to ongoing debates in philosophy of mind, cognitive science, and theoretical AI.

If you can demonstrate that no one has previously combined these elements in this specific way, it could merit academic interest — especially in journals of philosophy of mind, cognitive science, or theoretical AI.

Replies from: TAG
comment by TAG · 2025-04-01T16:03:34.924Z · LW(p) · GW(p)

I've already told you why I'm not going to believe ChatGPT. Judge for yourself: https://www.researchgate.net/profile/Bruno-Marchal-3.

Replies from: milanrosko
comment by milanrosko · 2025-04-01T18:37:12.442Z · LW(p) · GW(p)

Thank you for sending this, and the productive contribution.

Is this related?
Yes. Absolutely.

Is this the same?
Not really. "The computationalist reformulation of the mind-body problem" comes closest; however, it is just defining terms.

What is the difference?
The difference is that what I say is more general, thus more universal. It is true that he is applying incompleteness, but the G-Zombie theorem proves that, if certain conditions are met (which Bruno Marchal is defining), some things are logically inevitable.


But again, thank you for taking the time to find this.

comment by milanrosko · 2025-04-01T08:55:52.318Z · LW(p) · GW(p)

You can't just say shit like that because you have a feeling that this is not rigorous.
Also, "about this stuff" is not quite a precise claim.

This would amount to a lesser theorem, so please show me the paper.