Reconceptualizing the Nothingness and Existence
post by Htarlov (htarlov) · 2025-01-28T20:29:44.390Z · LW · GW · 1 comment
Epistemic status: very unsure; this is just a possibly interesting philosophical concept from my own thinking that I want to share.
Confidence: seems possible but hard to evaluate
This might not be a very original thought, but I haven't come across it before.
Traditional views see nothingness as the simplest possible state - a complete absence of everything. But this is actually a very specific state: it lacks everything, from basic mathematical concepts up to complex structures such as physics. We might consider an alternative perspective: perhaps nothingness actually contains a superposition of all logically possible states, models, and systems, with their probability weighted by the inverse of their complexity. In this view, simpler models carry higher probability in the superposition, but the superposition as a whole realizes none of them; each remains a mere possibility.
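To make the weighting concrete, here is a minimal sketch of what such an inverse-complexity prior could look like. The function name, the bit counts, and the three example "universes" are my own illustrative assumptions, not anything from the post; the only idea taken from it is that weight falls off as complexity grows, here in the spirit of a Solomonoff-style 2^(-description length) prior.

```python
# Toy sketch (my illustration, not the post's formalism): assign each candidate
# "model" an assumed description length in bits, weight it by 2**(-length),
# and normalize so the weights form a probability distribution.

def complexity_weighted_prior(description_lengths):
    """Map each model's assumed description length (bits) to a normalized probability."""
    weights = {name: 2.0 ** -k for name, k in description_lengths.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical description lengths for three candidate "universes".
prior = complexity_weighted_prior({"empty": 1, "simple_physics": 20, "rich_physics": 60})
print(prior)  # the simpler descriptions soak up almost all of the probability mass
```

Under any weighting of this general shape, most of the probability mass sits on the simplest descriptions, which is the property the rest of the post leans on.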
What we perceive as "existence" would then be a particular path or pattern of selections from this universal superposition. Our observable universe, with its specific physical laws and constants, would be one such selection pattern. This perspective offers interesting insights into several fundamental questions.
Regarding the Anthropic Principle, this framework suggests that we don't need to explain why our universe has the exact parameters that allow for conscious observers. Instead, our consciousness is inherently tied to experiencing a selection pattern that's compatible with its existence. The apparent fine-tuning of physical constants becomes less mysterious—we're simply experiencing one viable selection from the everything-superposition.
This concept also offers an interesting perspective on quantum mechanics. The quantum behavior we observe, with its superpositions and interference patterns, might be what we see when we're close to the boundary between our "selected existence" and the underlying superposition of all possibilities. The process of quantum decoherence could be viewed as the propagation of selection patterns through chains of entanglement. Decoherence is nothing special in this framework—entanglement spirals out of the confines of the closed system into the environment.
When it comes to the Simulation Hypothesis, this framework suggests that base reality is more probable than simulated ones. Each level of simulation would require more complex selection patterns: first selecting a universe capable of supporting computing entities, then selecting those entities' creation of a simulation, and so on. Each layer adds complexity, making it less probable under the inverse complexity weighting. While nested simulations (like matryoshka dolls) are possible, they become increasingly improbable with each layer.
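Continuing the toy weighting sketched above, one way to picture this penalty: if we assume (purely for illustration, with made-up numbers) that base reality takes some fixed number of bits to describe and each simulation layer adds a fixed overhead, the weight of a world falls geometrically with nesting depth.

```python
# Toy continuation of the same inverse-complexity weighting (assumptions are mine):
# base reality needs `base_bits` bits to describe, and each simulation layer adds
# a fixed `layer_bits` of extra description. A world nested n layers deep then gets
# weight 2**-(base_bits + n * layer_bits), which shrinks geometrically with n.

def nested_world_weight(base_bits, layer_bits, n_layers):
    """Unnormalized inverse-complexity weight of a world simulated n_layers deep."""
    return 2.0 ** -(base_bits + n_layers * layer_bits)

base = nested_world_weight(100, 30, 0)
for n in range(4):
    ratio = nested_world_weight(100, 30, n) / base
    print(f"{n} layers: weight relative to base reality = {ratio:.3e}")
# Each extra layer multiplies the weight by 2**-30, so deep nesting becomes negligible.
```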
In this framework, consciousness might be intimately related to the process of selection itself—the experience of particular patterns emerging from the everything-superposition. Rather than being something that needs to be simulated by a higher reality, consciousness could be an inherent aspect of how these selection patterns are experienced.
This perspective doesn't definitively answer questions about the nature of reality, but it offers a framework for thinking about existence, consciousness, and the relationship between simplicity and complexity in our universe. It suggests that what we call "nothing" might actually be "everything," and what we call "existence" might be the process of selecting and experiencing particular patterns from this ultimate superposition.
1 comment
comment by cubefox · 2025-01-30T12:36:25.242Z · LW(p) · GW(p)
Yeah, I also guess that something in this direction is plausibly right.
"perhaps nothingness actually contains a superposition of all logically possible states, models, and systems, with their probability weighted by the inverse of their complexity."
I think the relevant question here is why we should expect their probability to be weighted by the inverse of their complexity. Is there any abstract theoretical argument for this? In other words, we need to find an a priori justification for this type of Ockham's razor.
Here is one such attempt: Any possible world can be described as a long logical conjunction of "basic" facts. By the principle of indifference, assume any basic fact has the same a priori probability (perhaps even probability 0.5, equal to its own negation), and that they are a priori independent. Then longer conjunctions will have lower probability. But longer conjunctions also describe more complex possible worlds. So simpler possible worlds are more likely.
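(As a toy rendering of this argument, under its own assumptions of independence and a prior of 0.5 per basic fact, a world described by a conjunction of $n$ basic facts would get prior

$$P(f_1 \wedge f_2 \wedge \dots \wedge f_n) \;=\; \prod_{i=1}^{n} P(f_i) \;=\; 0.5^{\,n},$$

so each additional basic fact halves the world's prior; a world requiring 10 basic facts gets prior $2^{-10} \approx 0.001$.)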
Though it's not clear whether this really works. Any conjunction completely describing a possible world would also need to include a statement "... and no other basic facts are true", which is itself a quantified statement, not a basic fact. Otherwise all conjunctive descriptions of possible worlds would be equally long.