Quantum Darwinism, social constructs, and the scientific method
post by pchvykov · 2024-02-07T07:04:48.042Z · 12 comments
[Cross-posted from my blog https://www.pchvykov.com/blog]
TL;DR: All three of these are centered on the idea of consensus reality – that reproducibility, or redundancy of consistent records, is what makes something "objectively true." Slight deviations from such consistency are what lead to non-classical effects in the case of Quantum Darwinism, create opinion dynamics, conflicts, and evolution of thought in the case of society, but are generally avoided in the case of the scientific method. In this essay I unpack these parallels, suggest the possibility of a unifying mathematical framework, and consider the consequences of admitting some non-reproducibility as a generalization of the scientific method.
For a few years now I’ve been thinking about parallels between three seemingly very different topics: Quantum Darwinism (a recently popular interpretation of QM = Quantum Mechanics), social constructs (like money, culture - interpersonal realities we create and live by), and the scientific method. I have not yet found a way to make these connections sufficiently rigorous to develop a proper theory, and so in this essay I want to work towards that by clarifying these ideas and their connections, and pointing out possible implications.
Quantum Darwinism
We begin with a rough overview of Quantum Darwinism (QD) [Zurek09, wiki]. The core issue in Quantum Foundations research, as I see it, is to understand how usual (unitary) QM dynamics of a universe wavefunction can ultimately give rise to the complexity of the observed world around us (see Carroll18, also Bohm's “implicate order”). If we consider the observer to be a non-special part of this universe wavefunction, then we are led to the many-worlds interpretation, where every possible scenario plays out in its own "branch" of the universe (at least mathematically). But how can we get branches, if all we have is a universe wavefunction, which is just a vector in some high-dimensional space, spinning around under the unitary action of the universe Hamiltonian (by the usual QM dynamics)? While I have not yet found an explanation that I find conclusively satisfying, quantum decoherence (and einselection, see Zurek01, wiki) proposes an answer, which QD further develops.
If we split the universe into our “quantum system” and “environment” (choosing this split is an issue we will skip here, see Carroll18 again), then the interaction between the two transfers information about the initial system state to the environment. So if Schrödinger’s cat starts in a superposition of dead and alive, then photons hitting it once the box opens will now carry information about its state through the universe – whether a human is there to observe them or not. Moreover, QD argues, new photons will keep hitting the cat, thus taking more and more copies of the information about its state across the universe. This way we can say that records of the cat’s state proliferate in the environment, with a high redundancy, thus allowing many different observers to independently extract (measure) these records and come to a consensus about the cat’s state. “Objective existence – hallmark of classicality – emerges from the quantum substrate as a consequence of redundancy” [Zurek09].
The core point of QD is that only those system states that can produce many redundant records of themselves in the environment will be the ones we observe as objective reality. So for the cat, a superposition of dead and alive will never be "objective" since it is not stable under interactions with photons – and so cannot be copied many times. This way, we get a sort of “Darwinian selection” of states, such that only the “fittest” (most stable) can produce many records in the environment, and so become an observed reality in some branch of the many-worlds wavefunction.
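To make the record-proliferation picture concrete, here is a minimal toy sketch (my own construction in the spirit of standard QD toy models, not Zurek's full treatment): a system qubit interacts with a few environment qubits via CNOT gates, which copy the pointer basis into the environment but cannot copy a superposition.

```python
import numpy as np

def cnot(state, control, target, n):
    """Apply CNOT(control -> target) to an n-qubit state vector."""
    psi = np.moveaxis(state.reshape([2] * n), [control, target], [0, 1]).copy()
    psi[1] = psi[1][::-1].copy()   # flip the target axis where control = 1
    return np.moveaxis(psi, [0, 1], [control, target]).reshape(-1)

n = 4                                              # 1 system qubit + 3 environment qubits
psi = np.kron(np.array([1.0, 1.0]) / np.sqrt(2),   # system starts in a superposition
              np.eye(2 ** (n - 1))[0])             # environment starts in |000>
for t in range(1, n):                              # each env qubit "measures" the system
    psi = cnot(psi, 0, t, n)

# Result: (|0000> + |1111>)/sqrt(2) -- each environment qubit now holds a
# redundant record of the system's pointer state:
p = (psi ** 2).reshape([2] * n)
for t in range(1, n):
    other = tuple(i for i in range(n) if i not in (0, t))
    print(f"joint P(system, env {t}):\n{p.sum(axis=other)}")
# Each joint distribution is diag(0.5, 0.5): perfect correlation, i.e. a record.
# The superposition itself (the relative phase) appears in no individual record --
# which is exactly the "Darwinian selection" of copyable pointer states.
```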
Social constructs
This way, QD suggests that the key criterion to have an “objective reality” is consensus among the various system records in the environment – and ultimately, among the observers measuring those records. This seems to evoke the scientific method, where reproducibility of an experimental result is the core criterion for “objective truth” – we will explore this further below. But where we most commonly encounter such “consensus reality” is in our social constructs – such as money, nations, moral doctrines, etc. This parallel is further inspired by some exciting work that has recently been done in cognitive science, identifying the ways that quantum-like math may effectively model some behavioral experiment outcomes (see Quantum Cognition; note that this makes no claim about actual quantum effects in the brain).
I find money to be the most fascinating and pertinent social construct in our daily lives. The worthless pieces of paper we pass around acquire real value when we have consensus in some group to treat them as valuable. Money gets this tremendous power, which can make the difference between life and death, simply by us agreeing to play the game – as became especially clear with the introduction of bitcoin – and can just as easily go back to worthless paper – as with hyperinflation in the face of national crises. Yet, in our day-to-day lives, it is one of the most real and robust “objective realities” – in practice, often more “real” to us than birds, stars, global warming, or happiness.
And just as with QD, money only works if all observers agree on its value. Introducing even a few players who view it differently subverts the whole system – as money becomes a not-quite-universal medium of exchange, soon eroding general trust in it. In QD, two branches of many-worlds that have inconsistent records of the system’s state (cat is dead in one, alive in the other) are prohibited by laws of QM from ever interacting (if the difference between branches is large enough). For social constructs, however, there is no law of nature that prevents two inconsistent social realities from colliding – and so we ourselves decide that they “should not” collide. We thus create the relevant laws, and then spend considerable resources to enforce them – using police, military, indoctrination, childhood conditioning, language, etc. Even on a personal level, we may maintain two inconsistent images of ourselves with our colleagues versus at parties – and work to ensure those “branches” of social reality never interact. Interestingly, by doing this we sacrifice having any “objective truth” of who we are, according to the above definition.
Continuing the analogy with QD, only those social constructs that can accurately reproduce across the minds of many observers, creating a consistent understanding across culture, can become our social reality. Note that such reproduction depends not only on the construct’s value (truth value, usefulness, etc), but also on its appeal, how easy it is to understand, how well it’s presented, propensity for miscommunication, reputation of the originator, etc [c.f., memetics, and my other post on "Values Darwinism"]. For example, we could imagine modeling ideas spreading on a social network, with some probability of mutation at each replication. Only those ideas that can “infect” a large segment of the network, while remaining in a sufficiently consistent form across this segment, would be considered to become a new social reality. This way, such ‘success’ of an idea could be undermined by a low propensity for replication just as much as by a high mutation (miscommunication) rate. Thus we would expect the winning social constructs that we ultimately live by to be simple to communicate and understand, while also being maximally engaging or emotionally triggering.
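As a toy illustration (the network model and all parameters here are my own illustrative choices, not a published result), one could simulate an idea spreading over a random network, mutating with some probability at each transmission, and track the largest faithful cluster:

```python
import random

def spread(n_agents=200, edges_per_agent=3, p_transmit=0.5, mu=0.05, seed=0):
    """Fraction of agents holding the single most common variant of the idea."""
    rng = random.Random(seed)
    neighbors = {i: set() for i in range(n_agents)}
    for i in range(n_agents):                  # crude random graph
        for _ in range(edges_per_agent):
            j = rng.randrange(n_agents)
            if j != i:
                neighbors[i].add(j); neighbors[j].add(i)
    variant, frontier, next_id = {0: 0}, [0], 1    # agent 0 originates variant 0
    while frontier:
        new_frontier = []
        for i in frontier:
            for j in neighbors[i]:
                if j in variant or rng.random() > p_transmit:
                    continue
                if rng.random() < mu:          # miscommunication: a new variant
                    variant[j], next_id = next_id, next_id + 1
                else:                          # faithful replication
                    variant[j] = variant[i]
                new_frontier.append(j)
        frontier = new_frontier
    counts = {}
    for v in variant.values():
        counts[v] = counts.get(v, 0) + 1
    return max(counts.values()) / n_agents

for mu in (0.0, 0.05, 0.3):                    # consensus shrinks as mutation grows
    print(f"mu = {mu}: consensus fraction = {spread(mu=mu):.2f}")
```

Both a low p_transmit and a high mu undermine the consensus fraction, matching the two failure modes described above.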
I find it deeply exciting that our physical reality and our social reality could both emerge according to similar principles – the former by QD, the latter by “Memetic Darwinism.” It seems to me that the core difference between the two is the propensity for collisions of inconsistent branches. In QD, such collisions follow precise laws, and can only happen when the inconsistencies are limited to microscopic systems, leading to quantum interference in a predictable, well-understood fashion. In contrast, inconsistent social realities can collide unpredictably, and on all scales, from a lie being discovered in private life, to power-struggles and censorship among global superpowers with incompatible value or belief systems. To some extent, such collisions gradually lead to finding some common ground, which then solidifies into globally accepted “objective” reality. These realities can become so deeply rooted that we take them for granted, seeing them as fundamental laws of nature, and often cannot even conceive of an alternative way (e.g., few groups now entirely reject the use of money, and we often forget that its value is just a construct, and not intrinsic to the paper – try burning a $20 bill and see what you feel).
But it is equally important to acknowledge that as far as social realities go, collisions of inconsistent branches are entirely common, and often inform much of our worries and efforts in life. We therefore cannot disregard them and confine our social research only to consensus realities. In the QD analogy, that would be equivalent to ignoring all quantum effects and assuming a classical world. It also seems related to the equilibrium assumption in economics or finance – which grossly misrepresents reality, and has recently become a hot topic in these fields. This brings us to our final section.
The scientific method
There seems to be a deep relation between consensus realities and the reproducibility criterion of the scientific method. We may even argue that “objective truth” arising from the scientific method is just a special case of such a consensus reality – an observation becomes “objective” when we all agree on it. The discussion in the last section therefore makes us wonder whether the scientific method could be generalized beyond strict consensus.
The idea is easier to introduce on a social network, where agents make observations about each other (e.g., “Alice is kind” or “Bob has an accent” or “Eve has brown hair”). Any of N agents can “measure” (observe) any other agent, with the measurement outcome recorded on the edge A -> B between them – thus giving N^2 records. In the case of observing something like hair-color, we might expect that all edges connecting to Eve (x -> E) will agree that her hair is brown. In this special case, we can compress the information on the network by labeling just the node E with “brown” rather than all the N edges leading to it. This way, we can more efficiently say that Eve objectively has brown hair, rather than saying that everyone who looked at her saw her hair as brown. Note that while the former is more efficient, the latter is more accurate. Since our brain naturally looks for efficient representations (or compressions) of reality, we tend to think in terms of objects having properties, rather than only in terms of observations having outcomes (for all agents in the network, that gives N records, rather than N^2 records). As such, it is important for us to know when such compressions are reliable – and the scientific method is just the tool to check this.
Now, if we instead have agents make N^2 observations about each other’s personality (“Alice is kind”), we do not expect such compression to typically be possible. We thus cannot have “objective” truths about agents’ personalities themselves, and must keep data on the edges rather than the nodes (“Bob thinks that Alice is kind”, "Eve thinks Alice is mean"). As such, no compression is possible, the scientific method cannot help us, and we must keep N^2 "subjective" records to accurately describe the system. I wonder if this may be the core of why the use of scientific method in psychology research has been so hard-going compared to physics.
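As a sketch of the two cases just described (all data here is illustrative), represent the records as an N×N matrix obs[a, b] – agent a's observation of agent b – and check whether each column is constant:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
hair = rng.integers(0, 3, size=N)              # a latent attribute per agent
obs_hair = np.tile(hair, (N, 1))               # every observer reports the same value
obs_personality = rng.integers(0, 2, (N, N))   # idiosyncratic edge-level judgments

def compress(obs):
    """Per-node labels where all N observers agree; None where they don't."""
    return [col[0] if np.all(col == col[0]) else None for col in obs.T]

print(compress(obs_hair))          # N labels: full compression, N^2 -> N records
print(compress(obs_personality))   # mostly None: records must stay on the edges
```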
Finally, consider the intermediate example: “Bob has an accent.” If Bob’s accent is British, then we would expect most Americans to agree with this statement, while most Brits would disagree (cf. social constructs above). Thus, while this statement will not get universal consensus, there will be large clusters in the network, in principle allowing for some compression of the N^2 observation records. This creates a curious intermediate between objective properties of nodes and subjective observations on the edges. The scientific method would, in this case, miss the opportunity for compression, merely concluding that nothing objective can be said on the matter. (Note that you could, of course, modify the query to “Bob has a British accent,” which would lead to consensus and objective properties – but such a modification will generally be hard to find, and I think may not exist at all in some cases).
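Continuing the sketch above (again with toy data), the intermediate case would look for clusters of agreeing observers rather than global consensus:

```python
import numpy as np

def cluster_compress(col):
    """Map each distinct answer to the list of observers who gave it."""
    clusters = {}
    for observer, answer in enumerate(col):
        clusters.setdefault(int(answer), []).append(observer)
    return clusters

# e.g. observers 0-2 are American (1 = "Bob has an accent"), 3-5 are British
col = np.array([1, 1, 1, 0, 0, 0])
print(cluster_compress(col))   # {1: [0, 1, 2], 0: [3, 4, 5]}: 2 cluster-level records
```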
Thesis: We thus consider whether a generalization of the scientific method may be possible, which lets go of the goal of finding N objective properties of nodes, and instead looks for any possible compressed representation of the fundamental network of N^2 measurement records. It seems to me that such a generalization may be more conducive to productive research in psychology and sociology, since fully objective properties are hard to come by in those contexts. Furthermore, if this lack of objectivity can be framed in terms of collisions of inconsistent social realities, then we have both a setup for why this happens and a tool for how to study it that gives more formal structure to subjective realities. Moreover, continuing the parallel with QD, this network approach may similarly help clarify some of the issues of quantum foundations – by dropping our attachment to an ontology of objects having objective properties in the first place (see more below).
One simple example application in economics may be to consider a barter market, with N goods, and N^2 exchange rates between them. If those N^2 rates meet some very specific (equilibrium) conditions, then we may compress the market representation to just N prices of goods, and a universal medium of exchange may be defined. Real markets, on the other hand, are never at equilibrium and this compression is never exact (since arbitrage exists). Appreciating the fundamental network structure, we could study the compression errors, or perhaps even look for other compressions altogether (e.g., in terms of two currencies rather than just one) – which may lead to better economic models. While I’m sure similar math is used in finance, something about this ontological shift seems useful and novel to me.
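A rough sketch of this (the setup and numbers are mine, for illustration): given an N×N matrix R of exchange rates, exact compressibility means R[i,j] = p_i / p_j, i.e. log R[i,j] = q_i − q_j; fitting the q by least squares and inspecting the residual quantifies how far the market is from compressible – the size of the arbitrage:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
q_true = rng.normal(size=N)                    # hypothetical "true" log-prices
logR = q_true[:, None] - q_true[None, :]       # equilibrium exchange rates
logR += 0.02 * rng.normal(size=(N, N))         # arbitrage "noise"
logR = (logR - logR.T) / 2                     # enforce R[i,j] = 1/R[j,i]

# Least-squares fit of N node log-prices q to the N^2 edge records
A = np.zeros((N * N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            A[i * N + j, i] = 1.0
            A[i * N + j, j] = -1.0
q_fit, *_ = np.linalg.lstsq(A, logR.flatten(), rcond=None)

residual = logR - (q_fit[:, None] - q_fit[None, :])
print("max loop inconsistency (log units):", np.abs(residual).max())
```

A zero residual is exactly the "no arbitrage around any loop" condition; a nonzero residual measures the information lost by compressing N^2 edge records down to N node prices.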
To further relate to QM, we must first generalize from a network of agents that can observe each other, to a network of N particles, with observations being replaced by interactions. This way our ontology is shifted from properties of particles, to outcomes of interactions – including, but not limited to, the interaction of a human observer with a quantum system [this attitude is formalized in Relational QM – Rovelli94, Rovelli21, wiki]. While Relational QM (RQM) is considered a different interpretation of QM from QD, I find the two to be taking different perspectives on the same idea – and really helping to clarify one another. While RQM focuses on shifting fundamental ontology from nodes (particle properties) onto edges (interaction records), QD then specifies the condition these edges (records) must satisfy to give rise to objective reality (i.e., to recover particle properties). This condition, which in QD is framed as proliferation of consistent measurement records, is like the network equilibrium property described above – i.e., going around any loop in our network of interactions produces consistent records of the system state, no "arbitrage" is possible, and the network can be compressed down to N "objective" particle properties.
Ultimately, the question is whether such generalization of the scientific method is actually useful. It seems nice that it could clarify some confusions around ontological realism in QM – but we wouldn't expect it to make any falsifiable predictions that go beyond standard QM. Game theory might give a better approach to quantify how useful this generalization is - in analogy to how the Dutch book argument is used to justify Bayesian probabilities. I.e., we could construct some game where an agent that leverages relational ontology (measurement records on edges), or partial compressions, would beat one that only considers objective node properties. In some sense, the barter market example above suggests that this should be possible, and should be related to arbitrage methods.
To conclude, it seems to me that bringing these three topics together (QD, social constructs, scientific method) in one mathematical framework would lead to a beautiful theory that could clarify some issues around Quantum Foundations, give a qualitatively new way to understand society and the social realities we live by, and possibly even allow us to generalize the conventional way we think of "objective truth" via the scientific method. With the breadth of possible scope here, I've had a hard time finding the best concrete place to start developing this – and so would be excited for any suggestions, ideas, or if you want to collaborate on something within this scope. Leave a comment or reach out to me via DM or email: https://www.pchvykov.com/contact
12 comments
Comments sorted by top scores.
comment by Edith · 2024-02-07T08:46:53.547Z
So for the cat, a superposition of dead and alive will never be "objective" since it is not stable under interactions with photons – and so cannot be copied many times.
Ask yourself what is being copied many times. The very fact that the quantum cat is in a superposition of only alive and dead tells you something else that is only apparently "consensus objective"; everyone agrees that the only possible definite states associated with the system are that the quantum cat is alive, or dead, and nothing else. This just kicks the definition of "objective" back to the specification of the state vector. There is something objective about the fact that the cat only ever has the possibility of being recorded as alive or dead and nothing else. That list (alive, dead) is revealed by interactions/measurements/recording, but the fact that there is never anything else on the list despite all possible ways of interacting with the system remains. In fact, including the superposition of dead and alive leads to actual physical consequences (see Elitzur-Vaidman bomb, for example), and Zeilinger won a Nobel Prize in part for physical instantiation of this "interaction-free" measurement.
This seems to awoke the scientific method, where reproducibility of an experimental result is the core criterion for “objective truth”
But it isn't. Firstly we are working with two different and incompatible senses of reproducibility. Reproducibility of the naive and classical kind, which is when Anne does an experiment and John does the same experiment but at a different time and a different place, and then they both record the same result, is nothing but evidence that the experiment is a) so big that quantum fluctuations are washed out and b) invariant under space and time translations. Physicists and chemists have long since dispensed with this naive criterion as constituting "objective truth". The whole quantum mechanical picture is that the experiment is only ever reproducible in the aggregate, a completely different kind of reproducible, that's to say that Anne and John will only agree on two things after conducting the experiment infinitely many times, and those things are 1) the types of possible outcomes of the experiment (i.e the list of definite states) 2) the probabilities with which these outcomes occur and both 1) and 2) take as a given that the experiment is invariant under space and time translations over a large number of iterations. This is why everybody is satisfied when the large hadron collider reports the existence of the Higgs particle - nobody cares that there is only one large hadron collider, because everyone assumes in QFT that the experimental setup is unaffected by spacetime translations anyway. Instead they only bother "reproducing" the results by independent experiments, i.e in principle different ways of measuring the same effect, like ATLAS or CMS which are literally detectors just tacked onto the LHC apparatus. In other words, working physicists (at least) have long moved on from the naive notion of reproducibility and are working with something much more constrained.
↑ comment by pchvykov · 2024-02-07T15:03:35.717Z
Thanks for your comments! I'm having a bit of trouble clearly seeing your core points here - so forgive me if I misinterpret, or address something that wasn't core to your argument.
To the first part, I feel like we need to clearly separate QM itself (Copenhagen), different Quantum Foundation theories, and Quantum Darwinism specifically. What I was saying is specifically about how Quantum Darwinism views things (in my understanding) - and since interpretations of QM are trying to be more fundamental than QM itself (since QM should be derived from them), we can't use QM arguments here. So QD says that (alive, dead) is the complete list because of consensus (i.e., in this view, there isn't anything more fundamental than consensus).
I don't think I agree with (or don't understand what you mean by) "including the superposition of dead and alive leads to actual physical consequences" - bomb-testing result is consequence of standard QM, so it doesn't prove anything "new."
To the second part, I implicitly meant that reproducibility could mean either deterministic (reproducibility of a specific outcome), or statistical (reproducibility of a probability of an outcome over many realizations) - I don't really see those two as fundamentally different. In either case, we think of objective truth (whether probabilistic or deterministic) as something derived from reproducibility - so, for example, excluding Knightian uncertainty.
↑ comment by Edith · 2024-02-08T06:59:14.103Z
What I was saying is specifically about how Quantum Darwinism views things (in my understanding) - and since interpretations of QM are trying to be more fundamental than QM itself (since QM should be derived from them), we can't use QM arguments here.
With this I just wanted to point out that I was not making any argument that relies on a particular interpretation of QM to work up to interaction-free measurements. I wanted to make it clear that I was not arguing anything about a collapse mechanism/what happens under the hood - it's just empirically correct that the result of any measurement is a definite state. You don't need any theory, it's just brute empirics, all territory. [Tangentially, but still true, there is no distinction even theoretically/"map side" in how QD and Copenhagen QM treat definite states - all the differences come before this in the postulation of pointer states, collapse mechanism, etc, but QD still completely agrees with the canonical notion of a definite state.]
I don't think I agree with (or don't understand what you mean by) "including the superposition of dead and alive leads to actual physical consequences" - bomb-testing result is consequence of standard QM, so it doesn't prove anything "new."
All I really wanted to do was to point out an example of "interaction free" measurement, which throws a brick into the quantum darwinism approach. There can never be an "objective consensus" about what happens in the bomb cavity, because any sea of photons/electrons/whatever present in the cavity will trip the bomb. The point of mentioning the Zeilinger experiment was to say that this is an empirical result, so QD has to be able to explain it, and it can't. The only way to get Elitzur-Vaidman from QD is to postulate two different splits of system and environment during the experiment - this is a concrete version of the criticism laid out in a paper by Ruth Kastner. It is a plain physical fact that you can have interaction-free measurement, and QD struggles with explaining this since it has to perform an ontological split mid experiment; if the ontological split is arbitrary, why do you need to perform a specific special split to reproduce the results of the experiment? If it isn't arbitrary, then you have to do some hard work to explain why changing your definition of system and environment for different experiments (and sometimes mid experiment) is justified.
I implicitly meant that reproducibility could mean wither deterministic (reproducibility of a specific outcome), or statistical (reproducibility of a probability of an outcome over many realizations) - I don't really see those two as fundamentally different.
It's hard for me to see why you think they are not fundamentally different definitions of reproducibility. On an iteration by iteration basis, they clearly differ significantly; in the first case (reproducibility of a specific outcome), the ball must fall in the same way every time for it to count as evidence towards this kind of reproducibility, and a single instance of it not falling in the same way breaks it. The second case (reproducibility of a probability of an outcome over many realizations) immediately destroys the first kind of reproducibility. Is it not the difference between having intrinsic probability in your definition of reproducibility and not having it? Maybe I am missing something blunt here.
↑ comment by pchvykov · 2024-02-08T16:07:40.973Z
There can never be an "objective consensus" about what happens in the bomb cavity,
Ah, nice catch - I see your point now, quite interesting. Now I'm curious whether this bomb-testing setup makes trouble for other quantum foundation frameworks too...? As for QD, I think we could make it work - here is a first attempt, let me know what you think (honestly, I'm just using decoherence here, nothing else):
If the bomb is "live", then the two paths will quickly entangle many degrees of freedom of the environment, and so you can't get reproducible records that involve interference between the two branches. If the bomb is a "dud", then the two paths remain contained to the system, and can interfere before making copies of the measurement outcomes.
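For concreteness, the textbook Mach–Zehnder bookkeeping for the bomb test (standard QM, no interpretation assumed; beamsplitter phase conventions vary, but the probabilities are the usual ones): with a dud, the two arms recombine and interfere, so

$$|\gamma\rangle \;\xrightarrow{\text{BS}_1}\; \tfrac{1}{\sqrt{2}}\big(|a\rangle + i|b\rangle\big) \;\xrightarrow{\text{BS}_2}\; |D_1\rangle, \qquad P(D_2) = 0,$$

while with a live bomb the which-path "record" removes the interference:

$$P(\text{explode}) = \tfrac{1}{2}, \qquad P(D_1) = P(D_2) = \tfrac{1}{4},$$

so a $D_2$ click certifies a live bomb that the photon never touched. In the decoherence language above, the live bomb entangles the blocked arm with the environment, killing the interference that would otherwise keep $D_2$ dark.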
Honestly, I have a bit of trouble arguing about quantum foundation approaches since they all boil down to the same empirical prediction (sort of by definition), and most are inherently not falsifiable - so it ultimately feels like a personal preference of what argumentation you find convincing.
Is it not the difference between having intrinsic probability in your definition of reproducibility and not having it?
I just meant that good-old scientific method is what we used to prove classical mechanics, statistical mechanics, and QM. In either case, it's a matter of anyone repeating the experiment getting the same outcome - whether this outcome is "ball rolls down" or "ball rolls down 20% of the time". I'm trying to see if we can say something in cases where no outcome is quite reproducible - probabilistic outcome or otherwise. Knightian uncertainty is one way this could happen. Another is cases where we may be able to say something more than "I don't know, so it's 50-50", but where that's the only truly reproducible statement.
comment by Mitchell_Porter · 2024-02-07T17:44:55.533Z
From the time it came out, Zurek's "Quantum Darwinism" always seemed like just a big muddle to me. Maybe he's trying to obtain a conclusion that doesn't actually follow from his premises; maybe he wants to call something an example of darwinism, when it's actually not.
I don't have time to analyze it properly, but let me point out what I see. In a world where wavefunctions don't collapse, you end up with enormous superpositions with intricate internal structure of entanglement and decoherence - OK. You can decompose it into a superposition of tensor products, in which states of subsystems may or may not be correlated in a way akin to measurement, and over time such tensor products can build up multiple internal correlations which allows them to be organized in a branching structure of histories - OK.
Then there's something about symmetries, and then we get to stuff about how something is real if there are lots of copies of it, but it's naive to say that anything actually exists... Obviously the argument went off the rails somewhere before this. In the part about symmetry, he might be trying to obtain the Born rule; but then this stuff about "reality comes from consensus but it's not really real", that's probably descended from his personal interpretation of the Copenhagen interpretation's ontology, and needs philosophical decoding and critique.
edit: As for your own agenda - it seems to me that, encouraged by the existence of the "quantum cognition" school of psychology, you're hoping that e.g. a "social quantum darwinism" could apply to social psychology. But one of the hazards of this school of thought, is to build your model on a wrong definition of quantumness. I always think of Diederik Aerts in this connection.
What is true, is that individual and social cognition can be modelled by algebras of high-dimensional arrays that include operations of superposition (i.e. addition of arrays) and tensor product; and this is also true of quantum physical systems. I think it is much healthier to pursue these analogies, in the slightly broader framework of "analogies between the application of higher-dimensional linear algebra in physics and in psychology". (The Harmonic Mind by Paul Smolensky might be an example of this.) That way you aren't committing yourself to very dubious claims about quantum mechanics per se, while still leaving open the possibility of deeper ontological crossovers.
↑ comment by pchvykov · 2024-02-08T16:24:31.258Z
Great points - I'm more-or-less on-board with everything you say. Ontology in QM I think is quite inherently murky - so I try to avoid talking about "what's really real" (although personally I find the Relational QM perspective on this to be most clear - and with some handwaving I could carry it over to QD I think).
Social quantum darwinism - yeah, sounds about right. And yeah, the word "quantum" is a bit ambiguous here - it's a bit of a political choice whether to use it or avoid it. Although besides superpositions and tensor products, quantum cognition also includes collapse - and isn't that taking quite a few (yes, not all!) ingredients from the quantum playbook, enough to warrant the name?
↑ comment by Mitchell_Porter · 2024-02-10T03:40:21.785Z
Collapse can also be implemented in linear algebra, e.g. as projection onto a randomly selected, normalized eigenvector... Anyway, I will say this, you seem to have an original idea here. So I'm going to hang back for a while and see how it evolves.
comment by StartAtTheEnd · 2024-02-07T11:16:28.977Z
Fascinating! I had a few thoughts reading your post. Hopefully they're worth the time it takes to read them.
I find money to be the most fascinating and pertinent social construct in our daily lives.
I think there's many more examples of this. Money is a resource to be exploited, but so are trust, confidence, fame, and so on. Don't you think that one's reputation looks a bit like a stock market? And that we invest in other people in the same way as we invest in stocks?
I don't know to what extent these are social constructs, but they're similar in interesting ways.
We thus create the relevant laws, and then spend considerable resources to enforce them
Agreed, but I'd like to point out that the more effort you have to put into enforcing a law, the more unnatural said law is.
I think the reason for human freedom is a sort of equivalence relation. If one group of people claims that water is healthy, and another group claims that god has punished humanity with the constant need of water, then both groups will drink water in the end, which means that both groups survive.
For an outside observer, the two groups of people are equivalent in that their behaviour is the same. It's like comparing (2+4) and (3+3), they're different and yet the same. My math education is lacking, so I don't have a fitting word here.
But while we can't ignore the laws of physics, we can diverge away from reality, living in delusion. But I think there's a cost here, proportional to the gap between objective reality and subjective reality. The worse the coherence between your real self and your persona, the more effort it takes to "force" yourself to align with the latter.
I noticed that I haven't made a claim for how subjective and objective reality follow different laws (likely because I don't like disillusioning myself), but I have noticed that reality doesn't allow for contradictions, whereas the brain does. Self-contradicting beliefs are even common in society; they allow people to have their cake and eat it too.
The human brain can be 60% happy and 60% sad at the same time. The result is not evaluated to 0% happy and 0% sad. The differences seem to overlap without mutual destruction. As if the brain experiences a linear combination of conflicting things.
Finally, your post is just as much about human nature as it's about objective things. I don't think we can create a perfect, logical theory, since imperfect, illogical human beings are part of the very equation. A human being is trying to encode "formal system as it relates to human beings" in a formal system, in a way which can be communicated to human beings. It feels sort of like a set trying to put itself inside itself?
↑ comment by pchvykov · 2024-02-07T15:34:37.286Z
Thanks for sharing your thoughts - cool ideas!
Yes, I've actually thought that human interactions may be well modeled as a stock-market... never actually looked into whether this has been done though. And yes, maybe such a model could be framed using this network-type setup I described... could be interesting - what if different cliques have different 'stock' valuations?
"...the more unnatural said law is." - the word 'natural' is a bit of a can of worms... I guess your statement could be viewed as an interesting definition of 'natural'? E.g., in nonequilibrium stat mech you can quantify a lower-bound on energy expenditure to keep something away from the equilibrium distribution. E.g., I've thought of applying this to quantify minimum welfare spending needed to keep social inequality below some value. But here maybe you're thinking more general? I just think 'natural' or 'real self' are really slippery notions to define. E.g., is all life inherently unnatural since it requires energy expenditure to exist?
"As if the brain experiences a linear combination of conflicting things." - that's precisely the sort of observations that Quantum Cognition models using quantum-like state-vectors. And precisely the sort of thing this framework I'm describing could help to explain perhaps.
"It feels sort of like a set trying to put itself inside itself?" - nice one! And there was a time when ancient Greek philosophers conclusively 'proved' to themselves the impossibility of ever fully understanding what matter is made of, and figured it's better to spend time on moral philosophy. Now, the former is basically solved, and the latter is still very much open. So I don't buy into no-go theorems much...
↑ comment by StartAtTheEnd · 2024-02-09T09:54:47.633Z
You can model it in your head. We like to befriend people with high social status, and we stay away from losers. This is because the former make for better investment. Success is a feedback-loop. But at times we get overconfident, in ourselves or others, leading to a bubble followed by a crash once evidence appears that you overextended. I like to think of mania and depression as the same. They're like the bulls and bears of stock trading.
Perhaps this can generalize to beliefs as well. And our tendency to double down on bad beliefs, rather than changing our minds, would be the brain's version of the sunk cost fallacy.
By "unnatural" I mean "against human nature". Taoists say to "let go" and to stop "fighting against the flow". Similar advice would be "pick your battles" and "don't worry about what you can't change". The main principle here is that whatever comes easy is favored by reality. So if a government needs a police state to enforce a said system, it's "forced", and thus a bad system. This ties into darwinism, I think, adaptivity (flexibility) is key. Inflexibility is trying to hold onto something which doesn't work.
People who are more true to themselves spend less energy socializing with others. Compare this to the socially anxious, and to those attempting to be perfect rather than themselves. Fighting reality takes energy.
However, just letting everything go as it must doesn't seem like a good idea. For instance, we tend towards inequality. Some people think that we should let things go bad - that it's merely the end of a cycle. This may be true. We let trees die so that the next generation of trees can grow. If we try to slow down the aging of plants, we merely prolong the undesirable state. It's possible that the same applies to society, that we should let it collapse so that we can start anew. Death requires less energy than repair. If you're working with legacy code in programming, you should likely scrap it and start over with a modern programming language. Systems increase in entropy, and then they die. In humans it's DNA damage, in governments it's corruption. Can anything live forever? I don't think so; at best you can renew parts of something so that the whole may live longer. A country might select a new king, and a person might get an organ transplant.
Nietzsche valued suffering, as it makes people stronger. Capitalism values competition, as it keeps systems healthy and strong. The human body likes exercise and cold showers, for the same reasons. Therapy is also about killing bad aspects of ourselves so that the rest of us can live on.
Is all this too abstract? "Every system tends towards chaos, and the only way to stop a system from dying is to let its parts die so that the speed of renewal surpasses the speed of decay". The ship of Theseus becomes a new ship, but you didn't experience the death of a ship, only many deaths of its parts. This is the only form of immortality which is possible.
Edit: The connections fit, but this view is incomplete. Fitness isn't the same as low entropy, for instance, it's closer to adaptation as opposed to maladaptation. But the ability to adapt might go down as entropy increases.
You might very well have a good idea here, but I don't know much about quantum mechanics. Don't take my lack of compliments as arguments against your observations, I'm not qualified to speak about quantum mechanics. I'm out of my league but a little too fond of sharing my own thoughts, I suppose.
So I don't buy into no-go theorems
I think that's the correct mindset to have! I just think that it's similar to trying to formalize beauty in itself. This wouldn't work, as beauty is a relation between objects and humans. I think the human viewpoints have to be part of the equation. I'm not saying that there's no objective reality, but I think we need to apply a theory of relativity, so that we're talking about relations rather than things-in-themselves. My intuition tells me that things only exist in relation to other things. Asking what something "really" is, as an isolated item, feels mistaken to me. A relative position asks for the absolute state of something, but this depends on itself? It's hard to put into words.
↑ comment by pchvykov · 2024-02-13T12:51:54.670Z
Thanks for expanding on this stuff - really nice discussion!
Yeah that stock-market analogy is quite tantalizing - and I like the breadth that it could apply to.
For your discussion on "unnatural" - sure, I agree with the sentiment - but it's the question of how to formalize this all so that it can produce a testable, falsifiable theory that I'm unclear on. Poetically it's all great - and I enjoy reading philosophical treatises on this - but they always leave me wanting, as I don't get something to hold onto at the end, something I can directly and tangibly apply to decision-making.
For your last paragraph, yeah that emphasis on "relational" perspective of reality is what I'm trying to build up and formalize in this post. And yes, it's a bit hypocritical to say that "ultimately reality is relational" ;P
↑ comment by StartAtTheEnd · 2024-02-13T18:33:10.150Z
Thank you!
I do realize that philosophizing like this is much easier than shutting up and calculating.
There's mathematical laws and principles hidden in my reply, like the path of least resistance, which I consider natural in some sense. It's hard to formalize this in a way which relates to your main goal.
My intuition, like AI weights, is black-box.
I can try, though: Your N^2 system stores the information of every perspective. There's no one value of "kind", kind is a relation and not an atom. You can... Factor out? "kind", so that you're storing an objective definition of "kind" next to the graph. Now you have N values for "kind".
The reason people will agree on hair color is because this evaluation doesn't depend on the individual. Well, it does, but it's a shared property of the whole graph (unless perhaps one is colorblind), so it's essentially already factored out. "Compression" here is essentially the same as rounding. Colors are areas on a 1 or 2D spectrum, but we clamp them to the nearest point. This is like mapping R to N, e.g. 4.7 -> 5. If the number of unique answers (after the compression) is less than the number of nodes, then you need fewer than N^2 records to store it. But this is probably trivial to you?
If we view it as interactions, we can consider the case that two people may never meet. In any case, I think this makes the order of interactions important. People match each other, and calibrate themselves towards those they interact with, creating local realities. An idea which might be related here is the algorithms that social networks use to sync likes between distributed servers (and I don't think the order matters here?). They seem to have solved a similar problem (though "number of likes" isn't subjective). These aren't quantum-interactions, but I don't know how important this difference is. By the way, agents transfer information in a memetic manner, and if you focus on the agents rather than the meme, you may miss a part of the picture. Since social constructs are created rather than inherent in the universe, they might not exist in some nodes. And in real life, a node may interact with itself. But I'll stop here as it's very unlikely that I know more graph theory than you.
Finally, if my sentences about graphs are "100" on a difficulty scale, then a formalization of my previous comment would be a million. It's like comparing a college text-book to an unsolved math problem. Take what's useful to you and discard the rest.