“If I’m thinking about how to pick up tofu with a fork, I might analogize to how I might pick up feta with a fork, and so if tofu is yummy then I’ll get a yummy vibe and I’ll wind up feeling that feta is yummy too.”
Isn't the more analogous argument "If I'm thinking about how to pick up tofu with a fork, and it feels good when I imagine doing that, then when I analogize to picking up feta with a fork, it would also feel good when I imagine that"? This does seem valid to me, and also seems more analogous to the argument you'd compared the counter-to-common-sense second argument with:
“If I’m thinking about what someone else might do and feel in situation X by analogy to what I might do and feel in situation X, and then if situation X is unpleasant, then that simulation will be unpleasant, and I’ll get a generally unpleasant feeling by doing that.”
I think I assign close to zero probability to the first hypothesis. Brains are not that fast at thinking, and while sometimes your system 1 can make snap judgements, brains don't reevaluate huge piles of evidence in milliseconds. These kinds of things take time, and that means if you are dying, you will die before you get to finish your life review.
My guess is that our main crux lies somewhere around here. If I'd thought the life review experience involved tons and tons of "thinking", or otherwise some form of active cognitive processing, I would also give ~zero probability to the first hypothesis.
However, my understanding of the life review experience is that it's the phenomenological correlate of stopping a bunch of the active cognitive processing we employ to dissociate. In order to "unsee" something (i.e., dissociate from it), you still have to see it enough to recognize that it's something you're supposed to unsee, and then perform the actual work of "unseeing". What I'm proposing is that all the work that goes into "unseeing" halts during a life review, and all the stuff that would originally have gotten seen-enough-to-get-unseen now just gets seen directly, experienced in a decentralized and massively parallel fashion.
This is related to the hypothesis I'd mentioned in the original post about attention being a filter, rather than a spotlight. On this view, we filter stuff out to help us survive, but this filtration process actually takes more energy than directly experiencing everything unfiltered. This would counterintuitively imply that having high precision on (/ paying attention to) a bajillion things at once might actually require less cognitive effort than our default moment-to-moment experience, and the reason this doesn't happen by default is because we can't navigate reality well enough to survive in this lower-cognitive-effort state.
I think we'd still need an explanation for how the memo to stop dissociating could propagate throughout one's whole belief network so quickly. But I can pretty easily imagine non-mysterious explanations for this, e.g. something analogous to a mother's belief network near-instantaneously getting the memo to put ~100% of her psychological and physiological effort into lifting a car off of her child. The Experience of Dying From Falls by Noyes and Kletti (sci-hub link here) describes somewhat similar experiences occurring during falls (top of page 4).
I should also mention that on my current models, just because someone experiences a dump of all their undissociated experiences doesn't mean that they'll remember any of it, or that any more than a tiny minority of these undissociated experiences will have a meaningful impact on how they'll live their lives afterward. I think it can be a lot like having a "life-changing peak experience" at a workshop and then life continuing as usual upon return.
I'm curious about your models of why people might experience these kinds of states.
One crucial aspect of my model is that these kinds of states get experienced when the psychological defense mechanisms that keep us dissociated get disarmed. If Alice and Bob are married, and Bob is having an affair with Carol, it's very common for Alice to filter out all the evidence that Bob is having an affair. When Alice finally confronts the reality of Bob's affair, the psychological motive for filtering out the evidence that Bob is having an affair gets rendered obsolete, and Alice comes to recognize the signs she'd been ignoring all along.
I'm pretty much proposing an analogous mechanism, except with the full confrontation of our mortality instead, and the recognition of what we'd been filtering out happening in a split second. (It's different but related with ayahuasca, which is famous for its ability to penetrate psychological defense mechanisms, whether we like it or not.)
@habryka, responding to your agreement with this claim:
a majority of the anecdata about reviewing the details of one's life from a broader vantage point are just culturally-mediated hallucinations, like alien abductions.
I think my real crux is that I've had experiences adjacent to near-death experiences on ayahuasca, during which I've directly experienced some aspects of phenomena reported in life reviews (like re-experiencing memories in relatively high-res from a place where my usual psychological defenses weren't around to help me dissociate, especially around empathizing with others' experiences), which significantly increases my credence on there being something going on in these life review accounts beyond just culturally-mediated hallucinations.
Abduction by literal physical aliens is obviously a culturally-mediated hallucination, but I suspect the general experience of "alien abductions" is an instantiation of an unexplained psychological phenomenon that's been reported throughout the ages. I don't feel comfortable dismissing this general psychological phenomenon as 100% chaff, given that I've had comparably strange experiences of "receiving teachings from a plant spirit" that nevertheless seem explainable within the scientific worldview.
In general, I think we have significantly different priors about the extent to which the things you dismiss as confabulations actually contain veridical content of a type signature that's still relatively foreign to the mainstream scientific worldview, in addition to chaff that's appropriate to dismiss. I'll concede that you're much better than I am at rejecting false positives, but I think my "epistemic risk-neutrality" makes me much better than you are at rejecting false negatives. 🙂
Thanks a lot for sharing your thoughts! A couple of thoughts in response:
I suspect that the principles you describe around the "experience of tanha" go well beyond human or even mammalian psychology.
That's how I see it too. Buddhism says tanha is experienced by all non-enlightened beings, which probably includes some unicellular organisms. If I recall correctly, some active inference folk I've brainstormed with consider tanha a component of any self-evidencing process with counterfactual depth.
Forgiveness (non-judgment?) may then need a clear definition: are you talking about a person's ability not to seek "tanha-originating revenge," while still being able to act out of caring self-protection?
Yes, this pretty much aligns exactly with how I think about forgiveness!
I really like the directions that both of you are thinking in.
But I think the "We suffered and we forgive, why can't you?" is not the way to present the idea.
I agree. I think of it more as like "We suffered and we forgave and found inner peace in doing so, and you can too, as unthinkable as that may seem to you".
I think the turbo-charged version is "We suffered and we forgave, and we were ultimately grateful for the opportunity to do so, because it just so deeply nourishes our souls to know that we can inspire hope and inner peace in others going through what we had to go through." I think Jesus alludes to this in the Sermon on the Mount:
11 “Blessed are you when people insult you, persecute you and falsely say all kinds of evil against you because of me. 12 Rejoice and be glad, because great is your reward in heaven, for in the same way they persecuted the prophets who were before you.
Here's something possibly relevant I wrote in a draft of this post that I ended up cutting out, because people seemed to keep getting confused about what I was trying to say. I'm including this in the hopes that it will clarify rather than further confuse, but I will warn in advance that the latter may happen instead...
The Goodness of Reality hypothesis is closely related to the Buddhist claim of non-self, which says that any fixed and unchanging sense of self we identify with is illusory; I partially interpret “illusory” to mean “causally downstream of a trapped prior”. One corollary of non-self is that it’s erroneous for us to model ourselves as a discrete entity with fixed and unchanging terminal values, because this entity would be a fixed and unchanging self. This means that anyone employing reasoning of the form “well, it makes sense for me to feel tanha toward X, because my terminal values imply that X is bad!” is basing their reasoning on the faulty premise that they actually have terminal values in the first place, as opposed to active blind spots masquerading as terminal values.
Your section on "tanha" sounds roughly like projecting value into the world, and then mentally latching on to an attractive high-value fabricated option.
I would say that the core issue has more to do with the mental latching (or at least a particular flavor of it, which is what I'm claiming tanha refers to) than with projecting value into the world. I'm basically saying that any endorsed mental latching is downstream of an active blind spot, regardless of whether it's making the error of projecting value into the world.
I think this probably brings us back to:
A big problem with this post is that I don't have a clear idea of "tanha" is/isn't, so can't really tell how broad various claims are.
A couple of additional pointers that might be helpful:
- I think of tanha as corresponding to the phenomenology of resisting an update because of a trapped prior.
- I think tanha is present whenever we get triggered, under the standard usage of the word (like in "trigger warning"), and I think of milder forms of tanha as being kind of like micro-triggers.
- Whenever we're suffering, and there's a sense of rush and urgency coming from lower subsystems that override higher cognition in service of trying to make the suffering go away, there's tanha involved.
(Note that I don't consider myself an expert on Buddhism, so take these pointers with a grain of salt.)
I think it might be helpful if you elaborated on specific confusions you have around the concept of tanha.
I'm open to the hypothesis that the life review is basically not a real empirical phenomenon, although I don't currently find that very plausible. I do think it's probably true that a lot of the detailed characteristics ascribed to life reviews are not nearly as universal as some near-death experience researchers claim they are, but it seems pretty implausible to me that a majority of the anecdata about reviewing the details of one's life from a broader vantage point are just culturally-mediated hallucinations, like alien abductions. (That's what I'm understanding you to be claiming, please let me know if I'm wrong.)
For example, the anecdatapoint in the link I'd shared about the guy who had a near-death experience during a fall included a life review. ("Then I saw my whole past life take place in many images, as though on a stage at some distance from me. I saw myself as the chief character in the performance. Everything was transfigured as though by a heavenly light and everything was beautiful without grief, without anxiety, and without pain. The memory of very tragic experiences I had had was clear but not saddening.") This fall happened in 1871, before life reviews were prevalent in popular culture, and I'm curious how you interpret anecdatapoints like these.
Regarding your second point, I'm leaving this comment as a placeholder to indicate my intention to give a proper response at some point. My views here have some subtlety that I want to make sure I unpack correctly, and it's getting late here!
In response to your third point, I want to echo ABlue's comment about the compatibility of the trapped prior view and the evopsych view. I also want to emphasize that my usage of "trapped prior" includes genetically pre-specified priors, like a fear of snakes, which I think can be overridden.
In any case, I don't see why priors that predispose us to e.g. adultery couldn't be similarly overridden. I wonder if our main source of disagreement has to do with the feasibility of overriding "hard-wired" evolutionary priors?
In response to your first point, I think of moral codes as being contextual more than I think of them as being subjective, but I do think of them as fundamentally being about pragmatism ("let's all agree to coordinate in ABC way to solve PQR problem in XYZ environment, and socially punish people who aren't willing to do so"). I also think religions often make the mistake of generalizing moral codes beyond the contexts in which they arose as helpful adaptations.
I think of decision theory as being the basis for morality -- see e.g. Critch's take here and Richard Ngo's take here. I evaluate how ethical people are based on how good they are at paying causal costs for larger acausal gains.
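As a toy illustration of that evaluation criterion (my own example using a twin prisoner's dilemma, not Critch's or Ngo's formalism): cooperating pays a causal cost but captures a larger acausal gain.

```python
# Toy twin prisoner's dilemma. Payoffs to the row player:
# (C,C)=3, (C,D)=0, (D,C)=4, (D,D)=1.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

# A purely causal reasoner holds the twin's move fixed; defection
# dominates no matter what the twin does:
causal_best = max("CD", key=lambda a: min(payoff[(a, b)] for b in "CD"))

# An acausal reasoner treats the twin's move as logically correlated
# with its own, so the live options are (C,C) and (D,D):
acausal_best = max("CD", key=lambda a: payoff[(a, a)])

print(causal_best, acausal_best)  # D C -- cooperation "pays" acausally
```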
I do draw a distinction between value and ethics. Although my current best guess is that decision theory does in some sense reduce ethics to a subset of value, I do think it's a subset worth distinguishing. For example, I still have a concept of evaluating how ethical someone is, based on how good they are at paying causal costs for larger acausal gains.
I think the Goodness of Reality principle is maybe a bit confusingly named, because it's not really a claim about the existence of some objective notion of Good that applies to reality per se, and is instead a claim about how our opinions about reality are fundamentally distorted by false conceptions of who we are. I think metaethics crucially relies on us not being confused about who we are, which is how I see the two relating.
Thanks a lot for sharing your experience! I would be very curious for you to further elaborate on this part:
Eventually this led to some moments of insight when I realized just how trapped by my own ontology I had become, and then found a way through to a new way of seeing the world. These happened almost instantly, like a dam breaking and releasing all the realizations that had been held back.
But in order for that to be plausible, you would need a reason why the almost-truths they found are so goddamn antimemetic that the most studied and followed people in history weren't able to make them stick.
A few thoughts:
- I think many of the truths do stick (like "it's never too late to repent for your misdeeds"), but end up getting wrapped up in a bunch of garbage.
- The geeks, mops, and sociopaths model feels very relevant, with the great spiritual leaders / people who were serious about doing inner work being the geeks.
- In some sense, these truths are fundamentally about beating Moloch, and so long as Moloch is in power, Moloch will naturally find ways to subvert them.
They're so slippery that until you've gotten past one yourself it's hard to believe they exist (especially when the phenomenal experience of knowing-something-that-was-once-utterly-unknowable can also seemingly be explained by developing a delusion).
YES. I think this is extraordinarily well-articulated.
There are important insights and claims from religious sources that seem to capture psychological and social truths that aren't yet fully captured by science. At least some of these phenomena might be formalizable via a better understanding of how the brain and the mind work, and to that end predictive processing (and other theories of that sort) could be useful to explain the phenomena in question.
Yes, I agree with this claim.
You spoke of wanting formalization but I wonder if the main thing is really the creation of a science, though of course math is a very useful tool to do science with and to create a more complete understanding. At the end of the day we want our formalizations to comport to reality - whatever aspects of reality we are interested in understanding.
That feels resonant. I think the kind of science I'm hoping for is currently bottlenecked by us not yet having the right formalisms, kind of like how Newtonian physics was bottlenecked by not having the formalism of calculus. (I would certainly want to build things using these formalisms, like an ungameable steel-Arbital.)
I'm not sure what you mean by that, but the claim "many interpretations of religious mystical traditions converge because they exploit the same human cognitive flaws" seems plausible to me. I mostly don't find such interpretations interesting, and don't think I'm interpreting religious mystical traditions in such a way.
If I change "i.e. the pluralist focus Alex mentions" to "e.g. the pluralist focus Alex mentions" does that work? I shouldn't have implied that all people who believe in heuristics recommended by many religions are pluralists (in your sense). But it does seem reasonable to say that pluralists (in your sense) believe in heuristics recommended by many religions, unless I'm misunderstanding you. (In the examples you listed these would be heuristics like "seek spiritual truth", "believe in (some version of) God", "learn from great healers", etc.)
If your main point is "don't follow religious heuristics blindly, only follow them if you actually understand why they're good" I'm totally with you. I think I got thrown off a bit because, AFAIU, the way people tend to come to adopt pluralist views is by doing exactly that, and thereby coming to conclusions that go against mainstream religious interpretations. (I am super impressed that the Pope himself seems to have been going in this direction. The Catholic monks at the monastery I visited generally wished the Pope were a lot more conservative.)
I personally don't have a great way of distinguishing between "trying to reach these people" and "trying to manipulate these people".
I use heuristics similar to those for communicating to young children.
In general I don't even think most people trying to do such outreach genuinely know whether their actual motivations are more about outreach or about manipulation. (E.g. I expect that most people who advocate for luxury beliefs sincerely believe that they're trying to help worse-off people understand the truth.) Because of this I'm skeptical of elite projects that have outreach as a major motivation, except when it comes to very clearly scientifically-grounded stuff.
This is why I mostly want religious pluralist leaders who already have an established track record of trustworthiness in their religious communities to be in charge of getting the message across to the people of their religion.
So my overall position here is something like: we should use religions as a source of possible deep insights about human psychology and culture, to a greater extent than LessWrong historically has (and I'm grateful to Alex for highlighting this, especially given the social cost of doing so).
Thanks a lot for the kind words!
IMO this all remains true even if we focus on the heuristics recommended by many religions, i.e. the pluralistic focus Alex mentions.
I think we're interpreting "pluralism" differently. Here are some central illustrations of what I consider to be the pluralist perspective:
- the Catholic priest I met at the Parliament of World Religions who encouraged someone who had really bad experiences with Christianity to find spiritual truth in Hinduism
- the passage in the Quran that says the true believers of Judaism and Christianity will also be saved
- the Vatican calling the Buddha and Jesus great healers
I don't think "lots of religions recommend X" means the pluralist perspective thinks X is good. If anything, the pluralist perspective is actually pretty uncommon / unusual among religions, especially these days.
Because if you understand the insights that Christianity is built upon, you can use those to reach people without the language of Christianity itself. And if you don't understand those insights, then you don't know how to avoid incorporating the toxic parts of Christianity.
I think this doesn't work for people with IQ <= 100, which is about half the world. I agree that an understanding of these insights is necessary to avoid incorporating the toxic parts of Christianity, but I think this can be done even using the language of Christianity. (There's a lot of latitude in how one can interpret the Bible!)
Perhaps these concerns would be addressed by examples of the kind of statement you have in mind.
I'm not sure exactly what you're asking -- I wonder how much my reply to Adam Shai addresses your concerns?
I will also mention this quote from the category theorist Lawvere, whose line of thinking I feel pretty aligned with:
It is my belief that in the next decade and in the next century the technical advances forged by category theorists will be of value to dialectical philosophy, lending precise form with disputable mathematical models to ancient philosophical distinctions such as general vs. particular, objective vs. subjective, being vs. becoming, space vs. quantity, equality vs. difference, quantitative vs. qualitative etc. In turn the explicit attention by mathematicians to such philosophical questions is necessary to achieve the goal of making mathematics (and hence other sciences) more widely learnable and useable. Of course this will require that philosophers learn mathematics and that mathematicians learn philosophy.
I think getting technical precision on philosophical concepts like these will play a crucial role in the kind of math I'm envisioning.
I'm not sure how much this answers your question, but:
- I actually think Buddhism's metaphysics is quite well-fleshed-out, and AFAIK has the most fleshed-out metaphysical system out of all the religious traditions. I think it would be sufficient for my goals to find a formalization of Buddhist metaphysics, which I think would be detailed and granular enough to transcend and include the metaphysics of other religious traditions.
- I think a lot of Buddhist claims can be described in the predictive processing framework -- see e.g. this paper giving a predictive processing account of no-self, and this paper giving a predictive processing account of non-dual awareness in terms of temporal depth. I would consider these papers small progress towards the goal, insofar as they give (relatively) precise computational accounts of some of the principles at play in Buddhism.
- I don't think the mathematical foundations of predictive processing or active inference are very satisfactory yet, and I think there are aspects of Buddhist metaphysics that are not possible to represent in terms of these frameworks yet. Chris Fields (a colleague of Michael Levin and Karl Friston) has some research that I think extends active inference in directions that seem promising for putting active inference on sounder mathematical footing. I haven't looked too carefully into his work, but I've been impressed by him the one time I talked with him, and I think it's plausible that his research would qualify as small progress toward the goal.
- I've considered any technical model that illustrates the metaphysical concepts of emptiness and dependent origination (which are essentially the central concepts of Buddhism) to be small progress towards the goal. Some examples of this:
- In his popular book Helgoland, Carlo Rovelli directly compares the core ideas of relational quantum mechanics to dependent origination (see also the Stanford Encyclopedia of Philosophy page on RQM).
- It can be very hard to wrap one's mind around how the Buddhist concept of emptiness could apply for the natural numbers. I found John Baez's online dialogue about how "'the' standard model [of arithmetic] is a much more nebulous notion than many seem to believe" to be helpful for understanding this.
- Non-classical logics that don't take the law of excluded middle for granted helped me to make sense of the concept of emptiness in the context of truth values (see the toy sketch below); the Kochen-Specker theorem helped me make sense of the concept of emptiness in the context of physical properties; and this paper giving a topos perspective on the Kochen-Specker theorem helped me put the two together.
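To make the first of these concrete, here's a minimal sketch (my own toy example, using Kleene's strong three-valued logic rather than anything specific to Buddhist sources) of how P ∨ ¬P stops being a tautology once a third truth value is allowed:

```python
# Kleene's strong three-valued logic, encoded numerically:
# F = 0.0, U = 0.5 ("undetermined"), T = 1.0.
T, U, F = 1.0, 0.5, 0.0

def not_(p):
    return 1.0 - p

def or_(p, q):
    return max(p, q)

# The law of excluded middle, P or not-P, is T when P is determinate,
# but only U when P itself is undetermined:
for p in (T, U, F):
    print(p, or_(p, not_(p)))  # 1.0 -> 1.0, 0.5 -> 0.5, 0.0 -> 1.0
```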
It's relevant that I think of the type signature of religious metaphysical claims as being more like "informal descriptions of the principles of consciousness / the inner world" (analogously to informal descriptions of the principles of the natural world) than like "ideology or narrative". Lots of cultures independently made observations about the natural world, and Newton's Laws in some sense could be thought of as a "Rosetta Stone" for these informal observations about the natural world.
Yeah, I also see broad similarities between my vision and that of the Meaning Alignment people. I'm not super familiar with the work they're doing, but I'm pretty positive on the little bits of it I've encountered. I'd say that our main difference is that I'm focusing on ungameable preference synthesis, which I think will be needed to robustly beat Moloch. I'm glad they're doing what they're doing, though, and I wouldn't be shocked if we ended up collaborating at some point.
Thanks for the elaboration. Your distinction about creating vs reconciling preferences seems to hinge on the distinction between "ur-want" and "proper want". I'm not really drawing a type-level distinction between "ur-want" and "proper want", and think of each flower as itself being a flowerbud that could further bloom. In my example of Alice wanting X, Bob wanting Y, and Carol proposing Z, I'd thought of X and Y as both "proper wants" and "ur-wants that bloomed into Z".
Thanks, this really warmed my heart to read :) I'm glad you appreciated all those details!
I don't really get how what you just said relates to creating vs reconciling preferences. Can you elaborate on that a bit more?
I'm not sure how you're interpreting the distinction between creating a preference vs reconciling a preference.
Suppose Alice wants X and Bob wants Y, and X and Y appear to conflict, but Carol shows up and proposes Z, which Alice and Bob both feel like addresses what they'd initially wanted from X and Y. Insofar as Alice and Bob both prefer Z over X and Y and hadn't even considered Z beforehand, in some sense Carol created this preference for them; but I also think of this preference for Z as reconciling their conflicting preferences X and Y.
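A toy rendering of how I'm using the two words (hypothetical numbers, purely illustrative):

```python
# Alice and Bob's initial options conflict: only one of X, Y can happen.
alice = {"X": 1.0, "Y": 0.0}
bob   = {"X": 0.0, "Y": 1.0}

# Carol proposes Z, an option neither had considered. After seeing it,
# both rank it above their original favorites:
alice["Z"], bob["Z"] = 1.5, 1.5

# "Created": Z wasn't in anyone's option set before Carol proposed it.
# "Reconciling": Z resolves the X-vs-Y conflict as a Pareto improvement.
assert all(u["Z"] > max(u["X"], u["Y"]) for u in (alice, bob))
```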
People sometimes say that AGI will be like a second species; sometimes like electricity. The truth, we suspect, lies somewhere in between. Unless we have concepts which let us think clearly about that region between the two, we may have a difficult time preparing.
I just want to strongly endorse this remark made toward the end of the post. In my experience, the standard fears and narratives around AI doom invoke "second species" intuitions that I think stand on much shakier ground than is commonly acknowledged. (Things can still get pretty bad without a "second species", of course, but I think it's worth thinking clearly about what those alternatives look like, as well as how to think about them in the first place.)
Thanks, Alex. Any connections between this and CTMU? (I'm in part trying to evaluate CTMU by looking at whether it has useful implications for an area that I'm relatively familiar with.)
No direct connections that I'm aware of (besides non-classical logics being generally helpful for understanding the sorts of claims the CTMU makes).
Re: point 7, I found Jessica Taylor's take on counterfactuals in terms of linear logic pretty compelling.
Good question! Yeah, there's nothing fundamentally quantum about this effect. But if the simulator wants to focus on universes with 1 & 2 fixed (e.g. if they're trying to calculate the distribution of superintelligences across Tegmark IV), the PRNG (along with the initial conditions of the universe) seems like a good place for a simulator to tweak things.
It is not clear to me that this would result in a lower Kolmogorov complexity at all. Such an algorithm could of course use a pseudo-random number generator for the vast majority of quantum events which do not affect p(ASI) (like the creation of CMB photons), but this is orthogonal to someone nudging the relevant quantum events towards ASI. For these relevant events, I am not sure that the description "just do whatever favors ASI" is actually shorter than just the sequence of events.
Hmm, I notice I may have been a bit unclear in my original post. When I'd said "pseudorandom", I wasn't referring to the use of a pseudo-random number generator instead of a true RNG. I was referring to the "transcript" of relevant quantum events only appearing random, without being "truly random", because of the way in which they were generated (which I think is better described as "sampled from a space parameterizing the possible ways the world could be, conditional on humanity building superintelligence" than as "close to truly random, or generated by a PRNG, except with nudges toward ASI").
I mean, if we are simulated by a Turing Machine (which is equivalent to quantum events having a low Kolmogorov complexity), then a TM which just implements the true laws of physics (and cheats with a PRNG, not like the inhabitants would ever notice) is surely simpler than one which tries to optimize towards some distant outcome state.
As an analogy, think about the Kolmogorov complexity of a transcript of a very long game of chess. If both opponents are following a simple algorithm of "determine the allowed moves, then use a PRNG to pick one of them", that should have a bound complexity. If both are chess AIs which want to win the game (i.e. optimize towards a certain state) and use a deterministic PRNG (lest we are incompressible), the size of your Turing Machine -- which /is/ the Kolmogorov complexity -- just explodes.
Wouldn't this also serve as an argument against malign consequentialists in the Solomonoff prior, that may make it a priori more likely for us to end up in a world with particular outcomes optimized in their favor?
It is not clear to me that this would result in a lower Kolmogorov complexity at all.
[...]
Look at me rambling about universe-simulating TMs. Enough, enough.
To be clear, it's also not clear to me that this would result in a lower K-complexity either. My main point is that (1) the null hypothesis of quantum events being independent of consciousness rests on assumptions (like assumptions about what the Solomonoff prior is like) that I think are actually pretty speculative, and that (2) there are speculative ways the Solomonoff prior could be in which our consciousness can influence quantum outcomes.
My goal here is not to make a positive case for consciousness affecting quantum outcomes, as much as it is to question the assumptions behind the case against the world working that way.
This. Physics runs on falsifiable predictions. If 'consciousness can affect quantum outcomes' is any more true than the classic 'there is an invisible dragon in my garage', then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. weak source+detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few hundred dollars.
Yes, I'm also bearish on consciousness affecting quantum outcomes in ways as overt and measurable as the ones you're gesturing at. The only thing I was arguing in this post is that the effect size of consciousness on quantum outcomes is maybe more than zero, as opposed to obviously exactly zero. I don't think I've made any arguments that the effect size should be non-negligible, but I also don't think we've ruled out non-negligible effect sizes lying somewhere between "completely indistinguishable from no influence at all" and "overt and measurable enough that a proclaimed psychic could reproducibly affect quantum RNG outcomes".
I'll take a stab at this. Suppose we had strong a priori reasons for thinking it's in our logical past that we'll have created a superintelligence of some sort. Let's suppose that some particular quantum outcome in the future can get chaotically amplified, so that in one Everett branch humanity never builds any superintelligence because of some sort of global catastrophe (say with 99% probability, according to the Born rule), and in some other Everett branch humanity builds some kind of superintelligence (say with 1% probability, according to the Born rule). Then we should expect to end up in the Everett branch in which humanity builds some kind of superintelligence with ~100% probability, despite the Born rule saying we only have a 1% chance of ending up there, because the "99%-likely" Everett branch was ruled out by our a priori reasoning.
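Spelling out the arithmetic (a minimal sketch, treating the a priori reasoning as conditioning on the event that superintelligence gets built):

$$P(\text{SI branch} \mid \text{SI built}) = \frac{0.01 \cdot 1}{0.01 \cdot 1 + 0.99 \cdot 0} = 1$$

So the Born weights only matter through their relative magnitudes among the branches that survive the conditioning.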
I'm not sure if this is the kind of concrete outcome that you're asking for. I imagine that, for the most part, the kind of universe I'm describing will still yield frequencies that converge on the Born probabilities, and for the most part appear indistinguishable from a universe in which quantum outcomes are "truly random". See my reply to Joel Burget for some more detail about how I think about this hypothesis.
If we performed a trillion 50/50 quantum coin flips, and found a program with K-complexity far less than a trillion that could explain these outcomes, that would be an example of evidence in favor of this hypothesis. (I don't think it's very likely that we'll be able to find a positive result if we run that particular experiment; I'm naming it more to illustrate the kind of thing that would serve as evidence.) (EDIT: This would only serve as evidence against quantum outcomes being truly random. In order for it to serve as evidence in favor of quantum outcomes being impacted by consciousness, the low K-complexity program explaining these outcomes would need to route through the decisions of conscious beings somehow; it wouldn't work if the program were just printing out digits of pi in binary, for example.)
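For intuition on the shape of that test (a minimal sketch of the "not truly random" half only, per the EDIT above; any compressor gives a computable upper bound on K-complexity, so dramatic compressibility would falsify "truly random", though failing to compress proves nothing):

```python
import os
import zlib

# If a long "quantum coin flip" transcript compresses far below its raw
# length, it cannot be algorithmically random.
def compressed_fraction(data: bytes) -> float:
    return len(zlib.compress(data, 9)) / len(data)

random_like = os.urandom(125_000)                   # stand-in transcript
structured  = bytes(i % 2 for i in range(125_000))  # highly regular one

print(compressed_fraction(random_like))  # ~1.0: incompressible
print(compressed_fraction(structured))   # ~0.005: vastly compressible
```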
My inside view doesn't currently lead me to put much credence on this picture of reality actually being true. My inside view is more like "huh, I notice I have become way more uncertain about the a priori arguments about what kind of universe we live in -- especially the arguments that we live in a universe in which quantum outcomes are supposed to be 'truly random' -- so I will expand my hypothesis space for what kinds of universes we might be living in".
Shortly after publishing this, I discovered something written by John Wheeler (whom Chris Langan cites) that feels thematically relevant. From Law Without Law:
I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.
I finally wrote one up! It ballooned into a whole LessWrong post.
It seems if I only read the main text, the obvious interpretation is that points are events and the circles restrict which other events they can interact with.
This seems right to me, as far as I can tell, with the caveat that "restrict" (/ "filter") and "construct" are two sides of the same coin, as per constructive-filtrative duality.
From the diagram text, it seems he is instead saying that each circle represents entangled wavefunctions of some subset of objects that generated the circle.
I think each circle represents the entangled wavefunctions of all of the objects that generated the circle, not just some subset.
Relatedly, you talk about "the" wave function in a way that connotes a single universal wave function, like in many-worlds. I'm not sure if this is what you're intending, but it seems plausible that the way you're imagining things is different from how my model of Chris is imagining things, which is as follows: if there are N systems that are all separable from one another, we could write a universal wave function for these N systems that we could factorize as ψ_1 ⊗ ψ_2 ⊗ ... ⊗ ψ_N, and there would be N inner expansion domains (/ "circles"), one for each ψ_i, and we can think of each ψ_i as being "located within" each of the circles.
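A minimal numpy sketch of the distinction I'm drawing (my own gloss, not Chris's notation):

```python
import numpy as np

# Two qubits. A separable joint state factorizes as a Kronecker product,
# so each factor could get its own "circle"; an entangled state doesn't
# factorize, so its objects must share one circle.
psi_1 = np.array([1.0, 0.0])        # |0>
psi_2 = np.array([0.0, 1.0])        # |1>
separable = np.kron(psi_1, psi_2)   # psi_1 ⊗ psi_2

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/√2

# A two-qubit pure state factorizes iff its 2x2 coefficient matrix has rank 1.
def is_separable(state: np.ndarray) -> bool:
    return np.linalg.matrix_rank(state.reshape(2, 2)) == 1

print(is_separable(separable))  # True
print(is_separable(bell))       # False -- entangled
```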
Great. Yes, I think that's the thing to do. Start small! I (and presumably others) would update a lot from a new piece of actual formal mathematics from Chris's work. Even if that work was, by itself, not very impressive.
(I would also want to check that that math had something to do with his earlier writings.)
I think we're on exactly the same page here.
Please be prepared for the possibility that Chris is very smart and creative, and that he's had some interesting ideas (e.g. Syndiffeonesis), but that his framework is more of an interlocked collection of ideas than anything mathematical (despite using terms from mathematics). Litany of Tarsky and all that.
That's certainly been a live hypothesis in my mind as well, that I don't think can be ruled out before I personally see (or produce) a piece of formal math (that most mathematicians would consider formal, lol) that captures the core ideas of the CTMU.
So Chris either (i) doesn't realize that you need to be precise to communicate with mathematicians, or (ii) doesn't understand how to be precise.
While I agree that there isn't very much explicit and precise mathematical formalism in the CTMU papers themselves, my best guess is that (iii) Chris does unambiguously gesture at a precise structure he has in mind, assuming a sufficiently thorough understanding of the background assumptions in his document (which I think is a false assumption for most mathematicians reading this document). By analogy, it seems plausible to me that Hegel was gesturing at something quite precise in some of his philosophical works, that only got mathematized nearly 200 years later by category theorists. (I don't understand any Hegel myself, so take this with a grain of salt.)
Except, I can already predict you're going to say that no piece of his framework can be understood without the whole. Not even by making a different smaller framework that exists just to showcase the well-ordering alternative. It's a little suspicious.
False! :P I think no part of his framework can be completely understood without the whole, but I think the big pictures of some core ideas can be understood in relative isolation. (Like syndiffeonesis, for example.) I think this is plausibly true for his alternatives to well-ordering as well.
If you're going to fund someone to do something, it should be to formalize Chris's work. That would not only serve as a BS check, it would make it vastly more approachable.
I'm very on board with formalizing Chris's work, both to serve as a BS check and to make it more approachable. I think formalizing it in full will be a pretty nontrivial undertaking, but formalizing isolated components feels tractable, and is in fact where I'm currently directing a lot of my time and funding.
"gesture at something formal" -- not in the way of the "grammar" it isn't. I've seen rough mathematics and proof sketches, especially around formal grammars. This isn't that, and it isn't trying to be.
[...]
Nonsense! If Chris has an alternative to well-ordering, that's of general mathematical interest! He would make a splash simply writing that up formally on its own, without dragging the rest of his framework along with it.
My claim was specifically around whether it would be worth people's time to attempt to decipher Chris's written work, not whether there's value in Chris's work that's of general mathematical interest. If I succeed at producing formal artifacts inspired by Chris's work, written in a language that is far more approachable for general academic audiences, I would recommend for people to check those out.
That said, I am very sympathetic to the question "If Chris has such good ideas that he claims he's formalized, why hasn't he written them down formally -- or at least gestured at them formally -- in a way that most modern mathematicians or scientists can recognize? Wouldn't that clearly be in his self-interest? Isn't it pretty suspicious that he hasn't done that?"
My current understanding is that he believes that his current written work should be sufficient for modern mathematicians and scientists to understand his core ideas, and insofar as they reject his ideas, it's because of some combination of them not being intelligent and open-minded enough, which he can't do much about. I think his model is... not exactly false, but is also definitely not how I would choose to characterize most smart people who are skeptical of Chris.
To understand why Chris thinks this way, it's important to remember that he had never been acculturated into the norms of the modern intellectual elite -- he grew up in the midwest, without much affluence; he had a physically abusive stepfather, whom he eventually kicked out of his home after taking up weightlifting; he was expelled from college for bureaucratic reasons, which pretty much ended his relationship with academia (IIRC); he mostly worked blue-collar jobs throughout his adult life; AND he may actually have been smarter than almost anybody he'd ever met or heard of. (Try picturing what von Neumann may have been like if he'd had the opposite of a prestigious and affluent background, and had gotten spurned by most of the intellectuals he'd talked to.) Among other things, Chris hasn't had very many intellectual peers who could gently inform him that many portions of his written work that he considers totally obvious and straightforward are actually not at all obvious for a majority of his intended audience.
On the flip side, I think this means there's a lot of low-hanging fruit in translating Chris's work into something more digestible by the modern intellectual elite.
I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.
Gotcha! I'm happy to do that in a followup comment.
I'd categorize this section as "not even wrong"; it isn't doing anything formal enough to have a mistake in it.
I think it's an attempt to gesture at something formal within the framework of the CTMU that I think you can only really understand if you grok enough of Chris's preliminary setup. (See also the first part of my comment here.)
(Perhaps you'd run into issues with making the sets well-ordered, but if so he's running headlong into the same issues.)
A big part of Chris's preliminary setup is around how to sidestep the issues around making the sets well-ordered. What I've picked up in my conversations with Chris is that part of his solution involves mutually recursively defining objects, relations, and processes, in such a way that they all end up being "bottomless fractals" that cannot be fully understood from the perspective of any existing formal frameworks, like set theory. (Insofar as it's valid for me to make analogies between the CTMU and ZFC, I would say that these "bottomless fractals" violate the axiom of foundation, because they have downward infinite membership chains.)
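For a cheap intuition pump (my own analogy, not Chris's construction): ZFC's axiom of foundation forbids exactly the kind of self-membership that ordinary mutable data structures allow.

```python
# Foundation rules out x ∈ x and any infinite descending membership
# chain. A Python list can violate the analogous property directly,
# giving a crude picture of a "bottomless" object:
x = []
x.append(x)             # now x contains itself: x ∈ x ∈ x ∈ ...
print(x in x)           # True -- a downward-infinite membership chain
print(x[0][0][0] is x)  # True at every depth
```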
I'm really not seeing any value in this guy's writing. Could someone who got something out of it share a couple specific insights that got from it?
I think Chris's work is most valuable to engage with for people who have independently explored philosophical directions similar to the ones Chris has explored; I don't recommend for most people to attempt to decipher Chris's work.
I'm confused why you're asking about specific insights people have gotten when Jessica has included a number of insights she's gotten in her post (e.g. "He presents a number of concepts, such as syndiffeonesis, that are useful in themselves.").
Thanks a lot for posting this, Jessica! A few comments:
It's an alternative ontology, conceiving of reality as a self-processing language, which avoids some problems of more mainstream theories, but has problems of its own, and seems quite underspecified in the document despite the use of formal notation.
I think this is a reasonable take. My own current best guess is that the contents of the document uniquely specifies a precise theory, but that it's very hard to understand what's being specified without grokking the details of all the arguments he's using to pin down the CTMU. I partly believe this because of my conversations with Chris, and I partly believe this because someone else I'd funded to review Chris's work (who had extensive prior familiarity with the kinds of ideas and arguments Chris employs) managed to make sense of most of the CTMU (including the portions using formal notation) based on Chris's written work alone, in a way that Chris has vetted over the course of numerous three-way Zoom calls.
In particular, I doubt that conspansion solves quantum locality problems as Langan suggests; conceiving of the wave function as embedded in conspanding objects seems to neglect correlations between the objects implied by the wave function, and the appeal to teleology to explain the correlations seems hand-wavey.
I'm actually not sure which quantum locality problems Chris is referring to, but I don't think the thing Chris means by "embedding the wave function in conspanding objects" runs into the problems you're describing. Insofar as one object is correlated with others via quantum entanglement, I think those other objects would occupy the same circle. From the subtext of Diagram 11 on page 28: "The result is a Venn diagram in which circles represent objects and events, or (n>1)-ary interactive relationships of objects. That is, each circle depicts the 'entangled quantum wavefunctions' of the objects which interacted with each other to generate it."
In particular, I think this manifests in part as an extreme lack of humility.
I just want to note that, based on my personal interactions with Chris, I experience Chris's "extreme lack of humility" similarly to how I experience Eliezer's "extreme lack of humility":
- in both cases, I think they have plausibly calibrated beliefs about having identified certain philosophical questions that are of crucial importance to the future of humanity, that most of the world is not taking seriously,[1] leading them to feel a particular flavor of frustration that people often interpret as an extreme lack of humility
- in both cases, they are in some senses incredibly humble in their pursuit of truth, doing their utmost to be extremely honest with themselves about where they're confused
It feels worth noting that Chris Langan wrote about Newcomb's paradox in 1989, and that his resolution involves thinking in terms of being in a simulation, similarly to what Andrew Critch has written about.
I've spent 40+ hours talking with Chris directly, and for me, a huge part of the value also comes from seeing how Chris synthesizes all these ideas into what appears to be a coherent framework.
Here's my current understanding of what Scott meant by "just a little off".
I think exact Bayesian inference via Solomonoff induction doesn't run into the trapped prior problem. Unfortunately, bounded agents like us can't do exact Bayesian inference via Solomonoff induction, since we can only consider a finite set of hypotheses at any given point. I think we try to compensate for this by recognizing that this list of hypotheses is incomplete, and appending it with new hypotheses whenever it seems like our current hypotheses are doing a sufficiently terrible job of explaining the input data.
One side effect is that if the true hypothesis (e.g. "polar bears are real") is not among our currently considered hypotheses, but our currently considered hypotheses are doing a sufficiently non-terrible job of explaining the input data (e.g. if the hypothesis "polar bears aren't real, but there's a lot of bad evidence suggesting that they are" is included, and the data is noisy enough that this hypothesis is reasonable), we just never even end up considering the true hypothesis. There wouldn't be accumulating likelihood ratios in favor of polar bears, because actual polar bears were never considered in the first place.
I think something similar is happening with phobias. For example, for someone with a phobia of dogs, I think the (subconscious, non-declarative) hypothesis "dogs are safe" doesn't actually get considered until the subject is well into exposure therapy, after which they've accumulated enough evidence that's sufficiently inconsistent with their prior hypotheses of dogs being scary and dangerous that they start considering alternative hypotheses.
In some sense this algorithm is "going out of its way to do something like compartmentalization", in that it's actively trying to fit all input data into its current hypotheses (/ "compartments") until this method no longer works.
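Here's a minimal sketch of the kind of algorithm I have in mind (all names and thresholds are illustrative, not a claim about neural implementation):

```python
import math

class BoundedBayesian:
    """Exact updating over a finite hypothesis list, plus a heuristic
    that only appends new hypotheses once the current ones are doing a
    sufficiently terrible job of explaining the data."""

    def __init__(self, hypotheses, surprise_threshold=20.0):
        # hypotheses: dict mapping name -> likelihood function P(obs | h)
        self.hypotheses = dict(hypotheses)
        self.log_posterior = {name: 0.0 for name in hypotheses}  # uniform prior
        self.surprise = 0.0
        self.surprise_threshold = surprise_threshold

    def update(self, obs):
        for name, likelihood in self.hypotheses.items():
            self.log_posterior[name] += math.log(likelihood(obs))
        # Surprise grows when even the best current hypothesis finds
        # the observation unlikely.
        best = max(h(obs) for h in self.hypotheses.values())
        self.surprise += -math.log(best)

    def maybe_expand(self, propose_hypothesis):
        # The "polar bears are real" / "dogs are safe" hypothesis never
        # even gets scored until this fires.
        if self.surprise > self.surprise_threshold:
            name, likelihood = propose_hypothesis()
            self.hypotheses[name] = likelihood
            self.log_posterior[name] = 0.0
            self.surprise = 0.0
```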
Yep! I addressed this point in footnote [3].
I just want to share another reason I find this n=1 anecdote so interesting -- I have a highly speculative inside view that the abstract concept of self provides a cognitive affordance for intertemporal coordination, resulting in a phase transition in agentiness only known to be accessible to humans.
Hmm, I'm not sure I understand what point you think I was trying to make. The only case I was trying to make here was that much of our subjective experience which may appear uniquely human might stem from our language abilities, which seems consistent with Helen Keller undergoing a phase transition in her subjective experience upon learning a single abstract concept. I'm not getting what age has to do with this.
Questions #2 and #3 seem positively correlated – if the thing that humans have is important, it's evidence that architectural changes matter a lot.
Not necessarily. For example, it may be that language ability is very important, but that most of the heavy lifting in our language ability comes from general learning abilities + having a culture that gives us good training data for learning language, rather than from architectural changes.
I remembered reading about this a while back and updating on it, but I'd forgotten about it. I definitely think this is relevant, so I'm glad you mentioned it -- thanks!