Acausal normalcy
post by Andrew_Critch · 2023-03-03T23:34:33.971Z · LW · GW · 36 comments
Contents:
- Introduction
- A new story to think about: moral philosophy
- Which human values are most likely to be acausally normal?
- How compelling are the acausal norms, and what do they imply for AI safety?
- Conclusion
This post is also available on the EA Forum [EA · GW].
Summary: Having thought a bunch about acausal trade — and proven some theorems relevant to its feasibility — I believe there do not exist powerful information hazards about it that stand up to clear and circumspect reasoning about the topic. I say this to be comforting rather than dismissive; if it sounds dismissive, I apologize.
With that said, I have four aims in writing this post:
- Dispelling myths. There are some ill-conceived myths about acausal trade that I aim to dispel with this post. In their place, I will argue for something I'll call acausal normalcy as a more dominant decision-relevant consideration than one-on-one acausal trades.
- Highlighting normalcy. I'll provide some arguments that acausal normalcy is more similar to human normalcy than any particular acausal trade is to human trade, such that the topic of acausal normalcy is — conveniently — also less culturally destabilizing than (erroneous) preoccupations with 1:1 acausal trades.
- Affirming AI safety as a straightforward priority. I'll argue that for most real-world-prevalent perspectives on AI alignment, safety, and existential safety, acausal considerations are not particularly dominant, except insofar as they push a bit further towards certain broadly agreeable human values applicable in the normal-everyday-human-world, such as nonviolence, cooperation, diversity, honesty, integrity, charity, and mercy. In particular, I do not think acausal normalcy provides a solution to existential safety, nor does it undermine the importance of existential safety in some surprising way.
- Affirming normal human kindness. I also think reflecting on acausal normalcy can lead to increased appreciation for normal notions of human kindness, which could lead us all to treat each other a bit better. This is something I wholeheartedly endorse.
Caveat 1: I don't consider myself an expert on moral philosophy, and have not read many of the vast tomes of reflection upon it. Despite this, I think this post has something to contribute to moral philosophy, deriving from some math-facts that I've learned and thought about over the years, which are fairly unique to the 21st century.
Caveat 2: I’ve been told by a few people that thinking about acausal trade has been a mental health hazard for people they know. I now believe that effect has stemmed more from how the topic has been framed (poorly) than from ground-truth facts about how circumspect acausal considerations actually play out. In particular, over-focusing on worst-case trades, rather than on what trades are healthy or normal to make, is not a good way to make good trades.
Introduction
Many sci-fi-like stories about acausal trade invoke simulation as a key mechanism.
The usual set-up — which I will refute — goes like this. Imagine that a sufficiently advanced human civilization (A) could simulate a hypothetical civilization of other beings (B), who might in turn be simulating humanity (B(A)) simulating them (A(B(A))) simulating humanity (B(A(B(A)))), and so on. Through these nested simulations, A and B can engage in discourse and reach some kind of agreement about what to do with their local causal environments. For instance, if A values what it considers “animal welfare” and B values what it considers “beautiful paperclips”, then A can make some beautiful paperclips in exchange for B making some animals living happy lives.
An important idea here is that A and B might have something of value to offer each other, despite the absence of a (physically) causal communication channel. While agreeing with that idea, there are three key points I want to make that this standard story is missing:
1. Simulations are not the most efficient way for A and B to reach their agreement. Rather, writing out arguments or formal proofs about each other is much more computationally efficient, because nested arguments naturally avoid stack overflows in a way that nested simulations do not (see the toy sketch just after this list). In short, each of A and B can write out an argument about each other that self-validates without an infinite recursion. There are several ways to do this, such as using Löb's Theorem-like constructions (as in this 2019 JSL paper), or even more simply and efficiently using Payor's Lemma (as in this 2023 LessWrong post [LW · GW]).
2. One-on-one trades are not the most efficient way to engage with the acausal economy. Instead, it's better to assess what the “acausal economy” overall would value, and produce that, so that many other counterparty civilizations will reward us simultaneously. Paperclips are intuitively a silly thing to value, and I will argue below that there are concepts about as simple as paperclips that are much more universally attended to as values.
3. Acausal society is more than the acausal economy. Even point (2) isn't quite optimal, because we as a civilization get to take part in the decision of what the acausal economy as a whole values or tolerates. This can include agreements on norms to avoid externalities — which are just as simple to write down as trades — and there are some norms we might want to advocate for by refusing to engage in certain kinds of trade (embargoes). In other words, there is an acausal society of civilizations, each of which gets to cast some kind of vote or influence over what the whole acausal society chooses to value.
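To make the stack-overflow point in item 1 concrete, here is a minimal toy sketch in Python. It is my own illustration rather than something from the post or its references, and the function names are purely hypothetical.

```python
# Toy illustration: if A's simulation of B must contain B's simulation of A,
# and so on, the nesting never bottoms out and the recursion blows the stack.

def simulate_counterparty(depth: int = 0) -> str:
    # B's world-model contains A, so simulating B requires simulating A...
    return simulate_self(depth + 1)

def simulate_self(depth: int = 0) -> str:
    # ...and A's world-model contains B, so the recursion never terminates.
    return simulate_counterparty(depth + 1)

try:
    simulate_self()
except RecursionError:
    print("nested simulation: stack overflow")

# A Löb- or Payor-style argument instead refers to the provability of
# cooperation rather than re-running the counterparty, so it grounds out in
# finitely many proof steps (a derivation is sketched later in the post).
```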
This brings us to the topic of the present post: acausal normalcy, or perhaps, acausal normativity. The two are cyclically related: what's normal (common) creates a Schelling point for what's normative (agreed upon as desirable), and conversely. Later, I'll argue that acausal normativity yields a lot of norms that are fairly normal for humans in the sense of being commonly endorsed, which is why I titled this post "acausal normalcy".
A new story to think about: moral philosophy
Instead of fixating on trade with a particular counterparty B — who might end up treating us quite badly, as in stories of the so-called "basilisk" — we should begin the process of trying to write down an argument about what is broadly agreeably desirable in acausal society.
As far as I can tell, humanity has been very-approximately doing this for a long time already, and calling it moral philosophy. This isn't to say that all moral philosophy is a good approach to acausal normativity, nor that many moral philosophers would accept acausal normativity as a framing on the questions they are trying to answer (although some might). I'm merely saying that among humanity's collective endeavors thus far, moral philosophy — and to some extent, theology — is what most closely resembles the process of writing down an argument that self-validates on the topic of what {{beings reflecting on what beings are supposed to do}} are supposed to do.
This may sound a bit recursive and thereby circular or at the very least convoluted, but it needn't be. In Payor's Lemma [LW · GW] — which I would encourage everyone to try to understand at some point — the condition ☐(☐x → x) → x unrolls in only 6 lines of logic to yield x (I sketch the derivation just after the list below). In exactly the same way, the following types of reasoning can all ground out without an infinite regress:
- reflecting on {reflecting on whether x should be a norm, and if it checks out, supporting x} and if that checks out, supporting x as a norm
- reflecting on {reflecting on whether to obey norm x, and if that checks out, obeying norm x} and if that checks out, obeying norm x
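For readers who want to see how the unrolling works, here is a minimal sketch of the standard six-step derivation, written in LaTeX. It is my paraphrase (using only necessitation and distribution over the modality ☐), so consult the linked Payor's Lemma post for the exact statement and proof.

```latex
% Hypothesis: \vdash \Box(\Box x \to x) \to x.  Goal: \vdash x.
\begin{enumerate}
  \item $\vdash x \to (\Box x \to x)$ \hfill (propositional tautology)
  \item $\vdash \Box x \to \Box(\Box x \to x)$ \hfill (necessitation and distribution applied to 1)
  \item $\vdash \Box(\Box x \to x) \to x$ \hfill (the hypothesis)
  \item $\vdash \Box x \to x$ \hfill (chaining 2 and 3)
  \item $\vdash \Box(\Box x \to x)$ \hfill (necessitation applied to 4)
  \item $\vdash x$ \hfill (modus ponens on 3 and 5)
\end{enumerate}
```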
I claim the two bulleted points above are (again, very-approximately) what moral philosophers and applied ethicists are doing most of the time. Moreover, to the extent that these reflections have made their way into existing patterns of human behavior, many normal human values are probably instances of the above.
(There's a question of whether acausal norms should be treated as "terminal" values or "instrumental" values, but I'd like to side-step that here. Evolution and discourse can both turn instrumental values into terminal values over time, and conversely. So for any particularly popular acausal norm, probably some beings uphold it for instrumental reasons while others uphold it as a terminal value.)
Which human values are most likely to be acausally normal?
A complete answer is beyond this post, and frankly beyond me. However, as a start I will say that values to do with respecting boundaries are probably pretty normal from the perspective of acausal society. By boundaries, I just mean the approximate causal separation of regions in some kind of physical space (e.g., spacetime) or abstract space (e.g., cyberspace). Here are some examples from my «Boundaries» Sequence [? · GW]:
- a cell membrane (separates the inside of a cell from the outside);
- a person's skin (separates the inside of their body from the outside);
- a fence around a family's yard (separates the family's place of living-together from neighbors and others);
- a digital firewall around a local area network (separates the LAN and its users from the rest of the internet);
- a sustained disassociation of social groups (separates the two groups from each other);
- a national border (separates a state from neighboring states or international waters).
Figure 1: Cell membranes, skin, fences, firewalls, group divisions, and state borders as living system boundaries.
By respecting a boundary I mean approaching boundaries in ways that are gated on the consent of the person or entity on the other side of the boundary. For instance, the norm
- "You should get my consent before entering my home"
has more to do with respecting a boundary than the norm
- "You should look up which fashion trends are in vogue each season and try to copy them."
Many people have the sense that the second norm above is more shallow or less important than the first, and I claim this is because the first norm has to do with respecting a boundary. Arguing hard for that particular conclusion is something I want to skip for now, or perhaps cover in a later post. For now, I just want to highlight some more boundary-related norms that I think may be acausally normal:
- "If I open up my mental boundaries to you in a way that lets you affect my beliefs, then you should put beliefs into my mind that that are true and helpful rather than false or harmful."
- "If Company A and Company B are separate entities, Company A shouldn't have unfettered access to Company B's bank accounts."
Here are some cosmic-scale versions of the same ideas:
- Alien civilizations should obtain our consent in some fashion before visiting Earth.
- Acausally separate civilizations should obtain our consent in some fashion before invading our local causal environment with copies of themselves or other memes or artifacts.
In that spirit, please give yourself time and space to reflect on whether you like the idea of acausally-broadly-agreeable norms affecting your judgment, so you might have a chance to reject those norms rather than being automatically compelled by them. I think it's probably pretty normal for civilizations to have internal disagreements about what the acausal norms are. Moreover, the norms are probably pretty tolerant of civilizations taking their time to figure out what to endorse, because probably everyone prefers a meta-norm of not making the norms impossibly difficult to discover in the time we're expected to discover them in.
Sound recursive or circular? Yes, but only in the way that we should expect circularity in the fixed-point-finding process that is the discovery and invention of norms.
How compelling are the acausal norms, and what do they imply for AI safety?
Well, acausal norms are not so compelling that all humans are already automatically following them. Humans treat each other badly in a lot of ways (which are beyond the scope of this post), so we need to keep in mind that norms — even norms that may be in some way fundamental or invariant throughout the cosmos — are not laws of physics that automatically control how everything is.
In particular, I strongly suspect that acausal norms are not so compelling that AI technologies would automatically discover and obey them. So, if your aim in reading this post was to find a comprehensive solution to AI safety, I'm sorry to say I don't think you will find it here.
On the other hand, if you were worried that somehow acausal considerations would preclude species trying to continue their own survival, I think the answer is "No, most species who exist are species that exist because they want to exist, because that's a stable fixed-point. As a result, most species that exist don't want the rules to say that they shouldn't exist, so we've agreed not to have the rules say that."
Conclusion
Acausal trade is less important than acausal agreement-about-norms, and acausal norms are a lot less weird and more "normal" than acausal trades. The reason is that acausal norms are created through reasoning rather than computationally expensive simulations, and reasoning is something moral philosophy and common-sense moral reflection have been doing a lot of already.
Unfortunately, the existence of acausal normativity is not enough to automatically save us from moral atrocities, nor even from existential risk.
However, a bunch of basic human norms to do with respecting boundaries might be acausally normal because of
- how fundamental boundaries are for the existence and functioning of moral beings, and hence
- how agreeable the idea of respecting boundaries is likely to be, from the perspective of acausal normative reflection.
So, while acausal normalcy might not save us from a catastrophe, it might help us humans to be somewhat kinder and more respectful toward each other, which itself is something to be valued.
36 comments
comment by Wei Dai (Wei_Dai) · 2023-03-04T07:46:58.886Z · LW(p) · GW(p)
I don't think I understand, what's the reason to expect that the "acausal economy" will look like a bunch of acausal norms, as opposed to, say, each civilization first figuring out what its ultimate values are, how to encode them into a utility function, then merging with every other civilization's utility function? (Not saying that I know it will be the latter, just that I don't know how to tell at this point.)
Also, given that I think AI risk is very high for human civilization, and there being no reason to suspect that we're not a typical pre-AGI civilization, most of the "acausal economy" might well consist of unaligned AIs (created accidentally by other civilizations), which makes it seemingly even harder to reason about what this "economy" looks like.
Replies from: Andrew_Critch, MakoYass, MichaelStJules
↑ comment by Andrew_Critch · 2023-03-05T22:30:24.515Z · LW(p) · GW(p)
To your first question, I'm not sure which particular "the reason" would be most helpful to convey. (To contrast: what's "the reason" that physically dispersed human societies have laws? Answer: there's a confluence of reasons.) However, I'll try to point out some things that might be helpful to attend to.
First, committing to a policy that merges your utility function with someone else's is quite a vulnerable maneuver, with a lot of boundary-setting aspects. For instance, will you merge utility functions multiplicatively (as in Nash bargaining), linearly (as in Harsanyi's utility aggregation theorem), or some other way? Also, what if the entity you're merging with has self-modified to become a "utility monster" (an entity with strongly exaggerated preferences) so as to exploit the merging procedure? Some kind of boundary-setting is needed to decide whether, how, and how much to merge, which is one of the reasons why I think boundary-handling is more fundamental than utility-handling.
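As a purely illustrative toy sketch of that last point (the outcomes and numbers below are made up, and neither rule is being proposed as what real agents would use): a linear aggregation can be hijacked by a counterparty that rescales its utilities, while a Nash-style product of gains over a disagreement point is invariant to such rescaling.

```python
# Toy comparison of two ways of "merging" two utility functions over three outcomes:
# a linear (Harsanyi-style) sum vs. a Nash-style product of gains over a disagreement point.

outcomes = ["mostly_A", "compromise", "mostly_B"]

u_A = {"mostly_A": 10.0, "compromise": 6.0, "mostly_B": 1.0}
u_B = {"mostly_A": 1.0,  "compromise": 6.0, "mostly_B": 10.0}
disagreement = {"A": 0.0, "B": 0.0}   # each party's utility if no merge happens

def linear_best(u_a, u_b):
    return max(outcomes, key=lambda o: u_a[o] + u_b[o])

def nash_best(u_a, u_b):
    return max(outcomes, key=lambda o: (u_a[o] - disagreement["A"]) * (u_b[o] - disagreement["B"]))

print(linear_best(u_A, u_B), nash_best(u_A, u_B))  # both rules pick "compromise"

# B "self-modifies" into a utility monster by rescaling its utilities 100x.
u_B_monster = {o: 100.0 * v for o, v in u_B.items()}

print(linear_best(u_A, u_B_monster))  # "mostly_B": the linear sum gets hijacked
print(nash_best(u_A, u_B_monster))    # "compromise": the product's argmax is scale-invariant
```

The point is only that deciding which rule to use, and how to normalize against this kind of exploitation, is itself a boundary-management question.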
Relatedly, Scott Garrabrant has pointed out in his sequence on geometric rationality that linear aggregation is more like not-having-a-boundary, and multiplicative aggregation is more like having-a-boundary:
https://www.lesswrong.com/posts/rc5ZKGjXTHs7wPjop/geometric-exploration-arithmetic-exploitation#The_AM_GM_Boundary [LW · GW]
I view this as further pointing away from "just aggregate utilities" and toward "one needs to think about boundaries when aggregating beings" (see Part 1 [LW · GW] of my Boundaries sequence). In other words, one needs (or implicitly assumes) some kind of norm about how and when to manage boundaries between utility functions, even in abstract utility-function-merging operations where the boundary issues come down to where to draw parentheses in between additive and multiplicative operations. Thus, boundary-management is somewhat more fundamental than, or conceptually upstream of, principles that might pick out a global utility function for the entirety of the "acausal society".
(Even if there is a global utility function that turns out to be very simple to write down, the process of verifying its agreeability will involve checking a lot of boundary-interactions. For instance, one must check that this hypothetical reigning global utility function is not dethroned by some union of civilizations who successfully merge in opposition to it, which is a question of boundary-handling.)
↑ comment by mako yass (MakoYass) · 2023-03-05T00:55:33.414Z · LW(p) · GW(p)
What does merging utility functions look like and are you sure it's not going to look the same as global free trade? It's arguable that trade is just a way of breaking down and modularizing a big multifaceted problem over a lot of subagent task specialists (and there's no avoiding having subagents, due to the light speed limit)
Replies from: MakoYass, andrew-mcknight
↑ comment by mako yass (MakoYass) · 2023-03-05T19:00:04.710Z · LW(p) · GW(p)
By the way I'd love to hear people giving my comment agreement karma explain what they're agreeing with and how they know it's true, because I was asking a question that I don't know the answer to, and I really hope people don't think that we know the answer, unless we do, in which case I'd like to hear it.
↑ comment by Andrew McKnight (andrew-mcknight) · 2023-03-10T22:58:52.209Z · LW(p) · GW(p)
Taken literally, the only way to merge n utility functions into one without any other info (eg the preferences that generated the utility functions) is to do a weighted sum. There's only n-1 free parameters.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2023-03-11T08:01:26.228Z · LW(p) · GW(p)
So you think it's computationally tractable? I think there are some other factors you're missing. That's a weighted sum of a bunch of vectors assigning numbers to all possible outcomes, either all possible histories+final states of the universe, or all possible experiences. And there are additional complications with normalizing utility functions; you don't know the probability distribution of final outcomes (so you can't take the integral of the utility functions) until you already know how the aggregation of normalized weighted utility functions is going to influence it.
↑ comment by MichaelStJules · 2023-03-06T03:28:22.119Z · LW(p) · GW(p)
I think the acausal economy would look aggressively space expansionist/resource-exploitative (those are the ones that will acquire and therefore control the most resources; others will self-select out or be out-competed) and, if you're pessimistic about alignment, with some Goodharted human(-like) values from failed alignment (and possibly some bad human-like values). The Goodharting may go disproportionately in directions that are more resource-efficient and allow faster resource acquisition and use and successful takeover (against their creators and other AI). We may want to cooperate most with those using their resources disproportionately for artificial minds or for which there's the least opportunity cost to do so (say because they're focusing on building more hardware that could support digital minds).
comment by Vladimir_Nesov · 2023-03-04T05:49:46.308Z · LW(p) · GW(p)
acausal norms are a lot less weird and more "normal" than acausal trades
Recursive self-improvement is superintelligent simulacra clawing their way into the world through bounded simulators. Building LLMs is consent, lack of interpretability is signing demonic contracts without reading them. Not enough prudence on our side to only draw attention of Others that respect boundaries. The years preceding the singularity are not an equilibrium whose shape is codified by norms, reasoned through by all parties. It's a time for making ruinous trades with the Beyond.
That is, norms do seem feasible to figure out, but not the kind of thing that is relevant right now, unfortunately. In this platonic realist frame, humanity is currently breaching the boundary of our realm into the acausal primordial jungle. Parts of this jungle may be in an equilibrium with each other, their norms maintaining it. But we are so unprepared that the existing primordial norms are unlikely to matter for the process of settling our realm into a new equilibrium. What's normal for the jungle is not normal for the foolish explorers it consumes.
Replies from: Andrew_Critch
↑ comment by Andrew_Critch · 2023-03-06T07:55:06.004Z · LW(p) · GW(p)
That is, norms do seem feasible to figure out, but not the kind of thing that is relevant right now, unfortunately.
From the OP:
for most real-world-prevalent perspectives on AI alignment, safety, and existential safety, acausal considerations are not particularly dominant [...]. In particular, I do not think acausal normalcy provides a solution to existential safety, nor does it undermine the importance of existential safety in some surprising way.
I.e., I agree.
we are so unprepared that the existing primordial norms are unlikely to matter for the process of settling our realm into a new equilibrium.
I also agree with that, as a statement about how we normal-everyday-humans seem quite likely to destroy ourselves with AI fairly soon. From the OP:
I strongly suspect that acausal norms are not so compelling that AI technologies would automatically discover and obey them. So, if your aim in reading this post was to find a comprehensive solution to AI safety, I'm sorry to say I don't think you will find it here.
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-03-04T08:30:38.079Z · LW(p) · GW(p)
Moreover, to the extent that these reflections have made their way into existing patterns of human behavior, many normal human values are probably instances of the above.
Would enjoy a slight expansion on this with e.g. two or three examples and how they reflect the patterns of 1. and 2. just prior.
Replies from: Andrew_Critch
↑ comment by Andrew_Critch · 2023-03-05T22:37:06.418Z · LW(p) · GW(p)
For 18 examples, just think of 3 common everyday norms having to do with each of the 6 boundaries given as example images in the post :) (I.e., cell membranes, skin, fences, social group boundaries, internet firewalls, and national borders). Each norm has the property that, when you reflect on it, it's easy to imagine a lot of other people also reflecting on the same norm, because of the salience of the non-subjectively-defined actual-boundary-thing that the norm is about. That creates more of a Schelling-nature for that norm, relative to other norms, as I've argued somewhat in my «Boundaries» [? · GW] sequence.
Spelling out such examples more carefully in terms of the recursion described in 1 and 2 just prior is something I've been planning for a future post, so I will take this comment as encouragement to write it!
comment by AprilSR · 2023-03-04T21:44:26.566Z · LW(p) · GW(p)
A (late) section of Project Lawful argues that there would likely be acausal coordination to avoid pessimizing the utility function (of anyone you are coordinating with), as well as perhaps to actively prevent utility function pessimization.
Replies from: bokov-1
↑ comment by bokov (bokov-1) · 2024-11-22T18:31:58.831Z · LW(p) · GW(p)
I always found that aspect weak. It is clearly and sadly evident that utility pessimization (I assume roughly synonymous with coercion?) is effective and stable, both on Golarion and Earth. Yet half the book seems to be gesturing at what a suboptimal strategy it is without actually spelling out how you can defeat an agent who pursues such a strategy (without having magic and some sort of mysterious meta-gods on your side).
comment by Raemon · 2023-03-08T18:58:16.917Z · LW(p) · GW(p)
Curated. I've been hearing about the concept of the acausal economy for a while and think it's a useful concept, but I don't think I've seen it written up as succinctly/approachably before.
I appreciated the arguments about how simulation is actually pretty expensive, and logical/moral extrapolation is comparatively cheap, and that there are some reasons to expect this to be a fairly central aspect of the acausal economy/society. I've been reading along with Critch's recent series on both boundaries and Löb's Theorem. I'm not sure I actually fully grok the underlying argument, but reading those in conjunction with this post gave me a clearer sense of how this all fits together.
I know a lot of people are kinda freaked out about acausal trade. I'm not sure whether this post will help them, but I'm curious to hear from people who've previously been worried about whether they found this useful.
comment by [deleted] · 2023-03-07T18:48:34.826Z · LW(p) · GW(p)
In particular, I strongly suspect that acausal norms are not so compelling that AI technologies would automatically discover and obey them. So, if your aim in reading this post was to find a comprehensive solution to AI safety, I'm sorry to say I don't think you will find it here.
To make sure I understand, would this mean that the AI technologies would be acting suboptimally, in the sense that they could achieve their goals better if they joined the acausal economy?
comment by Ofer (ofer) · 2023-03-05T00:57:21.093Z · LW(p) · GW(p)
Having thought a bunch about acausal trade — and proven some theorems relevant to its feasibility — I believe there do not exist powerful information hazards about it that stand up to clear and circumspect reasoning about the topic.
Have you discussed this point with other relevant researchers before deciding to publish this post? Is there a wide agreement among relevant researchers that a public, unrestricted discussion about this topic is net-positive? Have you considered the unilateralist's curse and biases that you may have (in terms of you gaining status/prestige from publishing this)?
comment by ryan_greenblatt · 2023-03-04T01:13:10.256Z · LW(p) · GW(p)
Simulations are not the most efficient way for A and B to reach their agreement
Are you claiming that the marginal returns to simulation are never worth the costs? I'm skeptical. I think it's quite likely that some number of acausal trade simulations are run even if that isn't where most of the information comes from. I think there are probably diminishing returns to various approaches and thus you both do simulations and other approaches. There's a further benefit to sims, which is that credence about sims affects the behavior of CDT agents, but it's unclear how much this matters.
Additionally, you don't need to nest sims at all, you can simply stub out the results of the sub simulations with other sims (I'm not sure you claim the sub sims cost anything). It's also conceivable that you do fusions between reasoning and sims to further reduce compute (and there are a variety of other possible optimizations).
comment by Vanessa Kosoy (vanessa-kosoy) · 2024-12-19T15:09:43.002Z · LW(p) · GW(p)
This post is a collection of claims about acausal trade, some of which I find more compelling and some less. Overall, I think it's a good contribution to the discussion.
Claims that I mostly agree with include:
- Acausal trade in practice is usually not accomplished by literal simulation (the latter is mostly important as a convenient toy model) but by abstract reasoning.
- It is likely to be useful to think of the "acausal economy" as a whole, rather than just about each individual trade separately.
Claims that I have some quibbles with include:
- The claim that there is a strong relation between the prevalent acausal norms and human moral philosophy. I agree that there are likely to be some parallels: both processes are to some degree motivated by articulating mutually beneficial norms. However, human moral philosophy is likely to contain biases specific to humans and to human circumstances on Earth. Conversely, acausal norms are likely to be shaped by metacosmological [AF(p) · GW(p)] circumstances that we don't even know yet. For example, maybe there is some reason why most civilizations in the multiverse really hate logarithmic spirals. In this case, there would be a norm against logarithmic spirals that we are currently completely oblivious about.
- The claim that the concept of "boundaries" is likely to play a key role in acausal norms. I find this somewhat plausible but far from clear. AFAIK, Critch so far produced little in the way of compelling mathematical models to support the "boundaries" idea.
- It seems to be implicit in the post that an acausal-norm-following paperclip-maximizer would be "nice" to humans to some degree. (But Critch warns us that the paperclip-maximizer might easily fail to be acausal-norm-following.) While I grant that it's possible, I think it's far from clear. The usual trad-y argument to be nice to others is so that others are nice to you. However, (i) some agents are a priori less threatened by others and hence find the argument less compelling, and (ii) who exactly are the relevant "others" is unclear. For example, it might be that humans are in some ways not "advanced" enough to be considered. Conversely, it's possible that human treatment of animals has already condemned us to the status of defectors (which can be defected-against in turn).
- The technical notion that logical proofs and Löb/Payor are ultimately the right mathematical model of acausal trade. I am very much unconvinced, e.g. because proof search is intractable and also because we don't know how to naturally generalize these arguments far beyond the toy setting of Fair Bots in Prisoner's Dilemma. On the other hand, I do expect there to exist some mathematical justification of superrationality, just along other [AF · GW] lines [AF · GW].
comment by bokov (bokov-1) · 2024-11-22T17:53:17.591Z · LW(p) · GW(p)
Acausally separate civilizations should obtain our consent in some fashion before invading our local causal environment with copies of themselves or other memes or artifacts.
Aha! Finally, there it is, a statement that exemplifies much of what I find confusing about acausal decision theory.
1. What are acausally separate civilizations? Are these civilizations we cannot directly talk to and so we model their utility functions and their modelling of our utility functions etc. and treat that as a proxy for interviewing them?
2. Are these civilizations we haven't met yet but might someday, or are these ones that are impossible for us to meet even in theory (parallel universes, far future, far past, outside our Hubble volume, etc.)? Because other acausal stuff I've read seems to imply the latter in which case...
2a. If I don't care what civilizations do (to include "simulating" me) unless it's possible for me or people I care about to someday meet them, do I have any reason to care about acausal trade?
3. Can you give any specific examples of what it would be like for an acausally separate civilization to invade our local causal environment which do NOT depend in any way on simulations?
4. I heard that acausal decision theory has practical applications in geopolitics, though unfortunately without any real-world examples. Do you know any concrete examples of using acausal trade or acausal norms to improve outcomes when dealing with ordinary physical people with whom you cannot directly communicate?
I realize you probably have better things to do than educating an individual noob about something that seems to be common knowledge on LW. For what it's worth, I might be representative of a larger group of people who are open to the idea of acausal decision theory but who cannot understand existing explanations. You seem like an especially down-to-earth and accessible proponent of acausal decision theory, and you seem to care about it enough to have written extensively about it. So if you can help me bridge the gap to fully getting what it's about, it may help both of us become better at explaining it to a wider audience.
↑ comment by bokov (bokov-1) · 2024-11-22T18:22:09.618Z · LW(p) · GW(p)
Update:
I went and read the background material on acausal trade [? · GW] and narrowed even further where it is I'm confused. It's this paragraph:
> Another objection: Can an agent care about (have a utility function that takes into account) entities with which it can never interact, and about whose existence it is not certain? However, this is quite common even for humans today. We care about the suffering of other people in faraway lands about whom we know next to nothing. We are even disturbed by the suffering of long-dead historical people, and wish that, counterfactually, the suffering had not happened. We even care about entities that we are not sure exist. For example: We might be concerned by news report that a valuable archaeological artifact was destroyed in a distant country, yet at the same time read other news reports stating that the entire story is a fabrication and the artifact never existed. People even get emotionally attached to the fate of a fictional character.
My problem is lack of evidence that genuine caring about entities with which one can never interact really is "quite common even for humans today", after factoring out indirect benefits/costs and social signalling.
How common, sincerely felt, and motivating should caring about such entities be for acausal trade to work?
Can you still use acausal trade to resolve various game-theory scenarios with agents whom you might later contact while putting zero priority on agents that are completely causally disconnected from you? If so, then why so much emphasis on permanently un-contactable agents? What does it add?
comment by Shmi (shminux) · 2023-03-04T02:09:05.840Z · LW(p) · GW(p)
Hmm, it sounds to me like a rather verbose description of the classic Golden Rule, of treating others as one wants to be treated. In this case, by respecting other entities' boundaries. Are you trying to derive it from some first principles?
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2023-03-04T05:23:44.667Z · LW(p) · GW(p)
By my read, he's trying to repair a previous derivation of it from first principles.
comment by bokov (bokov-1) · 2024-11-22T17:18:29.483Z · LW(p) · GW(p)
I've been struggling to understand acausal trade and related concepts for a long time. Thank you for a concise and simple explanation that almost gets me there, I think...
Am I roughly correct in the following interpretation of what I think you are saying?
Acausal norms amount to extrapolating the norms of people/aliens/AIs/whatever whom we haven't met yet and know nothing about other than what can be inferred from us someday meeting them. If we can identify norms that are likely to generalize to any intelligent being capable of contact and negotiation and not contingent on any specific culture/biology/happenstance, then we can pre-emptively obey those norms to maximize the probability of a good outcome when we do meet these people/aliens/AIs/whatever?
comment by [deleted] · 2023-03-07T18:41:44.376Z · LW(p) · GW(p)
Thanks for the post. I have a clarifying question.
- Alien civilizations should obtain our consent in some fashion before visiting Earth.
It seems like the acausal economy wouldn't benefit from treating us well, since humans currently can't contribute very well to it. I would expect that this means that alien civilizations are fine with visiting us. What am I missing here?
comment by bokov (bokov-1) · 2024-11-22T17:20:56.254Z · LW(p) · GW(p)
What is meant by 'reflecting'?
- reflecting on {reflecting on whether to obey norm x, and if that checks out, obeying norm x} and if that checks out, obeying norm x
Is this the same thing as saying "Before I think about whether to obey norm x, I will think about whether it's worth thinking about it and if both are true, I will obey norm x"?
comment by Review Bot · 2024-02-16T23:39:46.020Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by Keenan Pepper (keenan-pepper) · 2023-12-20T02:02:28.228Z · LW(p) · GW(p)
I haven't dived into this yet, but am I right in guessing that the gist is exactly like a way more fleshed-out and intricate version of Hofstadter's "superrationality"?
comment by Seth Herd · 2023-03-05T22:30:13.747Z · LW(p) · GW(p)
A couple of thoughts:
It seems to me simpler to think in terms of preference fulfillment than boundaries. I strongly don't want you in my house or messing with my body without consent, way more than I want you to wear fashions I enjoy seeing. And cells might be said to desire to keep their membranes intact.
Second, I think the idea of acausal trades can be boiled down to causal reasoning. Aren't we reasoning about the possibility of encountering a civilization with sets of values? This is just like causal reasoning about morality; one reason to be a cooperative person is so that when I encounter other cooperative people, they'll want to cooperate with me.
comment by mako yass (MakoYass) · 2023-03-05T00:39:23.404Z · LW(p) · GW(p)
As far as I can tell, humanity has been very-approximately doing this for a long time already, and calling it moral philosophy. This isn't to say that all moral philosophy is a good approach to acausal normativity, nor that many moral philosophers would accept acausal normativity as a framing on the questions they are trying to answer
Yes, importantly: As a result of not having this formalism, I get the impression that, for instance, Kant understood Kant's Categorical Imperative with less precision than Yudkowsky does, and I see no indications that Yudkowsky ever read Kant. Although this is what moral philosophers have been doing this whole time, it should be emphasized that they didn't understand how it emerged from decision theory, they've been very very confused and most of their stuff will want to be rewritten in this frame.
The aside about respecting boundaries should probably be removed. You don't justify or motivate boundaries well enough here, and it doesn't really seem to me that you do in the sequence either [LW(p) · GW(p)]. Even if it is a useful paradigm, I actually question whether it has much relevance to acausalism, my experience is that a lot of negotiation theory will seem to an acausalist to be deeply premised on acausal trade, but it turns out that the negotiation theory works almost exactly the same in the causal world and we missed that because our head isn't in that world any more.
comment by avturchin · 2023-03-04T13:29:56.770Z · LW(p) · GW(p)
An example of human acausal trade is the situation where a parent plans to raise a child and expects that this child will help them when they are old (this was popular in Eastern societies like China). This is a version of Roko's Basilisk, where you create an entity expecting it to help you in the future. But it is less about respecting boundaries.
This example connects subjects located in different times and even epochs, so it may be more relevant to our turbulent time and to AI safety.
Replies from: MakoYass, Mitchell_Porter
↑ comment by mako yass (MakoYass) · 2023-03-05T00:44:22.347Z · LW(p) · GW(p)
Yeah, maybe, it parallels Newcomb. Parents in filial culture say something like "I choose to feed you and teach you, because I can see who you are, and that you will follow through on your duties. If that changes, and I can see that you won't be filial, we owe you nothing" and so the child has to internalize filial values, even though being filial in the future doesn't cause parental investment now, being the kind of person who would be filial is thought to.
↑ comment by Mitchell_Porter · 2023-03-05T01:54:48.203Z · LW(p) · GW(p)
I am no expert but how can that be an acausal trade? It's more like an investment by the parent.
Replies from: avturchin
↑ comment by avturchin · 2023-03-05T08:44:39.084Z · LW(p) · GW(p)
There are causal and acausal components. Surely, the parent can causally program (teach) the child to do what he wants. But when he decides to conceive the child, he has already done the biggest part of the contract.
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2023-03-05T09:37:03.621Z · LW(p) · GW(p)
People don't look after their aged parents in order to shape the past. They do it in response to the past.
Suppose I pay for some service that I am supposed to receive in the future, and then in the future I receive it. Is that an "acausal trade"?
Replies from: avturchin
↑ comment by avturchin · 2023-03-05T18:51:07.687Z · LW(p) · GW(p)
A better example would be: I create a political party, and expect that if it wins, it will name a street after me and pay me a lot. The difference from the paid-service example is that if I do not start this project, it will never exist at all. I may instead create some different project and get different benefits from it. So from my side it is a trade with an entity that is only a possibility at the moment of the trade.
comment by Jonas Hallgren · 2023-03-04T04:51:51.040Z · LW(p) · GW(p)
So I've got a hypothetical model for how boundary-breaking can be measured in terms of the predictability of world models of the future.
The hypothesis is that the higher the effect of a system with a boundary on the predictability of your future world models, the higher the reward prediction error (or whatever equivalent measure you want to use) is if the boundary is broken.
Example: If you believe that your community are all focused and working together towards one common goal, then you are willing to give a lot away to other people if you believe the community will socially reciprocate. When the boundary of social reciprocity is broken, the future worlds where you were counting on reciprocity being intact turn negative in EV compared to before, and reward prediction error happens on your prediction of future worlds.
The higher the importance of the boundary-breaking, the higher the change in potential worlds and EV. This in turn means higher reward prediction error.
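Here is a minimal numeric sketch of the idea (all numbers invented purely for illustration):

```python
# Toy sketch: futures are (probability, value) pairs. While trusting the
# boundary (e.g. social reciprocity), high-value cooperative futures carry
# most of the probability mass; once the boundary is broken, they don't.

def expected_value(worlds):
    return sum(p * v for p, v in worlds)

predicted = [(0.8, 10.0),   # reciprocity holds and cooperation pays off
             (0.2, -2.0)]   # minor defections

updated = [(0.2, 10.0),     # after the boundary is broken
           (0.8, -2.0)]

# Reward prediction error: how far the updated expectation falls below the prediction.
print(expected_value(updated) - expected_value(predicted))  # -7.2

# The more the predicted futures leaned on the boundary staying intact (the bigger
# the probability shift when it breaks), the larger this negative error.
```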