"Far Coordination"
post by DragonGod · 2022-11-23T17:14:41.830Z · LW · GW · 17 comments
Epistemic Status
Early and rough thoughts. Sharing because I believe the core idea is sound[1].
Introduction
What is "Far Coordination"?
Coordination across such vast distances of spacetime that communication between coordinating parties is physically impossible[2].
Why is Far Coordination Possible?
Premises
Assumptions on which the rest of this post rests:
- Computationalism: all agents of interest are computations that can be replicated to arbitrarily high fidelity on other substrates
- Robust cryptography:
  - Parties can send messages to each other that can't be forged
  - Digital signatures can't be forged (a minimal signing sketch follows below)
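To make the cryptography premise concrete, here is a minimal sketch of an unforgeable commitment using Ed25519 signatures via the Python `cryptography` library. The library choice, key handling and message contents are illustrative assumptions on my part, not anything specified in this post; any signature scheme with these properties would do.

```python
# Minimal sketch: a node signs a commitment so that any other node
# (or another civilisation) can check it was not forged.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key generation would happen before the colony ship departs; the public
# key travels with the colonists and can be broadcast to anyone listening.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

commitment = b"Node MW-0 will honour trade agreement #42."  # hypothetical content
signature = private_key.sign(commitment)

# Any holder of the public key can check authenticity; verify() raises
# InvalidSignature if the message or the signature was altered.
public_key.verify(signature, commitment)
print("commitment verified")
```

The point is only that a node's commitments can be checked by parties that never communicate with it directly, so long as the public key was distributed beforehand.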
Spacetime Invariance of Computation
Computation is time and space invariant[3].
"" evaluates to whether it's computed today, 5 billion years ago or 5 billion years from now. It also evaluates to whether it's computed on earth, elsewhere in the Milky Way or somewhere in Andromeda.
The spacetime invariance of computation can be exploited to facilitate coordination across vast distances of time and space.
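As a toy illustration (mine, not from the post): any pure function of fixed inputs yields the same output, and hence the same hash, whenever and wherever it is evaluated; that is the property being exploited.

```python
# Illustration: a pure, deterministic computation gives the same answer
# (and hence the same digest) no matter where or when it is evaluated.
import hashlib

def decision(shared_inputs: tuple) -> str:
    # Any pure function of its inputs will do; this stands in for a
    # (vastly more complex) frozen decision procedure.
    return "cooperate" if sum(shared_inputs) % 2 == 0 else "defect"

inputs = (3, 5, 8)
result_on_earth = decision(inputs)
result_in_andromeda = decision(inputs)  # run on different hardware, aeons later

assert result_on_earth == result_in_andromeda
print(hashlib.sha256(result_on_earth.encode()).hexdigest())
```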
Why Care About Far Coordination?
I am interested in far coordination because I want our civilisation to become interstellar (and eventually intergalactic). Coordination across vast distances of spacetime would be necessary to remain a united civilisation (as opposed to fracturing into a multitude of [potentially very divergent] civilisations) after we venture out of the solar system.
I would prefer for Earth originating civilisation to remain broadly united, should we venture to the stars.
Use Cases for Far Coordination
Preventing "Monstrous" Divergences
Some future-minded folks I've chatted with seem to be concerned that posthumans would diverge from us in considerable ways. Particularly, there's concern that their values may be so radically altered that they would seem to us (were we around to observe them) not just alien, but monstrous[4].
Policing Particular Behaviours
Some people are very concerned about astronomical suffering. For those with suffering-focused moral foundations, the prospect of astronomical suffering may dominate all considerations. There may be a desire to implement mechanisms to prevent anyone from running simulations of sentient beings for the purpose of torturing them. Such policing may be viable for a planetary or even interplanetary civilisation, but without means to coordinate behaviour across vast distances, it's not viable for interstellar or intergalactic civilisations.
I do not endorse creating a cosmic police state/totalitarian government to police mindcrime, but I do understand the desire to constrain the behaviour of our descendants. We may legitimately not want any civilisation that can trace its genealogy to Earth (any of our descendants) to engage in behaviours we consider especially heinous.
Maintaining Unity
Civilisations that can't coordinate across vast distances in space would necessarily splinter and fracture into different pockets should they venture out of their host star systems. The splinters may develop into separate civilisations that don't view themselves as brethren with their sibling civilisations. They may even later wage war upon said siblings.
This fracturing of civilisation, and the potential for conflict among siblings, is a pretty horrifying prospect. Robin Hanson has suggested that a desire to maintain unity/conformity among humanity (and our descendants) may be a reason we never venture out to other star systems (from 08:38 [09:55 if pressed for time] to 12:04 in the podcast).
Even without outright conflict, the sibling civilisations may compete amongst each other for the limited resources within their (perhaps overlapping) spheres of influence. This becomes especially likely if some sibling civilisations are "grabby" (this trend may be exacerbated by selection pressures favouring grabby civilisations[5]).
Coordination in Multi-Civilisation Interactions
A plausible scenario where far coordination would prove especially valuable is in multi-civilisation interactions. By "multi-civilisation", I refer here to interactions between civilisations that cannot trace their history to a common ancestor.
Far coordination would be especially valuable should our descendants run across aliens.
For each civilisation, let its civilisation tree be all civilisations that share a common ancestor. For humanity, our civilisation tree would be all civilisations that can trace their ancestry to Earth (all Earth originating civilisations).
Multi-civilisation interactions can be viewed as a multi-agent environment[6]. I'll briefly outline below ways in which it is useful/desirable for a civilisation tree to behave as a single agent:
- United front/interface
  - The entire civilisation tree can participate in strategic interactions as a single actor
  - Other agents may treat the entire civilisation tree as a single actor
- United policy regarding cooperation, competition and/or conflict
  - Cooperation
    - Nodes of a civilisation tree may honour commitments made by other nodes
      - Said commitments could be verified (via e.g. digital signatures or other suitable cryptographic mechanisms)
    - Nodes may negotiate on behalf of their entire civilisation, or other nodes
      - This could be especially helpful as the beneficiary node(s) may not be able to participate in the negotiations in a timely manner due to speed of communication restrictions
  - Competition/conflict
    - An alliance with one node might become in effect an alliance with the entire civilisation tree
    - Likewise, a declaration of war on one node may be treated as a declaration of war on the entire civilisation tree
- Cooperation
  - Nodes benefit from the united strength of the entire civilisation tree
  - Favours could be repaid by nodes other than the recipient nodes
    - Incentivises actions favourable to nodes of the civilisation
    - Reliable receipts of the services rendered could be provided (again via e.g. digital signatures or other suitable cryptographic mechanisms)
  - Retribution may be visited by nodes other than the offended nodes
    - Disincentivises harming any node of the civilisation tree[7]
    - This is somewhat more tricky, as the harm inflicted may not be reliably communicated, or other nodes may only learn about the harm a long time after the fact
      - Nodes could send heartbeat messages (with an accompanying GUID[8] to identify the source) to all listening nodes, and a prolonged cessation in said heartbeats could warrant investigation and (if needed) retaliation (a toy heartbeat sketch follows after this list)
      - Redundant dedicated responder nodes could be set up "in proximity"[9] to particular nodes, for "fast" responses
      - Each new node added to the civilisation would (eventually) update the entire civilisation tree of its existence via its heartbeat
      - The heartbeats could also encode more sophisticated information if needed
    - The civilisation could adopt a policy of hunting down adversaries/avenging any fallen brethren
      - Consensus resolution mechanisms would need to be decided beforehand to determine how many resources each node should commit to particular vendettas
        - Probably the responsibility falls on the closest nodes, and/or the dedicated responder nodes
- Reliable trade
  - Payments could be received from other nodes if needed
  - The entire civilisation tree could present a uniform policy around trade to other civilisations
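To illustrate the heartbeat idea from the list above, here is a hypothetical message format: a node signs a small payload containing its GUID and an epoch counter, so listeners can attribute the message to its source and notice a prolonged silence. The field names, GUID value and signing scheme are illustrative assumptions, not part of the post.

```python
# Hypothetical heartbeat message: a node periodically broadcasts a signed
# payload so that listeners can attribute it to the sender's GUID and flag
# a prolonged cessation of heartbeats.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

node_key = Ed25519PrivateKey.generate()

def make_heartbeat(guid: str, epoch: int, status: str = "nominal") -> dict:
    payload = json.dumps(
        {"guid": guid, "epoch": epoch, "status": status},
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": node_key.sign(payload)}

# The node emits one heartbeat per agreed epoch (e.g. per local century);
# listeners who stop receiving them for several epochs flag the node.
beat = make_heartbeat(guid="SOL/3", epoch=1_042)
node_key.public_key().verify(beat["signature"], beat["payload"])
```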
Interlude
Suppose an advanced civilisation is currently united. The civilisation is going to dispatch a colony ship to a nearby star system. The colony would set up a child civilisation in the new star system. The parent desires that the behaviour of their child be coordinated with their own across the intervening distance; how might they achieve this?
Over the remainder of this post, I will briefly describe a couple of approaches, challenges/limitations of those approaches, mitigations to the challenges, and fundamental limitations of far coordination.
Approaches To Far Coordination
General Approach
Before the colony ship departs for the nearby star system, some preparations could be made:
- The parent civilisation freezes their governance/coordination (super)structures
- The governance structures of the child are made to mirror the parent (with analogous positions for the key decision makers)
Clones
- High fidelity copies of key decision makers in the parent civilisation could be made
- Said copies would then assume the corresponding roles/positions of their originals in the governance structures of the child
The copies have the same initial values and decision algorithms as the originals, and so would make the same decisions in analogous circumstances as the originals would have made, had the originals had access to the same information as the copies (i.e. accounting for the difference in lived experience since the copies separated from the originals).
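A toy sketch of this claim, under the assumptions that an agent can be reduced to a small deterministic program and that `deepcopy` stands in for high-fidelity emulation: a copy made from the same snapshot reproduces the original's decisions whenever both receive the same information.

```python
# Toy sketch of the cloning idea: a copy made from the same snapshot has the
# same values and decision procedure, so it makes the same choices as the
# original whenever both see the same situation.
import copy
import random

class DecisionMaker:
    def __init__(self, values: dict, seed: int):
        self.values = values
        self.rng = random.Random(seed)  # even the "noise" is reproducible

    def decide(self, situation: str) -> str:
        weight = self.values.get(situation, 0.5)
        return "approve" if self.rng.random() < weight else "reject"

original = DecisionMaker(values={"trade_pact": 0.9}, seed=2022)
clone = copy.deepcopy(original)  # high-fidelity copy made before departure

assert original.decide("trade_pact") == clone.decide("trade_pact")
```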
Partner Simulations
- The key decision makers of the child are selected independently
- High fidelity simulations of relevant key decision makers[10] of the child and the parent are created
- The colony is given access to the simulations of the parent decision makers (and vice versa)
After separation, the colony can consult the simulations of the parent decision makers as needed (and vice versa) to coordinate behaviour.
Arrangements would need to be made beforehand for consensus resolution mechanisms to decide how to incorporate the feedback from the simulations.
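One possible (entirely hypothetical) consensus rule for incorporating the simulations' feedback: treat the locally run simulations of the parent's decision makers as additional voters, and aggregate with a rule fixed before departure.

```python
# Hypothetical consensus rule for partner simulations: the colony's own
# decision makers vote alongside locally-run simulations of the parent's
# decision makers, using an aggregation rule agreed in advance.
from collections import Counter

def aggregate(colony_votes: list, simulated_parent_votes: list) -> str:
    # One pre-agreed rule (an assumption here): simple majority over the
    # combined pool, with non-majorities resolved in favour of inaction.
    tally = Counter(colony_votes + simulated_parent_votes)
    decision, count = tally.most_common(1)[0]
    total = len(colony_votes) + len(simulated_parent_votes)
    return decision if count * 2 > total else "defer"

print(aggregate(["expand", "expand", "wait"], ["wait", "expand"]))  # -> "expand"
```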
Considerations for Far Coordination
Challenges
There are several challenges that would limit the reliability/robustness of far coordination.
Drift
After separation, the two civilisations would undergo different experiences. These experiences may modify relevant aspects of the key decision makers in the two civilisations such that the simulations no longer reliably reflect the modified originals.
In particular, the following changes are problematic for far coordination:
- Drift in values
- Drift in decision making algorithms
Significant drift could break basically all the previously mentioned use cases for far coordination[11].
Error Accumulation
Aside from drift, minuscule deviations in the fidelity of the simulations to the originals may accumulate into significant deviations when aggregated across many decision makers or vast stretches of time.
Death/Inaccessibility
Relevant counterparts in the colony or in the parent civilisation may die, be removed from power or otherwise be unavailable for decision making. Thus, actions coordinated with a simulation of an unavailable actor may not be tracking anything meaningful or relevant on the child/parent.
Mitigations
Below are steps that can be taken to mitigate the aforementioned challenges.
Stable/Static Aspects
In order to mitigate drift, the relevant decision makers could make core aspects of their values/decision algorithms stable/static so that their simulations continue to reliably reflect the originals even in the furthest reaches of time, after substantial divergence in lived experiences.
To attain the highest level of stability, entire aspects (values/decision algorithms) of relevant actors may be completely frozen.
Error Correction
Error correction mechanisms could be used to prevent errors from propagating/accumulating. This feels more promising when combined with stable/static aspects. If there's stochastic variation between the simulation and the original, then e.g. a particular decision could be evaluated many times to attain arbitrarily high confidence that a result reflects the decision of the original in a given circumstance[12].
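A minimal sketch of footnote 12's repeated-evaluation idea, under the assumption that each independent run of the simulation matches the (frozen) original's decision with probability better than chance: a majority vote over many runs then recovers the original's decision with confidence that grows rapidly in the number of runs.

```python
# Sketch: repeated evaluation plus majority vote as a simple error-correction
# mechanism for a stochastic simulation of a frozen decision maker.
import random
from collections import Counter

def noisy_simulation(true_decision: str, fidelity: float, rng: random.Random) -> str:
    # With probability `fidelity` the run matches the original's decision;
    # otherwise it returns the opposite option (a toy two-option setting).
    alternatives = {"approve": "reject", "reject": "approve"}
    return true_decision if rng.random() < fidelity else alternatives[true_decision]

def majority_decision(true_decision: str, fidelity: float = 0.9, runs: int = 101) -> str:
    rng = random.Random(0)
    votes = Counter(noisy_simulation(true_decision, fidelity, rng) for _ in range(runs))
    return votes.most_common(1)[0][0]

print(majority_decision("approve"))  # overwhelmingly likely to print "approve"
```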
Resynchronisation
Another approach to mitigating drift is for the simulations and the originals to synchronise periodically. The parent and the child connect at intervals (e.g. every million years[13]): new simulations are generated, surprising divergences can be interrogated, and the coordination protocols can be improved.
A limitation of this mitigation is that it requires the colony to be near enough to the parent that periodic resynchronisation is viable. This approach seems feasible for e.g. an interstellar civilisation but becomes fraught for coordinating across galaxies (especially galaxies in different [super]clusters).
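A back-of-the-envelope illustration (mine, not the post's) of why resynchronisation is distance-limited: the round-trip light delay alone sets a floor on how often parent and child can compare notes.

```python
# Round-trip light delay as a lower bound on the resynchronisation interval.
def round_trip_years(distance_light_years: float) -> float:
    return 2 * distance_light_years

for name, dist_ly in [("Alpha Centauri", 4.4),
                      ("across the Milky Way", 100_000),
                      ("Andromeda", 2_500_000)]:
    print(f"{name}: round trip >= {round_trip_years(dist_ly):,.0f} years")
```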
Backups
To mitigate the issue of key decision makers dying, being removed from power or otherwise becoming inaccessible, redundant backups of key decision makers could be made at frequent intervals. The backup would be summoned in any instance where an important decision needs to be made but the relevant actor is unavailable.
Successorship
If the values and decision algorithms of the relevant actors could be abstracted well enough that other actors could reliably learn and implement them, then the relevant decision makers could be succeeded, with the successors trained in the values and decision algorithms they are to implement.
The terms of decision makers on the child and parent could further be synchronised. If coupled with resynchronisation, the new leaders of parent and child could connect and synchronise before the start of their relevant terms.
Constitution
Regardless of whether successorship is pursued or not, abstracting away the relevant aspects of decision making so that it can be implemented by different actors — a constitution perhaps — seems like it would be valuable.
With resynchronisation, the constitution could be amended (if needed) at every synchronisation; without resynchronisation, the constitution could be frozen, with rigid adherence to it guaranteeing coordination[14].
Fundamental Limitations of Far Coordination
It seems like with robust far coordination you can get only one of the following:
- Coordination across arbitrary distances
- Drift
And you're furthermore forced to relinquish the security of redundant independent nodes.
That is, the following are fundamental limitations:
- Stasis
- Distance Restrictions
- Vulnerability
Stasis
Any reliable sort of far coordination that is not bound by distance seems like it would require locking in core aspects of the decision making. This may look like all the relevant decision makers freezing their values and decision algorithms, or it may be abstracting said values and algorithms into a comprehensive constitution (that would then be frozen).
Regardless, the kind of free form evolution in values, philosophy, governance/coordination systems we've enjoyed for most of human history would become a thing of the past.
If the frozen aspects are confined to a decision making elite, but most citizens are allowed to drift freely, the involved societies would soon find themselves in a situation where their governance structures and leaders are archaic, or so far removed from their current values that it's dystopian.
It may thus be the case that the locked-in values/decision algorithms need to be applied not only to the leaders of the involved societies, but also to ordinary citizens.
Civilisations may thus have to freeze core aspects of themselves to enable far coordination across arbitrary distances.
Stasis may be undesirable as species that self-modify to lock themselves into a particular version may be less adaptable, less able to deal with unforeseen circumstances. Perhaps such species may be outcompeted by more dynamic/adaptable species.
(Though such outcompetition may also be infeasible, given that the lock-in would only happen after the civilisation in question has amassed considerable power.
I do wonder what selection pressures advanced civilisations face. Do these selection pressures push towards stasis, or do they favour dynamism?[15])
Distance Restrictions
To facilitate robust far coordination without stasis, parent civilisations need to remain sufficiently close to all their descendants that resynchronisation is possible. That is, the spatial extent of a civilisation tree would need to be confined.
If stasis is considered sufficiently undesirable, this may be the preferable pill to swallow.
Eating Your Cake and Having It
Is it possible to have both? Or to what extent can we attain both?
We could try to develop a static comprehensive constitution after a very long reflection. The constitution could be used as a coordination superstructure for all Earth originating civilisations. Drift would be permitted within the confines of that constitution.
Resynchronisation might be pursued within individual "cells" (galaxies? galactic clusters? superclusters?), while different cells would be allowed to drift apart from each other (again within the confines of the constitution).
Vulnerability
One of the main benefits of expanding to the stars/other galaxies is to attain robustness against existential threats. Spatial separation provides civilisational redundancy, as existential catastrophes that visit one node may be unable to affect other nodes.
Spatial separation will protect a united civilisation from most external threats, but existential threats to the civilisation arising from its behaviour may remain a concern. If one node of a civilisation suffers existential catastrophe arising from internal threats, then other nodes are likewise vulnerable to the same internal threats. Perhaps the entire civilisation might likewise succumb to internal existential catastrophe.
The unified behaviour of the civilisation might thus present a single point of failure (albeit one that may not necessarily be exploitable by external adversaries[16]).
A potential mitigation would be for civilisations to only expand outside their star system after attaining existential security. Alternatively, stasis should only be pursued when a civilisation is confident that it is sufficiently robust to all internal threats.
Further Steps
Stuff I'd like to do later[17]:
- Fill in any missing details
- Explore the relationship between far coordination mechanisms and distance
  - What coordination options do very short distances afford
  - What coordination options are forced by very long distances
- Explore far coordination across temporal separations
- Expand on
  - Use cases for far coordination
    - Especially with respect to multi-civilisation interactions
    - The list of bullet points was initially planned to be expanded into sentences/entire paragraphs.
  - Challenges for far coordination
  - Mitigation actions
  - Fundamental limitations of far coordination
- Make the arguments more rigorous
- Persuade someone to write a fic exploring this concept
- ^
That said, I suspect the basic insight is not novel and is already present in e.g. the acausal trade literature.
- ^
Due to the speed of light limits on the transmission of arbitrary information.
- ^
Computation is also invariant along other dimensions (logical truths are true in every "(logically) possible world"), so potentially coordination across universes or even entire ontologies might be possible.
I do not actually think the potential for such coordination ("very far coordination"?) is relevant to realising the longterm potential of human civilisation, so I did not address any such mechanisms in this post.
Robust cooperation within our future light cone is sufficient for all practical purposes I think.
- ^
This is not just something that can happen; the argument has been made to me that it is what we should expect by default.
I do think that digital life (extended lifespans/indefinite life, resurrection/pseudo-immortality [e.g. via restoration from backups], trivial reproduction [via copying], arbitrary self-modifications [arbitrary changes might be made to cognitive architectures, (meta) values, accessible subjective states, etc.]) and the much greater power and accessible resources to posthumans would give them a perspective that is markedly different from our own.
I'm not convinced that posthumans would necessarily be "monstrous" to us, but I do expect them to be very alien to us. Posthuman mindspace is probably vastly larger than human mindspace, and has much greater extreme points along relevant dimensions. I do expect that some subset of posthuman mindspace would be "monstrous", I'm just not convinced that said subset is especially likely to manifest in the mainline.
- ^
On the individual, group, factional and sub civilisational levels, selection pressures favour power/influence seeking behaviour (maximal reproduction on the individual level being one example of power seeking behaviour).
Power seeking also seems to be favoured on the civilisational level as well in a manner that translates directly to "grabbiness".
Power seeking civilisations acquire more resources, become more powerful, and progressively expand their spheres of influence faster than less grabby civilisations. To a first approximation, the "grabbiest" civilisations are the most powerful.
Within an attainable sphere of influence, the fastest expanding civilisations would acquire most of the resources (assuming comparable starting conditions).
Furthermore, more powerful civilisations are better able to compete for resources against less powerful civilisations should their spheres of influence overlap.
I would thus expect that less grabby civilisations get outcompeted, assimilated or outright destroyed when their spheres of influence overlap with the spheres of influence of grabby civilisations.
If the grabby civilisations are especially benevolent, they may leave some limited preserve for their weaker counterparts. Said weaker counterparts may thus be able to continue some form of limited existence, but would lose out on the majority of what they had considered their cosmic endowment.
That said, I expect that benevolent grabby civilisations are more likely to assimilate weaker civilisations than to voluntarily cede resources to them. Only in encounters between peer civilisations do I expect them to voluntarily forego competition/cede resources (because the expected cost of competition/resource conflict outweighs the expected value of the attainable resources).
- ^
The set of agents could be a partition of the set of civilisations (or relevant sub civilisational groups) such that:
1. Each set in the partition is robustly coordinated in their behaviour/decision making (at the appropriate level of abstraction)
2. No set is a proper subset of some other robustly coordinated set
- ^
A civilisation tree that had a policy of responding positively to negative incentives would be especially vulnerable to them (one only needs to compromise one node [via blackmail, hostages, threats, etc.] to extract utility from the entire civilisation tree).
I think for decision theoretic reasons, the civilisation tree should have a policy of unconditionally rejecting any negative incentives. If the civilisation tree is known to be a united actor, each node that refuses to respond to negative incentives (even when they would be naively expected to) provides evidence that the civilisation tree does not respond to negative incentives and reduces the likelihood that other nodes would be presented with negative incentives.
Because the civilisation tree is distributed across space, mounting external existential threats to the civilisation tree may be infeasible, so it may not be possible for adversaries to present any negative incentives that are sufficiently strong that the civilisation tree has no choice but to yield to them.
- ^
A simple way to generate GUIDs for each node:
Each node's GUID includes the GUID of its parent as a prefix followed by a special separator character and a unique identifier locating that node among its parent's children (e.g. birth number).
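A direct sketch of the scheme described in this footnote; the separator character and root name are arbitrary illustrative choices.

```python
# GUID scheme from the footnote: parent GUID, separator, then the child's
# birth number among its parent's children.
SEPARATOR = "/"

def child_guid(parent_guid: str, birth_number: int) -> str:
    return f"{parent_guid}{SEPARATOR}{birth_number}"

root = "SOL"                              # the founding node (name is arbitrary)
first_colony = child_guid(root, 1)        # "SOL/1"
grandchild = child_guid(first_colony, 3)  # "SOL/1/3"
print(grandchild)
```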
- ^
If nodes are individual star systems, the associated responder nodes could be located in the interstellar medium. If entire galaxies, in the intergalactic medium.
Ideally, there would probably be responder nodes placed in "orbits" at various distances around a particular target node.
- ^
The actors whose decision making we want to coordinate among. Some other actors may be "key decision makers" in some meaningful way, but their decision making is not relevant to coordinating the behaviour of the colony with the parent.
- ^
1. Unconstrained drift may lead to monstrous divergences
2. Monstrous divergences may lead the divergent civilisation to engage in behaviour the parent wanted to prohibit.
3. Significant divergences break unity and may cause the divergent civilisation to view itself as distinct from its parent.
4. Significant divergences and civilisation fracturing prevent the civilisation from participating in multi-civilisation interactions as a single actor, robbing it of the benefits thereof.
- ^
There are probably many more error correcting mechanisms that would be relevant here, but I'm sadly not familiar with the relevant literature.
- ^
The resynchronisation frequency would depend on the intervening distance. If e.g. (post)humanity set up a child civilisation around Alpha Centauri, then in the absence of any other children to coordinate with, we'd want to resync a lot more frequently than every million years (perhaps on the order of 100 years?).
- ^
Enforcement of the constitution could be automated via AI systems. Or relevant actors may self-modify into agents that are incapable of violating the constitution.
- ^
This sounds like the plot of a pretty interesting short story to me.
- ^
A united civilisation may still be especially vulnerable to particular exploits by sophisticated adversaries; space-like separation just sharply limits the ability of said adversaries to realise such exploits against all nodes of a united civilisation.
- ^
But will probably not get around to doing any time soon.
17 comments, sorted by top scores.
comment by Shmi (shminux) · 2022-11-23T21:35:30.565Z · LW(p) · GW(p)
We may legitimately not want any civilisation that can trace its genealogy to Earth (any of our descendants) to engage in behaviours we consider especially heinous.
Hmm, trying to constrain your far descendants seems both a terrible and a futile idea. Making sure that the far descendants can replicate our reasoning exactly seems much more useful and doable.
Why is it a terrible idea? Imagine that our ancestors thought that regular human sacrifices to God of Rain are required for societal survival, and it would be "especially heinous" to doom the society by abandoning this practice, so they decided to "lock in" this value. We have a lot of these grandfathered values that no longer make sense already locked in, intentionally or accidentally.
On the other hand, it would be super useful to have a simulator that lets a future civilization trace our thinking and see the reasons for various Chesterton fences we have now.
↑ comment by Donald Hobson (donald-hobson) · 2022-11-24T00:28:33.433Z · LW(p) · GW(p)
Why is it a terrible idea? Imagine that our ancestors thought that regular human sacrifices to God of Rain are required for societal survival, and it would be "especially heinous" to doom the society by abandoning this practice, so they decided to "lock in" this value. We have a lot of these grandfathered values that no longer make sense already locked in, intentionally or accidentally.
It would be terrible by our values. Sure. Would it be terrible by their values? That is more complicated. If they are arguing it is required for "social survival", then that sounds like they were mistaken on a purely factual question. They failed to trace their values back to the source. They should have locked in a value for "social survival". And then any factual beliefs about the correlation between human sacrifice to rain gods and social survival are updated with normal Bayesian updates.
But let's suppose they truly, deeply valued human sacrifice. Not just for the sake of something else, but for its own sake. Then their mind and yours have a fundamental disagreement. Neither of you will persuade the other of your values.
If values aren't locked in, they drift. What phenomena cause that drift? If our ancestors can have truly terrible values (by our values), our descendants can be just as bad. So you refuse to lock in your values, and 500 years later, a bunch of people who value human sacrifice decide to lock in their values. Or maybe you lock in the meta value of no one having the power to lock in their object values, and values drift until the end of the universe. Value space is large, and 99% of the values it drifts through would be horrible as measured by your current values.
↑ comment by DragonGod · 2022-11-23T21:44:17.795Z · LW(p) · GW(p)
I'm sympathetic to this reasoning. But I don't know if it'll prevail. I'd rather we lock in some meta values and expand to the stars than not expand at all.
↑ comment by Donald Hobson (donald-hobson) · 2022-11-24T00:33:16.774Z · LW(p) · GW(p)
I would much prefer we lock in something. I kind of think it's the only way to any good future. (What we lock in, and how meta it is are other questions) This is regardless of any expanding to the stars.
↑ comment by Shmi (shminux) · 2022-11-23T22:09:56.639Z · LW(p) · GW(p)
well, yes, but why the dichotomy?
↑ comment by DragonGod · 2022-11-24T08:48:57.710Z · LW(p) · GW(p)
I had spoken with people who expected our descendants to diverge from us in ways we'd consider especially heinous and who were concerned about astronomical suffering, and I was persuaded by Hanson's argument that a desire to maintain civilisational unity may prevent expansion.
So I was in that frame of mind/responding to those arguments when I wrote this.
comment by Dagon · 2022-11-23T22:24:10.319Z · LW(p) · GW(p)
Upvoted for interesting ideas, but I'm completely unconvinced that this is possible, desirable, or even coherent. I think a more formal mathematical definition of "coordination" would be required to change my mind.
Part of my confusion is your focus on "key decision-makers". Do you mean "every agent in the civilization"? If not, what defines the path of an isolated group/civilization? But most of my confusion is what actions specifically are being coordinated, if communication is impossible. Acausal trade is already pretty theoretical, and completely unnecessary if both parties are ALREADY aligned in their utility functions over each observable part of the universe. And any causal agreement is impossible without communication.
Coordination across vast distances of spacetime would be necessary to remain a united civilisation (as opposed to fracturing into a multitude of [potentially very divergent] civilisations)
Note that this prevents improvements as much as it prevents degradation. And more importantly, this seems like "control" or "limitations imposed on children" rather than "coordination". Unless you model "civilization" as an ant-like single utility function and a single timeless agent, which sub-agents merely comprise. No independence or misalignment or even uncertainty of goals can exist in such a picture, and I'll pre-commit to finding the weakness that brings the whole thing down, just for pure orneriness.
↑ comment by Donald Hobson (donald-hobson) · 2022-11-24T01:16:18.634Z · LW(p) · GW(p)
No independence or misalignment or even uncertainty of goals can exist in such a picture, and I'll pre-commit to finding the weakness that brings the whole thing down, just for pure orneriness.
Really. Let's paint a picture. Let's imagine a superintelligent AI. The superintelligence has a goal. Implicitly defined in the form of a function that takes in the whole quantum wavefunction of the universe and outputs a number. Whether a particular action is good or bad depends on the answer to many factual questions, some of which it is unsure about. When the AI only has a rough idea that cows exist, it is implicitly considering a vast space of possible arrangements of atoms that might comprise cows. The AI needs to find out quite a lot of specific facts about cow neurochemistry before it can determine whether cows have any moral value. And maybe it needs to consider not just the cow's neurochemistry, but what every intelligent being in the universe would think, if hypothetically they were asked about the cow. Of course, the AI can't compute this directly, so it is in the state of logical uncertainty as well as physical uncertainty.
The AI supports a utopia full of humans. Those humans have a huge range of different values. Some of those humans seem to mainly value making art all day. Some are utilitarian. Some follow virtue ethics. Some personal hedonism with wireheading. A population possibly quite neurodiverse compared to current humanity, except that the AI prevents anyone actively evil from being born.
Note that this prevents improvements as much as it prevents degradation.
If you can actually specify any way, however indirect and meta, to separate improvements from degradation, you can add that to your utility function.
↑ comment by Dagon · 2022-11-24T01:41:24.548Z · LW(p) · GW(p)
I can't follow your example. Does the AI have a goal in terms of the quantum wavefunction of the universe, or a goal in terms of abstractions like "cow neurochemistry"? But either way, is this utopia full of non-aligned, but not "actively evil" humans just another modeled and controlled part of the wavefunction, or are they agents with goals of their own (and if so, how does the AI aggregate those into its own)?
And more importantly for the post, what does any of that have to do with non-causal-path coordination?
↑ comment by Donald Hobson (donald-hobson) · 2022-11-25T01:01:20.344Z · LW(p) · GW(p)
The AI has a particular python program, which, if it were given the full quantum wave function and unlimited compute, would output a number. There are subroutines in that program that could reasonably be described as looking at "cow neurochemistry". The AI's goals may involve such abstractions, but only if rules say how such a goal is built out of quarks in its utility function. Or it may be using totally different abstractions, or no abstractions at all, yet be looking at something we would recognize as "cow neurochemistry".
But either way, is this utopia full of non-aligned, but not "actively evil" humans just another modeled and controlled part of the wavefunction, or are they agents with goals of their own
Of course they are modeled, and somewhat controlled. And of course they are real agents with goals of their own. Various people are trying to model and control you now. Sure, the models and control are crude compared to what an AI would have, but that doesn't stop you being real.
This doesn't have that much to do with far coordination. I was disagreeing with your view that "locked in goals" implies a drab chained up "ant like" dystopia.
↑ comment by DragonGod · 2022-11-24T08:52:42.425Z · LW(p) · GW(p)
I agree that I did not specify full, working implementations of "far coordination". There are details that I did not fill in to avoid prematurely reaching for rigour.
The kind of coordination I imagined is somewhat limited.
I guess this is an idea I may revisit and develop further sometime later. I do think there's something sensible/useful here, but maybe my exploration of it wasn't useful.
comment by Jay Olson (stephan-olson) · 2023-11-28T18:36:49.679Z · LW(p) · GW(p)
I have described certain limits to communication in an expanding cosmological civilization here: https://arxiv.org/abs/2208.07871
Assuming a civilization that expands at close to the speed of light, your only chance to influence the behavior of colonies in most distant galaxies must be encoded in what you send toward those galaxies to begin with (i.e. what is in the colonizing probes themselves, plus any updates to instructions you send early on, while they're still en route). Because the home galaxy (the Milky Way) will never hear so much as a "hello, we've arrived" back from approximately 7/8 of the galaxies that it eventually colonizes (due to a causal horizon).
You'll have some degree of two-way communication with the closest 1/8 of colonized galaxies, though the amount of conversation will be greatly delayed and truncated with distance.
To see just how truncated, suppose a colony is established in a galaxy, and they send the following message back towards the Milky Way: "Hello, we've arrived and made the most wonderful discovery about our colony's social organization. Yes, it involves ritualistically eating human children, but we think the results are wonderful and speak for themselves. Aren't you proud of us?"
As I mentioned, for only 1/8 of colonized galaxies would that message even make it back to the Milky Way. And for only the 1/27 closest galaxies would the Milky Way be able to send a reply saying "What you are doing is WRONG, don't you see? Stop it at once!" And you can expect that message to arrive at the colony only after a hundred billion years, in most cases. In the case of the 1/64 closest colonies, the Milky Way could also expect to hear back "Sorry about that. We stopped." in reply.
That is, unless the current favored cosmology is completely wrong, which is always in the cards.
So, yeah -- if you want to initiate an expanding cosmological civilization, you'll have to live with the prospect that almost all of it is going to evolve independently of your wishes, in any respect that isn't locked down for all time on day 1.
↑ comment by gwern · 2023-11-28T22:26:20.114Z · LW(p) · GW(p)
That is, unless the current favored cosmology is completely wrong, which is always in the cards.
FWIW, that's why I disagree with one of your minor conclusions: there being an inherent myopia to superintelligences which renders everything past a certain distance "exactly zero". There is quite a bit of possibility in the cards about one of the many assumptions being wrong, which creates both risk and reward for not being myopic. So the myopia there would not lead to exactly zero valuation - it might lead to something that is quite substantially larger than zero.
And since the cost of spitting out colonization starwisps seems to be so low in an absolute sense, per Anders, it wouldn't take much above zero value to motivate tons of colonization anyway.
Indeed, the fundamental epistemological & ontological uncertainties might lead you to problems of the total valuation being too large, because any possibility of being able to break lightspeed or change expansion or any of the other loopholes means both that you are now massively threatened by any other entity which cracks the loopholes, and that you can do the same to the universe - which might then be vastly larger - and now you are in infinite-fanaticism territory dealing with issues like Pascal's mugging where the mere possibility that any of the colonized resources might solve the problem leads to investing all resources in colonization in the hopes of one of them getting lucky. (This is analogous to other possible infinite-fanaticism traps: 'what if you can break out of the Matrix into a literally infinite universe? Surely the expected value of even the tiniest possibility of that justifies spending all resources on it?')
(There is also a modest effect from evolution/selection: if there is any variance between superintelligences about the value of blind one-way colonization, then there will be some degree of universe-wide selection for the superintelligences which happen to choose to colonize more blindly. Those colonies will presumably replicate that choice, and then go on to one-way colonize in their own local bubble, and so on, even as the bubbles become disconnected. Not immediately obvious to me how big this effect would be or what it converges to. Might be an interesting use of the Price equation.)
↑ comment by Jay Olson (stephan-olson) · 2023-11-29T00:31:00.667Z · LW(p) · GW(p)
Yes, I agree. As you point out, that's a general kind of problem with decision-making in an environment of low probability that something spectacularly good might happen if I throw resources at X. (At one point I actually wrote a feature-length screenplay about this, with an AI attempting to throw cosmic resources at religion, in a low-probability attempt to unlock infinity. Got reasonably good scores in competition, but I was told at one point that "a computer misunderstanding its programming" was old hat. Oh well.)
My pronouncement of "exactly zero" is just what would follow from taking the stated scientific assumptions at face value, and applying them to the specific argument I was addressing. But I definitely agree that a real-world AI might come up with other arguments for expansion.
comment by Donald Hobson (donald-hobson) · 2022-11-24T00:13:00.691Z · LW(p) · GW(p)
Stasis may be undesirable as species that self-modify to lock themselves into a particular version may be less adaptable, less able to deal with unforeseen circumstances. Perhaps such species may be outcompeted by more dynamic/adaptable species.
I don't think this is a real significant effect. Remember, what you are locking in is very high level and abstract.
Let's say you locked in the long term goal of maximizing paperclips. That wouldn't make you any less adaptable. You are still totally free to reason and adapt.
comment by Donald Hobson (donald-hobson) · 2022-11-24T00:09:36.788Z · LW(p) · GW(p)
If the frozen aspects are confined to a decision making elite, but most citizens are allowed to drift freely, the involved societies would soon find themselves in a situation where their governance structures and leaders are archaic, or so far removed from their current values that it's dystopian.
Loads of implicit assumptions in that. Also a sense in which you are attempting to lock in a tiny sliver of your own values. Namely you think a world where the decision makers and citizens have very different values is dystopian.
comment by Donald Hobson (donald-hobson) · 2022-11-23T23:39:28.657Z · LW(p) · GW(p)
Regardless, the kind of free form evolution in values, philosophy, governance/coordination systems we've enjoyed for most of human history would become a thing of the past.
I think there is an extent to which we want to lock in our values, or meta values, or value update rules anyway. Regardless of the issues about far coordination. Because they are our values. If you wind back time far enough, and let a bunch of homo erectus lock in their values, they would choose somewhat differently. Now I won't say "Tough, sucks to be a homo erectus." The rules we choose to lock in may well be good by homo erectus values. We might set meta level rules that pay attention to their object level values. Our object level values might be similar enough that they would think well of our optimum. Remember "Not exactly the whole universe optimized to max util" != "bad"
If baby eating aliens came and brainwashed all humans into baby eating monsters, you have to say: "No, this isn't what I value, this isn't anything close. And getting brainwashed by aliens doesn't count as the "free form evolution of values" the way I was thinking of it either. I was thinking of ethical arguments swaying humans' opinions, not brainwashing. (Actually, the difference between those can be subtle.) The object level isn't right. The meta level isn't right either. This is just wrong." I want to lock in our values, at least to a sufficient extent to stop this.