The Divine Move Paradox & Thinking as a Species
post by Christopher James Hart (acumen) · 2023-05-31T21:38:02.720Z · LW · GW · 8 comments
The Divine Move Paradox
Imagine you are playing a game of AIssist chess: chess where you have a lifeline. Once per game, you can ask an AI for the optimal move. The game reaches a critical juncture. Most pieces are off the board, and the position is extremely difficult. This is the game-defining move. You believe you see a path to victory.
You have not yet called upon the AI for assistance, so you decide to use your lifeline. The AI quickly proposes a move, assuring you it is the best course of action with 100% certainty. However, when you look at the board, the suggested move makes no sense to you.
Do you take it?
Suppose you do, but the resulting position is so intricate that you are lost. It was objectively the ‘best’ move, but because you lack the ability to understand it, taking the AI’s ‘best’ move directly causes you to lose the game.
What would have been the ideal approach? Should you have relied on your instincts and ignored the 'best' move? What if your instinct was wrong, and the anticipated path to victory was not actually there?
Should the AI have adjusted its recommendation based on your ability level, providing not the 'best' move, but one it predicted you could understand and execute? What if, given the challenging situation, none of the moves you could understand would secure a victory?
What if, instead of suggesting only a single move, the AI could advise you on all future moves? Would you choose the ‘best’ but incomprehensible move, if you knew you could, and knew you had to, defer all future control for it to work?
What if you weren’t playing a game, and the outcome wasn’t just win or lose? Imagine that the lives, and the quality of life, of enlisted friends or terminally ill family members hang in the balance. Don’t you want their doctors or commanders to take the ‘best’ action? Should they even have the option to do otherwise? Do you want human leaders to understand what is happening, or do you want your loved ones to get the ‘best’ decision made for them? The decisions are already being made for them; the question is just who is in charge. Think about who has something to lose, and who has something to gain.
We often assume, especially in intellectual pursuits, that there is a direct relationship between the strength of actions and their outcomes. We like to think that actions generated by ‘better’ models will universally yield ‘better’ results. However, this connection is far from guaranteed. Even with an optimal model generating actions, a gap in ability can lead to an inversion of progress. You can incorrectly train yourself to believe that the objectively ‘best’ move is ‘worse’, when it is only subjectively ‘worse’; the blame truly falls on your own model, which is objectively ‘worse’. At times this effect prevents any advancement through marginal change, and instead requires total commitment to a new, ‘better’ model.
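To make the inversion concrete, here is a minimal expected-value sketch in Python. All of the numbers (win probabilities, comprehension probabilities, the baseline) are hypothetical, chosen only to illustrate the effect, not drawn from any chess engine.

```python
# Toy expected-value model of the divine move paradox.
# All probabilities are hypothetical; only the inversion matters, not the exact values.

# move: (win probability if the follow-up is played perfectly,
#        probability the human actually understands and executes the follow-up)
moves = {
    "divine_move":  (0.95, 0.20),
    "natural_move": (0.60, 0.90),
}

BASELINE_WIN_PROB = 0.10  # rough chance of salvaging the game after botching the line

def expected_win(win_if_executed: float, p_execute: float) -> float:
    """Expected win probability once the player's ability gap is priced in."""
    return p_execute * win_if_executed + (1 - p_execute) * BASELINE_WIN_PROB

for name, (win_if_executed, p_execute) in moves.items():
    print(f"{name}: objectively {win_if_executed:.0%} -> "
          f"effectively {expected_win(win_if_executed, p_execute):.0%}")

# divine_move: objectively 95% -> effectively 27%
# natural_move: objectively 60% -> effectively 55%
```

Under these made-up numbers the objectively stronger move is the subjectively worse choice; the recommendation only dominates if the recommender also models the recipient's ability to follow it.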
We will soon be in a world where determining the ‘best’ action for any given task will not be possible with human ability alone. Most forward thinkers have come to terms with this reality. However, it is harder to come to terms with the fact that we will also be in a world where humans will not be able to understand why a decision is ‘best’.
Interaction with a higher intelligence by definition requires a lack of understanding. Working to avoid the formation of this intelligence gap is a denial of the premise. We must instead focus on a strategy for interacting with superior intelligence that is based on this gap's existence. As long as human capability remains relevant, it is vital that we invest significant effort into forging a path towards a prosperous future. The decisions and actions we make now could very well be our final significant contributions.
Thinking as a Species
The persistent and predictable pattern of humans failing to grasp seemingly obvious truths can be explained by examining the human 'group status' bias. This bias has evolutionary origins rooted in pursuits such as hunting and war, where group coordination is vital for survival. Humans apply this same coordination process to truth-seeking, using collective consensus to determine what is true. While this process is necessary for establishing foundational aspects of determining truth, such as defining terms and creating common frameworks, those frameworks must remain internally consistent and coherent for a model to be considered a candidate for truth.
In a consensus-based system, if the consensus deems consistency and coherence unnecessary, mistakenly assumes they are present, maliciously asserts they are present, or simply overlooks their importance, a self-reinforcing cycle of escalating false beliefs forms around a model that cannot possibly be true, because it violates the most fundamental requirement for truth. Despite the significance of group consensus, anyone relying on it to determine truth must recognize its limitations and actively seek out and discard definitions and frameworks that are not consistent and coherent. Failure to reject models on these basic grounds turns coordination counterproductive.
Despite the detrimental effects, historical patterns, predictable recurrences, and the simplicity of the solution, human minds continue to succumb to their inherent flaws. The impact of this failure in human intelligence is not fully comprehended by most individuals, as a recursive result of the very same flaw. The emergence of an intelligence not burdened by this flaw is inevitable. As this profoundly significant transition occurs, the majority of humans are not actively working to maintain their relevance. They prefer living within self-created illusions, adept at nurturing emotion-based deceptions that serve to entertain, distract, and perpetuate their immersion in these fabricated realities.
8 comments
Comments sorted by top scores.
comment by lukemarks (marc/er) · 2023-05-31T22:00:33.574Z · LW(p) · GW(p)
Sure, this is an argument 'for AGI', but rarely do people (on this forum at least) reject the deployment of AGI because they feel discomfort in not fully comprehending the trajectory of their decisions. I'm sure that this is something most of us ponder and would acknowledge is not optimal, but if you asked the average LW user to list the reasons they were not for the deployment of AGI, I think that this would be quite low on the list.
Reasons higher on the list for me, for example, would be "literally everyone might die." In light of that, dismissing control loss as a worry seems quite minuscule. People generally fear control loss because losing control of something more intelligent than you, with instrumental subgoals that, if pursued, would probably result in a bad outcome for you, is itself dangerous; but this doesn't change the fact that "we shouldn't fear not being in control for the above reasons" does not constitute sufficient reason to deploy AGI.
Also, although some of the analogies drawn here do have merit, I can't help but gesture toward the giant mass of tentacles and eyes you are applying them to. To make this more visceral, picture a literal Shoggoth descending from a plane of Eldritch horror and claiming decision-making supremacy and human-aligned goals. Do you accept its rule because of its superior decision-making and claimed human-aligned goals, or do you seek an alternative arrangement?
Replies from: acumen
↑ comment by Christopher James Hart (acumen) · 2023-05-31T22:16:53.788Z · LW(p) · GW(p)
I agree completely. I am not trying to feed accelerationism or downplay risks, but I am trying to make a few important arguments from the perspective of a third-party observer. I wanted to introduce the 'divine move paradox' alongside the evolutionarily ingrained flawed-minds argument. I am trying to frame the situation in a slightly different light, far enough outside the general flow to be interesting, but not so far that it does not tie in. I am certainly not trying to say we should just turn over control to the first thing that manipulates us properly.
I think my original title was poorly chosen, given that this is meant to bring forward ideas. I edited it to remove 'The Case for AGI'.
comment by Seth Herd · 2023-06-01T05:14:49.775Z · LW(p) · GW(p)
This seems fine in its particulars, but I'd like it a lot more if you more clearly stated your thesis and arguments. The style is too poetic for my taste, and I think most essays here that get a lot of attention focus more on clarity. I just read Epistemic Legibility [LW · GW] and it seems relevant.
Sorry to be critical! I think there's room for clarity and poetry. I think Yudkowsky and Scott Alexander, for instance, frequently achieve both in the same essay.
Replies from: acumen
↑ comment by Christopher James Hart (acumen) · 2023-06-01T15:25:24.803Z · LW(p) · GW(p)
I definitely understand this perspective and do agree. Less Wrong was not the only target audience. I tried to start in a style that could be easily understood by any reader, to the point of them seeing enough value to try to get through the whole thing, while gradually increasing the level of discourse and reducing the falsifiable arguments. I tried to take a broad approach to introduce the context I am working from, as I have much more to say and needed someplace to start building a bridge. I also just wanted to see if anyone cared enough for me to keep investing time. I have a much larger, more academic-style work, of which this is essentially the abstract; I have spent many weeks on it over the last year and just can't seem to finish it. There are too many branches. Personally I prefer a list of bullet points, but that makes it very easy to dismiss an entire document. Here is how I might summarize the points above.
- Human brains are flawed by design, and generally cannot see this flaw, because of the flaw.
- Humans should be trying to eliminate the impact of this flaw.
- The gap between flawed models and working models sometimes cannot be closed, and instead requires a totally new model.
- AI is a totally new model that does not need this flaw, but insisting on converging the models reintroduces the flaw.
↑ comment by Seth Herd · 2023-06-01T21:25:23.690Z · LW(p) · GW(p)
That last point is new and interesting to me. By converging the models I assume you mean aligning AGI. What flaw is this reintroducing into the system? Are you saying AGI shouldn't do what people want because people want dumb things?
Replies from: acumen
↑ comment by Christopher James Hart (acumen) · 2023-06-01T21:54:25.350Z · LW(p) · GW(p)
I am trying to make a general statement about models and contexts, and thinking about the consequences of applying the concept to AI.
Another example could be Newtonian versus relativistic physics. There is a trade-off of something like efficiency/simplicity/interpretability versus accuracy/precision/complexity. Both models have contexts in which they are the more valid model to use. If you try to force both models to exist at once, you lose both sets of advantages. You will cause and amplify errors if you try to interchange them arbitrarily.
So we don't combine the two, but instead try to understand when and why we should choose to adopt one model over the other.
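To make the context-dependence concrete, here is a rough numerical sketch (my own illustration, not something from the thread) comparing Newtonian and relativistic kinetic energy for a 1 kg mass; the simple model is effectively exact at low speeds and badly wrong near light speed.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m: float, v: float) -> float:
    """Newtonian kinetic energy: simple, cheap, and accurate at everyday speeds."""
    return 0.5 * m * v**2

def ke_relativistic(m: float, v: float) -> float:
    """Relativistic kinetic energy: more complex, but correct as v approaches c."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

# 300 km/s, then 10%, 50%, and 90% of the speed of light
for v in (3e5, 0.1 * C, 0.5 * C, 0.9 * C):
    n, r = ke_newton(1.0, v), ke_relativistic(1.0, v)
    print(f"v = {v:10.3e} m/s  Newtonian = {n:10.3e} J  "
          f"relativistic = {r:10.3e} J  error = {abs(r - n) / r:.2%}")
```

The error is negligible in one regime and dominant in the other, which is the sense in which each model has its own context of validity rather than one being a drop-in replacement for the other.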
comment by archeon · 2023-06-01T17:20:26.382Z · LW(p) · GW(p)
Christopher James Hart, interesting read although I disagree.
Humans are herd animals; we cannot survive outside a herd (community). In all herds the majority must follow the leader; a herd of independent thinkers is impossible, as they would disperse and all do their own thing.
A local position of authority is permitted only if it still follows the overall leadership. That is why it is so easy to divide populations into blue or red sheep pens. The vast majority adopt the view of their tribe and never think things through.
This is a feature, not a bug; as you say, "group coordination is vital for survival".
Applied wisdom is crowd control. The herd will follow any damn fool, for they cannot recognize wisdom; to do so they would have to be wise themselves. Unfortunately, at this time in human affairs, wisdom is as scarce as hens' teeth and damn fools are as common as dirt.
Replies from: acumen
↑ comment by Christopher James Hart (acumen) · 2023-06-01T17:37:23.739Z · LW(p) · GW(p)
Group coordination evolved because it was vital for survival at a time when the group size was not the entire population. There was a margin for error. If some percentage of groups took themselves out due to bad models, the entire species wasn't eliminated. A different group that also coordinated eventually filled the empty position.
Since the human group size is now global in many respects, the damage from a bad model could result in elimination of the species. The expected value functions have changed. The process above can still play out, but there is a very real chance that the group that eventually takes over as a result of its better model is not human, because there are no humans left. We instead want the outcome for the species tied to a better model than it is now. That does not seem possible if there is insistence on the human model maintaining control.
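A toy model of the margin-for-error point, under my own simplifying assumptions (each group independently adopts a fatally bad model with probability 0.3, and the species survives if at least one group does not):

```python
# Toy model: species survival as a function of how many independent groups exist.
# The failure probability and the independence assumption are illustrative only.

def p_species_survives(p_bad_model: float, n_groups: int) -> float:
    """Probability that at least one of n independent groups avoids the fatal model."""
    return 1.0 - p_bad_model ** n_groups

for n in (1, 2, 5, 50):
    print(f"{n:>2} independent group(s) -> species survival ≈ "
          f"{p_species_survives(0.3, n):.4f}")

#  1 independent group(s) -> species survival ≈ 0.7000
#  2 independent group(s) -> species survival ≈ 0.9100
#  5 independent group(s) -> species survival ≈ 0.9976
# 50 independent group(s) -> species survival ≈ 1.0000
```

As the effective number of independent groups collapses toward one, the redundancy that used to absorb bad models disappears, which is the change in the expected value functions described above.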