"Free Will" in a Computational Universe

post by DragonGod · 2022-09-22T21:25:26.087Z · LW · GW · 6 comments

Contents

      Epistemic Status
      Disclaimer
      Acknowledgements
  Introduction
        You cannot know the output of an arbitrary function on an arbitrary input without computing the function.
  Computational Free Will as Libertarian Free Will
    Argument
  Objections
    Objection From Functional Determinism
    Objection From Computational Determinism
    Objection From Physical Determinism
    Reductio
    The "Free Will" of a Light Switch
    The Anticipated Experiences of "Computational Free Will"
  In Defence of "Computational Free Will"
  Interlude: A Hierarchy of Identity
  Computational Free Will and Predictors
  Caveats and Limitations
    Theoretical
      Computability
    Practical
  Closing Remarks
    Summary
        You cannot know the output of an arbitrary function on an arbitrary input without evaluating the function.
    Tentative Conclusions
    Alternative Conclusions

Previous post: Initial Thoughts on Dissolving "Couldness" [LW · GW]

 

Epistemic Status

This has been sitting in my drafts, unattended, for a month. I'm publishing it now so that I publish it at all. I do not currently endorse this post or its predecessor.

 

Disclaimer

This is an expansion of my previous post, generated in a stream-of-consciousness manner. I found it sufficiently weird/sketchy that I decided to separate it into its own post.

This is an area in which my knowledge has many holes. Take it with a grain of salt (I don't fully trust it myself).

 

Acknowledgements

Discussions with others helped me refine/sharpen these ideas. I'm grateful for them.


Introduction

In the previous post, I showed that it is impossible to determine, without running it, the output (choice) of an arbitrary (decision) algorithm on arbitrary input (sensory data).

This seems to give rise to a computational notion of libertarian "free will". Arbitrary decision algorithms can genuinely make any choice (or simply not halt). 

To phrase this "computational free will" in the most succinct manner:

You cannot know the output of an arbitrary function on an arbitrary input without computing the function.

(This phrasing also covers computing any extensionally equivalent algorithm. For algorithms that don't halt, "not halting" can simply be considered another output of the function.)
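As a toy illustration (the function and inputs below are purely hypothetical), the only general-purpose "predictor" of an arbitrary decision function is one that evaluates the function, or something extensionally equivalent to it:

```python
# A toy sketch (hypothetical names): knowing the output of an arbitrary
# decision function means evaluating it, or an extensional equivalent of it.

def decision_function(observation_history):
    """Some arbitrary decision algorithm; treat it as a black box."""
    if len(observation_history) % 2 == 0:
        return "act"
    return "abstain"

def predict(f, observation_history):
    # For arbitrary f there is no general shortcut (cf. Rice's theorem):
    # the predictor must evaluate f itself, i.e. compute the function.
    return f(observation_history)

print(predict(decision_function, ("saw red light", "heard alarm")))  # -> "act"
```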

 

I'll investigate this notion of "computational free will" over the remainder of the post.


Computational Free Will as Libertarian Free Will

I think computational free will provides a seamless implementation of libertarian free will that is no less free than e.g. dualistic implementations. That is, I wish to argue that at least one of the below is true:

  1. Algorithms are endowed with the exact same "free will" that dual minds ("souls", "spirits", etc.) are imagined to have in dualistic implementations of libertarian free will.
  2. Libertarian "free will" — even in dualistic ontologies — is an incoherent or confused concept.

That is, insomuch as "libertarian free will" is a coherent and sensible concept, it manifests for algorithms. There is no free will that a dual mind endowed with libertarian free will (henceforth DELF) can have that is absent from an algorithm.

 

Argument

The thesis of computational free will is that the output of arbitrary decision functions (on arbitrary inputs) cannot be known a priori (here, "a priori" means something like "before computing/evaluating the function"). If you want to know the output of a decision function on arbitrary inputs, you just have to compute the function[1][2].

In dualist implementations of libertarian free will, a mind can choose any of a given set of actions. There is no knowledge as to what choice the mind will make ahead of time. Before the mind makes its choice, its choice is undetermined.

Isomorphically, an algorithm can choose any of a given set of actions. There is no knowledge as to what choice the algorithm will make ahead of time. Before the algorithm (or an extensional equivalence thereof) is computed, its choice is undetermined.


Objections

There are many objections that one might raise to the notion of "computational free will". I will address the ones that I am aware of.

 

Objection From Functional Determinism

One might object that an algorithm is just computing a function, and thus its choice is predetermined in a way a mind's choice is not. 

But this is really just a fundamental misunderstanding of what a "function" is. In the set-theoretic formulation, a function f : X → Y is a relation from X to Y (a subset of X × Y) such that every x ∈ X is related to exactly one y ∈ Y.

It's a mapping from inputs to outputs.

A decision function can also be constructed for a dual mind. If, when confronted with a choice, a dual mind endowed with libertarian free will picks a particular action (even if it could have chosen otherwise), then you could construct a mapping from the mind's sensory history to its actions[3].

It doesn't matter that we cannot know what that choice is, just that the mind will actually choose something (even if the choice is abstaining). A dual mind endowed with libertarian free will also "evaluates" a decision function.

If this seems overly trivial and tautological, that's because it is. A decision function is not something that determines your decision making, rather, it's a description of said decision making. 

Thus, the idea that algorithms can't have libertarian free will because they are computing a function is silly. The function describes the algorithm, it doesn't determine it.
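To make the "description, not mechanism" point concrete, here is a minimal sketch (names hypothetical): a decision function can be assembled after the fact as a bare record of choices, whether those choices were made by an algorithm or by a DELF, and it exerts no causal influence on the choices it records.

```python
# Hypothetical sketch: a decision function as a mere record of choices.
observed_choices = {}

# Log choices as they are made (by a DELF or by an algorithm; it doesn't
# matter which):
observed_choices[("rainy", "offered umbrella")] = "take umbrella"
observed_choices[("sunny", "offered umbrella")] = "decline"

def decision_function(sensory_history):
    # A (partial) mapping from sensory history to action. It describes the
    # chooser's behaviour; it is not the process that produced the choices.
    return observed_choices[sensory_history]

print(decision_function(("rainy", "offered umbrella")))  # -> "take umbrella"
```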

 

Objection From Computational Determinism

Someone may object that because an algorithm, when run with the exact same input, will yield the exact same output, it doesn't have free will. But I don't think DELFs would behave otherwise. Even if the choice of a DELF cannot be known ahead of time, even though the DELF can choose otherwise, if you could rewind time, the DELF would still make the same choice[3].

If the DELF did not make the same choice on rewound time — if it didn't merely sample a different outcome from the same strategy, but genuinely adopted a different strategy — well, then it didn't make a choice at all. 

For rational choice to be meaningful at all, an agent faced with the exact same observation history must adopt the same strategy. If it does not adopt the same strategy, then it's not "choosing" at all. Its "choice" is being governed by a process external to the agent. Rather than "free will", the agent is a slave to the whims of something else.

The only meaningful and coherent notions of "free will" — the ability of agents to choose from among a set of outcomes — must of necessity be computationally deterministic. Even if the agent can choose from among a variety of strategies at a given point in time, there is one strategy that they will actually choose, and they will always choose that same strategy given the same observational history. 
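To make the strategy/action distinction concrete (the terminology is spelled out in footnote [3]; the policy names below are purely illustrative), here is a minimal sketch: the strategy, a probability distribution over policies, is identical on every rerun even when the sampled action differs.

```python
import random

def policy_cooperate(history):
    return "cooperate"

def policy_defect(history):
    return "defect"

# A strategy is a probability distribution over decision policies.
pure_strategy = {policy_cooperate: 1.0}
mixed_strategy = {policy_cooperate: 0.7, policy_defect: 0.3}

def act(strategy, history):
    policies, weights = zip(*strategy.items())
    chosen_policy = random.choices(policies, weights=weights)[0]
    return chosen_policy(history)

# On rewound time, the *strategy* chosen is the same; only the sampled
# action may differ (and only for mixed strategies).
print(act(mixed_strategy, ("same", "observation", "history")))
```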

 

Objection From Physical Determinism

One might object that computational free will is bound by physical determinism in a way that dualistic libertarian free will is not. However, it's not clear to me that this is necessarily the case. Computation appears to be substrate-independent; it does not appear to clearly "reduce" to physics [? · GW][4]:

Consider the simple computation "2 + 2 = 4". 

We might represent this computation by placing a group of two peas and another group of two peas together to form a new group.

If each pea represents:

  1. The number "1", then we have computed: 2 + 2 = 4
  2. The string "go", then we have computed: ("go", "go") + ("go", "go") = ("go", "go", "go", "go")

 

Computation is frame- and interpretation-dependent. The semantic content of a given computational/informational state is indeterminate. We need to know the referents in idea space of the different computational elements. E.g., in the second computation above, the referents are:

  • Peas: string "go"
  • Collection of peas: tuple
  • Placing a group of peas together: tuple addition

Computation is implemented by physics, but it is not uniquely implemented by physics. The same physical system can implement multiple computations, and the same computation can be implemented by multiple physical systems[5].
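The pea example can be rendered directly as a sketch (the interpretation mappings are mine): one and the same "physical" rearrangement supports two different computations, depending on what the elements are taken to refer to.

```python
# One "physical" event: two groups of two peas are pushed together.
group_a = ["pea", "pea"]
group_b = ["pea", "pea"]
merged = group_a + group_b  # the physical rearrangement

# Interpretation 1: each pea denotes the number 1; merging denotes addition.
as_number = sum(1 for _ in merged)        # 2 + 2 = 4

# Interpretation 2: each pea denotes the string "go"; a group denotes a tuple.
as_tuple = tuple("go" for _ in merged)    # ("go", "go") + ("go", "go")

print(as_number)  # 4
print(as_tuple)   # ('go', 'go', 'go', 'go')
# Same physical state, two different computations; the semantics live in the
# interpretation, not in the peas.
```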

 

Reductio

If I claim that all algorithms have computational "free will", then it leads to implications that seem prima facie absurd, such as, for example, that a light switch has free will (for how can you "know" ahead of time what choice a light switch makes without running the light switch's algorithm [or an extensional equivalence thereof]?). I will call this the "light switch objection".

This clearly does not map to intuitive notions of libertarian free will, so my notion of computational free will can be argued to be so trivial as to be tautological. As Karl Popper said, "a theory that explains everything explains nothing". 

Furthermore, beliefs should pay rent in anticipated experiences [LW · GW]. Thus, one might naturally ask how the world would look if computational "free will" were false. I will call this the "anticipated experiences objection".

I'll try to address both objections.

 

The "Free Will" of a Light Switch

Perhaps unfortunately, I find myself willing to bite the bullet of the "light switch objection". 

When I think of what actually happens when I predict what a light switch does ahead of time, I actually run the algorithm of the light switch (or at least something extensionally equivalent to it). That is, I compute the function of the light switch. And all that "computational free will" claims is that the output of a function cannot be known without computing it.

So, light switches have free will, at least as much as humans do anyway.
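For concreteness, here is a sketch of what "running the light switch algorithm in my head" amounts to (both functions are illustrative): the mental model is a different algorithm, but it computes the same function, and that is all that predicting the switch requires.

```python
def light_switch(is_on: bool, flipped: bool) -> bool:
    # The switch's entire "decision algorithm": toggle when flipped.
    if flipped:
        return not is_on
    return is_on

def my_mental_model(is_on: bool, flipped: bool) -> bool:
    # An extensionally equivalent algorithm the predictor runs "in their head".
    return is_on ^ flipped

# Predicting the switch just is evaluating (something equivalent to) it.
assert all(
    my_mental_model(s, f) == light_switch(s, f)
    for s in (True, False)
    for f in (True, False)
)
```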

 

The Anticipated Experiences of "Computational Free Will"

What would the world look like if "computational free will" were false?

It feels like asking "what would the world look like if we permitted sets to contain themselves as members?" Computational free will flows from the same limitations of self-reference (the halting problem, Rice's theorem, and the proof by contradiction in the previous post all flow from limitations of self-referential mathematical structures).

Such a world is not logically possible, and imagining conceptually possible but logically impossible worlds is not something I'm currently capable of.

 

I think the necessary truth of "computational free will" makes the "anticipated experiences objection" not very useful/valuable. The main issue is whether "computational free will" is useful given that light switches also have it.


In Defence of "Computational Free Will"

The main reason why I think the notion of "computational free will" is useful/valuable (despite the "light switch objection") is that it seems to be isomorphic to e.g. a dualistic implementation of libertarian free will (if one considers the "objection from physical determinism" sufficiently answered). There may not be a meaningful sense in which ontologically basic mental substance would confer more free will than computation does[6].

Thus, insomuch as "free will" is a coherent concept at all, algorithms are endowed with it.


Interlude: A Hierarchy of Identity

I propose the below hierarchy when thinking about agents:

  1. A function
  2. An algorithm that computes that function
  3. A physical system that implements the algorithm

"I" (and I believe other sophisticated agents) are entities on level 2[7]. There is currently a unique physical system that implements the algorithm that I recognise as "me", but that's a peculiar feature of our current technological level not an inherent characteristic of reality. A completely different physical system could compute "me" (e.g. if I uploaded my consciousness)[8].

I think it would be mistaken to identify myself as any particular physical system (for the reason specified above). Nor do I think I should be reduced to my decision function. My decision function just describes my behaviour; my internal experience is part of who "I" am.
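A rough sketch of the three levels, using an illustrative toy function (the names are mine): the function is a bare mapping, two distinct algorithms compute it, and level 3 is whatever physical substrate happens to run either one.

```python
# Level 1: the function, a bare input -> output mapping.
negation_function = {True: False, False: True}

# Level 2: two different algorithms that compute the same function.
def negate_by_lookup(x: bool) -> bool:
    return negation_function[x]

def negate_by_branching(x: bool) -> bool:
    if x:
        return False
    return True

# Level 3 is the physical substrate (a brain, a CPU, peas on a table) that
# happens to run one of these algorithms; the algorithm underdetermines it.
assert all(negate_by_lookup(x) == negate_by_branching(x) for x in (True, False))
```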


Computational Free Will and Predictors

How does computational free will interact with accurate predictors? Well, to predict the output of a decision function on arbitrary input, a predictor must compute that function.

Suppose Omega wanted to predict my choice on Newcomb's problem. Then it would need to compute my decision function for Newcomb problems[9].

And considering that my decision function just describes my behaviour — it doesn't determine it — it seems sensible that I can "choose" the output of my decision function (for my decision function merely describes the output of my decision algorithms)[10].

Thus, by controlling the output of my decision algorithm, I control the predictions of any agents computing my decision function[11].
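Here is a hedged sketch of that claim (Omega and the problem labels are illustrative, not a formal decision-theory model): the prediction just is an evaluation of my decision function on the relevant input, so it tracks whatever I in fact output.

```python
def my_decision_function(problem: str) -> str:
    # Whatever I would actually output on this input.
    return "one-box" if problem == "newcomb" else "shrug"

def omega_predict(decision_function, problem: str) -> str:
    # Omega's prediction is an evaluation of my decision function (or of
    # something extensionally equivalent to it) on the relevant input.
    return decision_function(problem)

# Controlling my output therefore controls the prediction.
assert omega_predict(my_decision_function, "newcomb") == my_decision_function("newcomb")
```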


Caveats and Limitations

Theoretical

Computability

The above formulation seemingly requires the stipulation that the universe be computable. If halting oracles exist, then one might believe that computational free will is violated; I am not sure that is the case.

In the case of a halting (or other) oracle, even if the decision function is not "computed", it is still "evaluated"[2]. You still have to evaluate the function (even if by oracle rather than by computation) to "know" its output on arbitrary input.

This doesn't conflict with dualist interpretations of libertarian free will either[12]. In non-computable universes, the "computational free will" thesis may simply be restated as:

You cannot know the output of an arbitrary function on an arbitrary input without evaluating the function.

(I guess it would be more accurate to call this "functional free will".)

 

Practical

How computational free will applies to sophisticated agents (like humans) is very much an open question. We currently do not know how to build such agents; we do not even understand them at all.

Humans can compute over computation. We can generate new algorithms on the fly and implement them as needed. We can "change" our decision algorithm. Even without deliberate effort, the algorithm that we implement changes over time (my decision-making procedures at 24 are quite different from those at 16).

And perhaps there is an important fundamental sense in which the decision algorithm of a human is different from that of a light switch that captures intuitive notions of "free will".


Closing Remarks

Summary

You cannot know the output of an arbitrary function on an arbitrary input without evaluating the function.

Seems trivial after writing it out. It doesn't feel like a profound insight. Like what else would you have expected? This is true by definition[2].

If you had an algorithm A that gave you the output of a function f for all inputs, then A is "computing" f.

This result is immediately apparent, and I didn't need to go through all the tangents about self-reference, halting problems and Rice's theorem (in the previous post) to arrive at it. I could have just gotten it straight away from thinking about decision functions.

The obviousness of this makes me sceptical that I actually did anything useful in this post.

 

But maybe this is just what deconfusing yourself looks like. As John Archibald Wheeler said:

Behind it all is surely an idea so simple, so beautiful, that when we grasp it - in a decade, a century, or a millennium - we will all say to each other, how could it have been otherwise? How could we have been so stupid?

(It feels a bit like that [though maybe I won't phrase it so grandiosely, and I still have substantial uncertainty[13]].)

 

Tentative Conclusions

My decisions correspond to a function, and the output of that function on arbitrary inputs cannot be known without computing said function (and I can "choose" what those outputs are). That is my "free will".

This free will is in no way inferior to the "free will" that "dual minds endowed with libertarian free will" are imagined to have.

 

Alternative Conclusions

Another interpretation of this post that some may prefer is:

Humans have free will if and only if a light switch has free will.

I think this lends itself to the "free will is an illusion" thing (if that's how you swing, I mean).

 

  1. ^

    It should be noted that this doesn't just apply to "arbitrary functions" (i.e. the class of all functions); it applies to every specific function. For any function f, to determine the output of f on every potential input x, you must compute/evaluate f. Any algorithm A that gave the output of f for all its inputs x is computing f.

    This is true by the definition of functional equality. See [2]:

  2. ^

    Consider two functions f, g : X → Y such that ∀x ∈ X : f(x) = g(x).

    Then f = g.

  3. ^

    Here, the choice should be understood as its choice of decision strategy, not a particular action.

    A strategy is a probability distribution over decision policies.

    A decision policy is a mapping from sensory data (percept sequence, observation history) to outcomes.

    A pure strategy is one that assigns all its probability mass to a particular policy. A mixed strategy is one that does not assign any policy a probability of 1.

    So, e.g., the mind can pick a different action on each rerun because it chose a mixed strategy.

    In the previous post, I described decision algorithms as choosing over actions directly, but this was merely a simplification to ease the analysis. The undecidability results apply to the output of an algorithm, so they'll hold for decision algorithms regardless of whether they were selecting over actions, policies, strategies or whatever.

  4. ^

    In particular, a given physical process does not appear to "uniquely characterise" a particular computational process.

    Nor does a particular computational process "uniquely characterise" a given physical process.

  5. ^

    One could go even further and interpret computational free will as a dualist implementation of free will, where the ontologically basic dual substance is not primitive mental structures, but primitive computational/mathematical structures. Computations must still be implemented by physics, but in a sense, they exist independently of physics.

    There is no unique computational process that a given physical system implements, nor a unique physical implementation of a given computational process.

    (This "irreducibility" of computation to physics is the part of this post I am most uncertain about. I do not understand how a physical process maps to a particular algorithm [it seems to be a choice of interpretation, but for conscious agents, who's the interpreter?] nor what the semantic content of a given algorithm is.

    My official position on it is something like: "I don't know, but it does not look to me like a particular computation is uniquely reducible to a particular physical process".

    I am not sure it actually matters though. I don't imagine computational free will as somehow violating physical determinism.

    Even under the strongest forms of "computational irreducibility to physics", the laws of physics still apply unaltered. The computations those laws are currently implementing are just un(der)determined.)

  6. ^

    I have stated above that the choice of arbitrary decision algorithms on arbitrary input is in principle "unknowable". A possible caveat may be that even if algorithmic output is a priori (before computation) unknowable, there's still a "fact" as to what an algorithm's output will be.

    Similarly, whether a given algorithm halts may be undecidable, but there's still a fact as to whether it halts (we just can't "know" what that fact is).

    I don't think the existence of this "fact" contradicts the notion of "free will". Even in dualist theories of libertarian "free will", there's still a fact as to what choice an agent endowed with libertarian free will makes (that fact is just unknowable ahead of time).

    In both cases, you can construct a decision function that describes the decision making of an algorithm or a DELF.

  7. ^

    In general, I think all agents — even the simplest reflex agents — should be considered as algorithms and not as functions. Even if the agent's program is really just a lookup table, that is still an algorithm.

    The agent's decision function just describes the agent's decision making, it is not actually the process by which the agent makes decisions. The distinction between a decision function and a decision algorithm manifests no matter how sophisticated the agent is.

  8. ^

    For those who find this a hard pill to swallow, consider that we routinely implement the same algorithm on different physical substrates. We've run arithmetic algorithms on everything from mechanical computers, to vacuum tubes, to electrical and electronic computers.

    An algorithm is separate from a physical system that implements it.

  9. ^

    Note that this doesn't necessitate a full simulation of "me". "I" am an algorithm, not a function, and it is not necessarily the case that all algorithms that compute the decision function I implement are "me". 

    (Specifically, "self-identity" is not necessarily preserved across extensional equivalence [more specifically, I doubt that consciousness is preserved across extensional equivalence (as a reductio, you can always construct a giant lookup table to implement any finite function [LW · GW])]).

    However, to predict my decision for a particular problem, Omega only needs to compute my decision function for that particular problem, not my decision function for all problems/life in general. So, I wouldn't expect predictions of my decisions on narrow problems to evaluate my entire decision function [it would be more efficient to compute a restriction of my decision function to said narrow problems], let alone the exact decision algorithm that is "me".

    I.e. I don't expect to be fully simulated for narrow prediction.

  10. ^

    The sense in which I "could" choose the output of my decision algorithm was investigated in "Initial Thoughts on Dissolving Couldness".

  11. ^

    I think that once you take a perspective of "agents as algorithms", logical decision theories fall out naturally (if you believe that you should choose in a manner that counterfactually leads to the best consequences).

    It becomes "obvious" that by making a choice (controlling your decision function), you control the predictions of agents that compute that decision function. The prediction of your choice is truly dependent on what your actual choice is (for it's a computation of your decision function on the particular input under consideration).

    Causal decision theories seem to have an impoverished notion of dependence given that perspective. For they assume the only kind of dependence that exists is physical dependence. That seems grossly insufficient as a decision theory for algorithms.

  12. ^

    If you suppose an oracle that can evaluate arbitrary functions without computing them, then you can similarly suppose another oracle that can determine ahead of time what dualist agents endowed with libertarian free will may do.

  13. ^

    Stuff I'm very unsure of:

    • Qualitative differences between the decision algorithms of sophisticated agents vs that of a light switch
      • There seems to be an abundance of meaningful semantic differences between a human's decision algorithm and a light switch's.
      • The existence of such differences may point at a more meaningful notion of "free will" beyond that which I specified here.
    • Reducibility of computation to physics
      • How does an algorithm map to a given physical process and vice versa?
      • Is there a way for either (or both) of these mappings to be done uniquely?
    • Reducibility of "knowledge"/mental phenomena to computation
      • What is the semantic content embedded in a given computation?
      • Is there a way to uniquely evaluate the semantic content of a given computation (without reference to some outside observer)?
      • How does consciousness (subjective experience) arise from computation?
      • Is qualia/mental state uniquely characterised by a particular computation?

6 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2022-09-22T22:12:30.783Z · LW(p) · GW(p)

Suppose there is a bounded version of your algorithm where you don't have much time to think. If you are thinking for too long, the algorithm can no longer channel your thinking, and so you lose influence over its conclusions. A better algorithm has a higher time bound on the thinking loop, but that's a different algorithm! And the low-time-bound algorithm might be the only implementation of you present in the physical world, yet it's not the algorithm you want to follow.

So it's useful to see the sense in which libertarian free will has it right. You are not the algorithm [LW(p) · GW(p)]. If your algorithm behaves differently from how you behave, then so much worse for the algorithm. Except you are no longer in control of it in that case, so it might be in your interest to restrict your behavior to what your algorithm can do, or else you risk losing influence over the physical world. But if you can build a different algorithm that is better at channeling your preferred behavior than your current algorithm, that's an improvement.

Replies from: shminux
comment by Shmi (shminux) · 2022-09-23T00:21:27.449Z · LW(p) · GW(p)

I never understood that point, "you are not the algorithm". If you include the self-modification part in the algorithm itself, wouldn't you "be the algorithm"?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-09-23T00:32:17.747Z · LW(p) · GW(p)

It's not meaningfully self-modification if you are building a new separate algorithm in the environment.

Replies from: shminux
comment by Shmi (shminux) · 2022-09-23T00:48:29.974Z · LW(p) · GW(p)

Hmm. So, suppose there are several parts to this process. Main "algorithm", analyzer of the main algorithm's performance, and an algorithm modifier that "builds a new separate algorithm in the environment". All 3 are parts of the same agent, and so can be just called the agent's algorithm, no?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-09-23T01:16:01.709Z · LW(p) · GW(p)

A known algorithm is a known finite syntactic thing, while an agent doesn't normally know its behavior in this form, if it doesn't tie its own identity to an existing algorithm. And that option doesn't seem particularly motivated, as illustrated by it being desirable to build yourself a new algorithm.

Of course, if you just take the whole environment where the agent is embedded (with the environment being finite in nonblank data) and call that "the algorithm", then any outcome in that environment is determined by that algorithm, and that somewhat robs notions disagreeing with that algorithm of motivation (though not really). But in more realistic situations there is unbounded unknown data in environment, so no algorithm fully describes its development, a choice of algorithm/data separation is a matter of framing.

In particular, an agent whose identity is not its initial algorithm can have preference found in environment, whose data is not part of the initial algorithm at all, can't be inferred from it, can only be discovered by looking at the environment, perhaps only ever partially discovered. Most decision theory setups can't understand that initial algorithm as an agent, since it's usually an assumption of a decision algorithm that it knows what it optimizes for.

comment by TAG · 2022-09-24T16:16:11.354Z · LW(p) · GW(p)

In dualist implementations of libertarian free will, a mind can choose any of a given set of actions. There is no knowledge as to what choice the mind will make ahead of time. Before the mind makes its choice, its choice is undetermined.

Isomorphically, an algorithm can choose any of a given set of actions. There is no knowledge as to what choice the algorithm will make ahead of time. Before the algorithm (or an extensional equivalence thereof) is computed, its choice is undetermined

No, it's just unknown. The algorithm could be computed by a predictor, and if it always produces the same output, then it's deterministic. Determinism is the possibility of prediction. If one particular agent can't make a prediction in practice, that doesn't mean determinism has vanished.