The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument

post by Štěpán Los (stepan-los) · 2023-12-17T19:11:31.953Z · LW · GW · 9 comments

Contents

  1 Introduction
    1.1 Defining Qualia 
    1.2 How do qualia depend on physical properties?
    1.3 Objections to POI and Artificial Consciousness
  2. Fading Qualia
  3. Critique of Fading Qualia 
    3.1 P2 is question-begging
    3.2 Biology Argument 
    3.3 Objections 
  4. Another Upshot: Artificial Consciousness
  5. Final Upshot: Ethical Implications
  6. Conclusion 
    References 

(This essay was written as part of my university philosophy course. I added Section 5 to the present version.)

1 Introduction

In Fading Qualia, Absent Qualia and Dancing Qualia (1995), Chalmers argues explicitly for the following central theses:


CT1: The Principle of Organisational Invariance is the correct view of how qualia depend on physical properties.

 

CT2: Absent qualia are empirically impossible.

 

And implicitly commits himself to the third thesis:


CT3: Artificial Consciousness is empirically possible. 


He argues in favour of CT2 through a reductio ad absurdum (the Fading Qualia argument) and then establishes CT1 as an Inference to the Best Explanation of the reductio. Finally, CT3 follows as a consequence of CT2.

After presenting Chalmers’ reasoning I will argue for two main theses: 


M1: The Fading Qualia argument is wrong.


M2: We should lower our credence in the empirical possibility of Artificial Consciousness. 


I will argue for M1 based on the observation that Chalmers’ argument is question-begging: it assumes its own conclusions in order to get the reductio off the ground. I use this observation to develop the Biology Argument, which argues directly against the possibility of replicating consciousness in silicon, contesting CT1 and CT2. After refuting objections to the observation and the Biology Argument, I argue that M2 is a consequence of my arguments for M1. Since Chalmers’ arguments are among the very few in the literature that explicitly argue in favour of Artificial Consciousness, this paper attempts the crucial task of evaluating their success. 

 

1.1 Defining Qualia 

‘Qualia’ denote subjective experiences like seeing the colour red or smelling cologne. In Thomas Nagel’s words (1974), there is something it is like to have these experiences: they have a concrete phenomenal character such that seeing the colour red is subjectively very different from seeing the colour green. The word ‘quale’ (treated as synonymous with consciousness here) then denotes this subjective character of the experience. 

Notably, an extensive philosophical debate surrounds the question of what qualia are. For instance, some philosophers take qualia to be properties of our inner mental representations of objects (Lewis, 1929); others take the word to denote non-physical and infallible properties of an experience that are somehow “given” to the subject (Dennett, 1991). Further still, some argue that our experiences only seem to have phenomenal properties as a result of our introspection, making qualia a mere illusion (Frankish, 2016). 

Without delving into this debate, it suffices to say that Chalmers’ arguments go through under the definition of subjective experiences given in the first paragraph (call this the Minimal Qualia Assumption). 

 

1.2 How do qualia depend on physical properties?

This section’s title is the main question of Chalmers’ paper. Note that it requires the assumption that consciousness is material (the Physicalism Assumption), a point some scholars dispute (see Goff, 2007; Lowe, 2006). Chalmers (1995) answers the question with the Principle of Organisational Invariance (POI):


POI: “Given a system with conscious experiences, any system with the same functional organisation at a fine enough grain will have qualitatively identical conscious experiences.”


By functional organisation, Chalmers means the dependence relations and causal interactions between the system’s components (e.g. the particular network connections between neurons in a brain). Furthermore, ‘fine enough grain’ means something like ‘sufficiently informative level of analysis’, indicating that Chalmers will focus on analysing consciousness at the neuronal level. 

A key aspect of POI is the multiple realisability of consciousness: in the same way that, by playing the right notes, I perform Bach’s Goldberg Variations irrespective of my specific instrument, consciousness can be realised in various physical systems (brains, silicon chips, alien organisms…) if the right functional organisation is replicated. However, this assumption is contested by various theories which tie qualia to specific physical properties; I will now evaluate these. 

Firstly, some theorists argue that qualia are inherently tied to biochemical properties. For instance, mind-brain type identity theorists claim that qualia are identical to brain processes (Polger, 2011) in the same way that ocean waves are identical to movements of water particles. A related but weaker proposal consists in the scientific search for specific patterns of brain activity tied to consciousness, labelled Neural Correlates of Consciousness (NCCs) and defined as the “minimal and sufficient neural system N whose activation leads to a conscious percept” (Chalmers, 2000, p. 31). Secondly, others argue that qualia arise out of quantum-mechanical properties. Hameroff & Penrose (2014) suggest that rather than being a result of synaptic interactions, consciousness is produced by synchronised quantum computations in neuronal microtubules, i.e. supporting structures within the neuronal cell. 

All in all, Chalmers sets out to defend POI against the theories outlined above. Therefore, we can formulate his first main thesis: 


CT1: POI is the correct view of how qualia depend on physical properties.


In Section 1.3, I will describe objections to CT1, how Chalmers aims to respond to them, and how the success of CT1 bears on the possibility of Artificial Consciousness.

 

1.3 Objections to POI and Artificial Consciousness

Opponents of POI often invoke ‘Absent Qualia’ counter-examples with the following steps: 

Step 1: Describe a functional organisation that is taken as sufficient to produce consciousness.

Step 2: Construct a counter-intuitive example which fits the description.

Step 3: Conclude that since the example is unlikely to be conscious, there must be something else involved in creating consciousness other than mere functional organisation. 

Step 2, which carries the most argumentative weight, usually involves realising the human brain’s functional organisation in very counter-intuitive ways: a network of strings mimicking neural activity, Block’s (1978) China brain thought experiment, or Searle’s (1980) water pipe network. Since it seems absurd to contend that these would be conscious, there must be more to consciousness than mere functional organisation.

Here, the main mechanics of Chalmers’ argument come into play: he takes Absent Qualia to be the main objection against CT1. Thus, if he can prove that Absent Qualia are wrong, he can infer as the best explanation that CT1 is correct. Hence, Chalmers’ second main thesis is to argue against the empirical possibility of Absent Qualia through his Fading Qualia argument (see Section 2):


CT2: Absent qualia are empirically impossible.


Furthermore, note that Absent Qualia cases have a significant bearing on another debate: the possibility of Artificial Consciousness (AC), i.e. whether complex artificial systems such as Large Language Models (LLMs), Reinforcement Learning (RL) agents or others can instantiate qualia qualitatively identical to those of humans or non-human animals (Seth, 2009). If we believe that Absent Qualia are empirically possible, then it is also possible that AIs fall into the set of functionally equivalent yet qualia-lacking systems. On the other hand, if one believes that Absent Qualia are empirically impossible, then there seems to be little reason to suppose that sufficiently functionally equivalent AIs could not be conscious. Hence, one’s credence in the possibility of AC should vary inversely with one’s credence in the possibility of Absent Qualia. Therefore, as a consequence of CT2, Chalmers commits to a third central thesis:


CT3: Artificial Consciousness is empirically possible. 


Interestingly, some authors even claim that Chalmers’ arguments are the only arguments in the philosophical literature which explicitly argue for the possibility of AC (Long, 2022). Thus, it is important to see how successful they are.
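Before moving on, the inverse credence relationship described above can be made concrete with a toy calculation. This is a minimal sketch of my own, assuming made-up placeholder probabilities rather than estimates from the literature:

```python
# Toy illustration (not from Chalmers or Seth): credence in Artificial
# Consciousness (AC) via the law of total probability over whether Absent
# Qualia (AQ) are empirically possible. All numbers are placeholders.

def credence_in_ac(p_aq_possible: float,
                   p_ac_if_aq_possible: float = 0.3,
                   p_ac_if_aq_impossible: float = 0.95) -> float:
    """P(AC) = P(AC|AQ)P(AQ) + P(AC|not-AQ)P(not-AQ)."""
    return (p_ac_if_aq_possible * p_aq_possible
            + p_ac_if_aq_impossible * (1 - p_aq_possible))

for p_aq in (0.1, 0.5, 0.9):
    print(f"P(AQ possible) = {p_aq:.1f} -> P(AC) = {credence_in_ac(p_aq):.2f}")
# Credence in AC falls as credence in the possibility of Absent Qualia rises.
```

Whatever conditional values one plugs in (so long as consciousness is judged more likely given that Absent Qualia are impossible), the same inverse pattern holds.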

Having summarised the wider debate surrounding Chalmers’ arguments and some preliminary assumptions and argumentative steps he makes, Section 2 will present Fading Qualia in detail. Then, in Section 3, I will start developing my objections to Fading Qualia.


2. Fading Qualia

Fading Qualia is a reductio ad absurdum, i.e. an argumentative mode which refutes a supposition by showing it leads to absurd consequences (Rescher, 2023): 

P1 Suppose Absent Qualia are empirically possible.

C1 There can be a system (Šilicon) with the same functional organisation as a conscious system (Štěpán) which lacks conscious properties (due to differences in non-organisational properties such as substrate).

P2 Suppose we gradually replace Štěpán’s neurons with Šilicon’s silicon chips, preserving the organisational structure.

P3 On the final step of the replacement process, Šilicon, by hypothesis, has no experiences, while Štěpán did at the beginning.

C2 This means that at some point during the replacement process, qualia either gradually faded (Fading Qualia) or suddenly disappeared (Suddenly Disappearing Qualia).

P4 Both options are implausible.

C3 P1 is incorrect. 

CT1 POI is the correct view of how qualia depend on physical properties. 

 

The Fading Qualia scenario holds that the conscious system’s qualia start gradually fading. Chalmers claims that this cannot be true because it entails too strong a dissociation between cognition and consciousness: it would require that the partially replaced system says all the things that Štěpán would say (because they have the same functional organisation, including outputs) while being wrong about everything it says, since its qualia are, by hypothesis, fading (for instance, it says “I am seeing bright red” when in fact it sees faded pink). Chalmers dismisses this as too implausible. 

The Suddenly Disappearing Qualia scenario holds that the conscious system’s qualia suddenly switch off. Chalmers claims that this is absurd because it suggests a strange discontinuity in the laws of nature: it would mean that we could switch back and forth between a neuron and its silicon replacement with a field of experience blinking in and out. 

Hence, Chalmers concludes that the impossibility of Absent Qualia is best explained by POI being correct, through the indirect reasoning mentioned in Section 1.3. 
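To keep the logical structure explicit, the propositional skeleton of the reductio can also be rendered as a short formal sketch. This is my own schematisation in Lean 4, where `AQ`, `F` and `S` are placeholder propositions for ‘Absent Qualia are possible’, ‘qualia fade’ and ‘qualia suddenly disappear’:

```lean
-- Schematic rendering (mine, not Chalmers'): if supposing Absent Qualia
-- forces either Fading (F) or Suddenly Disappearing (S) Qualia, and both
-- disjuncts are rejected as implausible, the supposition must be rejected.
example (AQ F S : Prop)
    (entails : AQ → F ∨ S) (notF : ¬F) (notS : ¬S) : ¬AQ :=
  fun hAQ => (entails hAQ).elim notF notS
```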

Now, I will move on to my criticisms of Fading Qualia, highlighting first that P2 is question-begging.
 

3. Critique of Fading Qualia 

3.1 P2 is question-begging

It seems that P2 introduces an element of question-begging into Fading Qualia. Namely, Chalmers must assume the following for P2 to hold: neurons and silicon chips are interchangeable systems if they produce the right structure of inputs and outputs. Without this assumption, the replacement scenario would be impossible. However, the assumption just seems to be a variation of POI in which ‘system with conscious experiences’ is replaced by ‘information processing system’ (or whatever one takes to be the umbrella term for neurons and silicon chips) and ‘have identical conscious experiences’ is replaced by ‘produce identical conscious experiences’. In other words, the argument is question-begging because it assumes its own conclusion (POI) at the finer organisational level of neurons and silicon chips. This seems problematic, since Chalmers intends his argument to have persuasive force over rival theories of consciousness. 

Perhaps it could be argued that it is wrong to categorise this observation as ‘question-begging’, since there is no obvious reason to contest the interchangeability of neurons and silicon chips (or artificial neurons). However, Section 3.2 will now present the Biology Argument, which contests their interchangeability and acts as a counter-argument to Fading Qualia. 


3.2 Biology Argument 

Section 3.2.1 proposes the Biology Argument as a counter-argument to Fading Qualia. Then, in Section 3.2.2, I will provide justification for P2 of the Biology Argument. Finally, in Section 3.2.3, I will argue that since the Biology Argument is justified, it provides direct reasons to reject both CT2 and CT1.

 

3.2.1 General Structure of the Biology Argument

P1 Fading Qualia is true iff neurons and silicon chips are interchangeable.

P2 Neurons and silicon chips are not interchangeable.

C1 Fading Qualia is wrong. 

P2 clearly does most of the argumentative heavy lifting and requires the most support. I will use Godfrey-Smith’s (2016) arguments for the connection between metabolism and consciousness to justify P2 in Section 3.2.2. The rest of the argument is a simple modus tollens.
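The modus tollens can be rendered in the same schematic Lean 4 style as the reductio above (again my own sketch; `FQ` and `Interch` are placeholder propositions for ‘Fading Qualia goes through’ and ‘neurons and silicon chips are interchangeable’):

```lean
-- Schematic modus tollens (my rendering): from P1 (FQ ↔ Interch)
-- and P2 (¬Interch), conclude C1 (¬FQ).
example (FQ Interch : Prop)
    (p1 : FQ ↔ Interch) (p2 : ¬Interch) : ¬FQ :=
  fun hFQ => p2 (p1.mp hFQ)
```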

 

3.2.2 Godfrey-Smith’s justification for P2

P2 can be justified with Godfrey-Smith’s claim that biological cells responsible for consciousness possess metabolic properties differentiating them from other processing units such as silicon chips. Metabolism is defined by Godfrey-Smith (2016) as “a system’s maintenance of organisation in the face of thermodynamic tendencies towards disorder and decay […] through chemical reactions.” This is the full-fledged justification (call it the Metabolism Justification): 


P1 The right structure of inputs and outputs for consciousness depends on fine-grained metabolic processes.

P2 Silicon chips cannot support these fine-grained metabolic processes.

C1 Neurons and silicon chips are not interchangeable.


P1 is supported by Godfrey-Smith’s main example of the nitric oxide molecule, which is crucial both for (i) consciousness-producing processes such as plasticity at neural synapses, i.e. the changes in the brain’s structure in response to stimuli, and for (ii) metabolism, through its role in the proper functioning of blood vessels and glial cells. Another supporting example is neuronal plasticity itself; however, for lack of space, it cannot be described here.

P2 highlights the special status of metabolic processes. They happen at a particularly small scale (nanometres) and in a particularly complex context (immersed in water with hundreds of molecules involved), resulting in trillions of molecular interactions every second. To fight this thermodynamically chaotic setting and maximise the preservation of energy, many fine-grained processes evolved: micro-computational activities within cells, signal-like interactions between cells, and self-maintenance and control of boundaries. Hence, consciousness, as it evolved from these metabolic and proto-cognitive processes, can be conceptualised as a ‘binding force’ integrating them all together. In contrast, Godfrey-Smith argues that since computers sit in a more orderly setting and are less energy-constrained, it is unlikely that the ‘classical candidates’ for replication, such as silicon, could actually model these processes. Hence, they lack the necessary properties that play the right role in producing consciousness (C1). A similar conclusion is reached by Thagard (2022), who argues that the significantly different energy requirements faced by biological and silicon systems are largely overlooked by philosophical arguments such as Fading Qualia.

If the Metabolism Justification is true, then it seems that the Biology Argument is also true. Let us evaluate what this means for Fading Qualia.

 

3.2.3 Upshot of the Biology Argument

The Biology Argument’s conclusion alone does not directly prove that POI is wrong, since Godfrey-Smith’s point is precisely that neurons and silicon chips cannot be functional equivalents, due to the differences explained above. This means that POI could still be valid for appropriate substrates.

However, the Biology Argument does successfully show that Fading Qualia rests on an unwarranted assumption, meaning it does not go through. Hence, through indirect reasoning similar to Chalmers’, we can assert that CT2 is wrong: since a purported objection against Absent Qualia does not follow, it seems that Absent Qualia are empirically possible. 

Furthermore, I think the Biology Argument proves that CT1 is wrong through appealing to POI’s triviality. POI can be argued to be trivial for the following reason: it holds that systems with the same functional organisation will have identical conscious experiences; however, based on the above, the sufficient functional organisation can most likely only be achieved by biological brains. Therefore, POI amounts to saying something like: ‘two creatures with identically structured brains (in an identical state) will have identical conscious experiences’, which is trivial if paired with the Physicalism Assumption[1]. This charge of triviality means that POI loses much of its explanatory power when compared to its rival theories — mind-brain identity theorists or NCC theorists at least propose concrete laws guiding the interactions of qualia and physical processes (e.g. tying them to concrete brain areas). As such, it seems that CT1 is wrong: if the Biology Argument is correct, then there are more accurate views of how qualia depend on physical properties than POI. 

Before this can be concluded with confidence, it is necessary to consider objections to the Biology Argument. I consider three: Section 3.3.1 questions the strength of the Metabolism Justification; Section 3.3.2 points to purported instances of neural replacement that challenge the Biology Argument’s P2; finally, Section 3.3.3 proposes a counter-example to my arguments. Refuting all three, I conclude with M1. 

 

3.3 Objections 

3.3.1 Metabolism Justification is Weakly Justified

The first objection may be that the Metabolism Justification is insufficiently supported. More specifically, why should one believe that activities of nitric oxide or thermodynamic chaos are really unreplicable by a strong enough computer? Without defending this crux, the argument seems very weak. 

I will now defend the Metabolism Justification against this charge. Let us assume for the sake of argument that silicon chips are functionally equivalent to biological cells. As a result, the following must be true, since the equivalence posited by POI is symmetrical (the identity relation must go both ways): a human should be able to execute all of their metabolic functions, behaviour, cognition etc. using only Python-like commands (or any other programming language). In other words, if a computer can support human-like consciousness, then a human must be able to support computer-like consciousness. However, it seems unclear to me how one could even begin to conceptualise this — for instance, how could a person experience pain as a line of Python code? It simply seems that biology is not fit to implement this sort of functioning, in the same way that a computer will not be able to adopt metabolic functioning, supporting the irreplicability of metabolism in silicon.

Of course, this analogy has its limitations: it may be argued that there is a mismatch in levels of analysis between the cases, whereby the chemistry orchestrating the body is a ‘lower-level’ phenomenon than code is for computers. Still, the analogy seems to support the more general principle that a system’s functioning is strongly tied to the purposes for which, and the pressures under which, that functioning exists. Thus, since I believe Godfrey-Smith convincingly demonstrated that cells and silicon chips are subject to very different requirements and pressures, his argument retains its force. 


3.3.2 Attack on the Biology Argument’s P2 

Another objection challenges P2 with purported instances of neural activity replicated in silicon, examples including prosthetic limbs or bioelectrical therapeutic devices. Since e.g. prosthetic arms seem to function in a very similar way to regular arms, this could be taken as evidence that biological neurons are in fact replaceable with silicon ones. 

Even though this response seems initially strong, holding that biological and artificial neurons are interchangeable and thereby functionally equivalent actually defeats POI. This is because it does not seem that prosthetic devices instantiate any qualia — for instance, hitting a prosthetic arm does not cause pain in the prosthetic arm but rather residual limb pain and potentially phantom pain in the biological limb (Morgan et al., 2017). And while phantom pain is still understudied and complex, it is by and large a case of residual signals in the biological arm rather than pain in the artificial arm (Browne et al., 2022; Garcia-Pallero et al., 2022). This would seem to show that there is more to qualia than mere functionality. Alternatively, the defender may hold that in these cases, biological and artificial body parts are functionally equivalent on a coarse-grained level, since they clearly contribute to behaviour in similar ways, demonstrating their interchangeability. However, this merely sends us back to the Biology Argument’s familiar reasoning, whereby fine-grained processes matter to consciousness, meaning the defender must come up with a new argument as to how these could be replicated in silicon, or with an argument that contests the role of metabolism in constituting consciousness. Hence, overall, it does not seem that this reply works either.  


3.3.3 Digital Simulations

A final objection invokes digital simulations of the mind as counter-examples (Bostrom & Shulman, 2022). These can be illustrated by imagining how one can simulate the flight of a bird. Perhaps one could build a synthetic bird — simply replace all organic features with artificial ones and, by the end, one should have a flying artificial replica of a bird. Alternatively, one could attempt to build a digital flight simulator — combine all mathematical descriptions of wing dynamics, airflow, trajectory etc. into a single algorithm and, once run, achieve a successful replication of bird flight as well. Similarly, it may be said that the same must hold for a perfect mathematical simulation of the brain (say, one that includes a detailed mathematical description of all metabolic processes) — it should count as a successful replication of consciousness, reviving functional organisation (rather than substrate) as the key to consciousness (Bostrom & Shulman, 2022). I think that even if the above is true, it has little bearing on how the Biology Argument and Fading Qualia fare. 

This is because the Digital Simulations argument does not really address any of the present arguments’ premises, which are framed in terms of neurons vs silicon chips. Furthermore, it seems difficult to imagine a replacement scenario whereby one is gradually uploaded into a computer — the replacement intuition pump loses even more appeal[2]. As such, I think digital simulations should be treated as a separate argument in favour of POI rather than a counter-argument to my position.

Having defended the Biology Argument against objections, it can be concluded that:

 

M1: The Fading Qualia argument is wrong.

 

I will now discuss what bearing this has on the plausibility of AC. 

 

4. Another Upshot: Artificial Consciousness

As discussed, Fading Qualia has significant bearing on AC for many reasons: (i) CT1 supports the notion of multiply realisable consciousness through POI, (ii) CT2 attempts to dismantle the main objection against AC (Absent Qualia) and (iii) it is perhaps the only argument in the philosophical literature which explicitly supports AC. Therefore, since the Biology Argument contests the central theses by highlighting the argument’s question-begging and unsupported nature, it seems that all three reasons for supporting AC fall. That should certainly lower our credence in AC’s empirical possibility.

Furthermore, the Biology Argument acts as a standalone argument against AC’s possibility. Firstly, it places the burden of proof on proponents of AC: they need to argue that (i) metabolic processes are replicable in silicon or (ii) metabolism does not contribute to consciousness. Without these arguments, there are strong reasons to believe that consciousness is tied to the biological substrate. Secondly, the biological arguments present consciousness as functionally tied to particular thermodynamic processes in the body (as the “binding force”). Hence, given the disparity in energy requirements and pressures facing living systems and current (silicon-based!) AI systems (see also Thagard, 2022), it seems unlikely that consciousness will emerge in the systems that we are currently building. Granted, LLMs’ neurons or the learning of RL systems display some brain-like or human-like functioning. However, precisely because of the progress that AIs make on many of these metrics (e.g. the progress in LLMs’ language recognition and production capabilities, see Roser, 2022), it seems that these metrics are entirely dissociable from consciousness[3]. In other words, these systems need not develop consciousness in order to improve their performance on such metrics. Therefore, it is unlikely that (i) researchers will actively try to build consciousness into their systems or (ii) it will spontaneously emerge. This further reinforces the point that consciousness in these systems is relatively unlikely.

For all the reasons above, I think we should conclude that:


M2: We should lower our credence in the empirical possibility of Artificial Consciousness. 

 

I frame the conclusion in terms of credence rather than claiming AC is impossible because there are separate arguments that may uphold AC, for instance, the digital simulations of the mind mentioned above. Another argument could be the evolutionary argument (see a variation in Chalmers, 2010), claiming that since evolution managed to produce consciousness, humans could manage to build it too. As such, M2 should be carefully read as contesting the empirical possibility of AC in current and future silicon-based systems or any other substrates which are unable to replicate biological metabolic processes.

Furthermore, throughout the paper it has been assumed that AC involves an instantiation of qualitatively identical qualia to humans or non-human animals. Of course, it is plausible that future AIs may possess consciousness with entirely different qualitative properties. However, due to the speculative and underexplored nature of this consideration, I am hesitant to contend that this should increase our credence in AC.

Finally, note that for the reasons described above, M2 is largely independent of the question of whether advanced AI systems could pose significant risks to humanity. Consciousness is sometimes taken to be synonymous with ‘intelligence’ or ‘awareness of self’, which for some might be key ingredients of a catastrophic outcome from AI — for instance, a superintelligent AI with self-serving goals. However, as noted in Section 1.1, consciousness here has a different meaning, namely the phenomenal character of subjective experiences. Since it was argued that many coarse-grained functions encompassed by the term ‘intelligence’ can be executed without the presence of qualia, arguments for or against risks from advanced AI are largely separate from arguments about digital sentience. 
 

5. Final Upshot: Ethical Implications

M2 has a significant bearing on the ethical debate surrounding advanced AI systems and whether they could earn moral status in the future (be that agency or perhaps patienthood). That is, if AIs possess qualia, they may be experiencing subjectively positive feelings such as pleasure and subjectively negative feelings such as pain, affording them interests. This, in turn, may warrant their inclusion in our moral circle, meaning broadly that we should make efforts to take AIs’ interests into account and promote their well-being. Hence, the prospect of AC seems to require the attention of ethics[4]. This is also because the expected value of digitally conscious lives could be very significant (Akova, 2023) — since large numbers of future AIs are likely to be created, and these systems may eventually even self-replicate in enormous numbers, the possibility that all of these beings are actually suffering would constitute a moral catastrophe. This has even led some philosophers to argue that research with the goal of creating AC is morally impermissible (Basl, 2013). Hence, the existence of AC would have significant moral consequences and could alter many of our current practices. 
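To see why the stakes could be large even under low credences, consider a back-of-the-envelope sketch of my own; the figures are arbitrary placeholders, not estimates from Akova (2023):

```python
# Toy expected-value calculation for digitally conscious lives.
# All inputs are hypothetical placeholders chosen only for illustration.

n_future_systems = 1e12      # assumed number of future AI systems
p_sentient = 0.01            # credence that such systems instantiate qualia
welfare_if_suffering = -1.0  # stipulated disvalue per suffering system

expected_disvalue = n_future_systems * p_sentient * welfare_if_suffering
print(f"Expected moral (dis)value: {expected_disvalue:.2e}")  # -1.00e+10
# Even a small credence multiplied by very large numbers yields an enormous
# magnitude, which is the sense in which neglect could be catastrophic.
```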

In contrast to the above, M1 and M2 seem to imply that the expected value of conscious AI lives may not be that large, since AC at least in our current architectures is not very probable. This seems to suggest that a good deal of commercially built AIs will not fall into the moral circle. Instead, the circle will include only systems which are specifically engineered to possess consciousness. This, again, is not to say that this issue is ethically irrelevant since there are other potential pathways to AC already mentioned. However, it seems to make the issue less pressing, since technologies like whole brain emulation (WBE) seem to face a number of unsolved technical issues (Sandberg, 2013). More generally, assuming my arguments above are true or at least plausible, it seems that the estimate of AC lives’ expected value largely hinges on one’s credence in (i) WBE and other “consciousness engineering techniques” and/or (ii) scenarios where a qualitatively different subjective experience (that could still at least roughly correspond to pleasure and pain) spontaneously emerges in AIs. Since both scenarios seem relatively unlikely to obtain as of now, this may lower the moral significance of AC when compared to other ethical issues. That being said, this issue deserves more detailed treatment in a paper of its own.
 

6. Conclusion 

Chalmers’ Fading Qualia advocates three main claims:
 

CT1: The Principle of Organisational Invariance is the correct view of how qualia depend on physical properties.
 

CT2: Absent qualia are empirically impossible.
 

CT3: Artificial Consciousness is empirically possible. 


I have argued that Fading Qualia cannot support these conclusions because it is question-begging: it assumes POI at the level of neurons and silicon chips in its initial premises. The Biology Argument supported the claim that this assumption is unwarranted by highlighting the unique nature of neurons and their metabolic processes, which makes them the primary candidate substrate for consciousness. This means that CT2 does not follow and that CT1 is wrong on grounds of triviality. Finally, having discussed objections to my argument, I have concluded that we should lower our credence in CT3, as current and future silicon-based AI systems are unlikely to develop consciousness due to these functional differences. This has further ethical implications, potentially lowering the likelihood that AIs will earn moral status in the future.

 

References
 

Akova, F., 2023. Artificially sentient beings: Moral, political, and legal issues. New Techno Humanities, 3(1).

Basl, J., 2013. The Ethics of Creating Artificial Consciousness. APA Newsletter on Philosophy and Computers, 13(1).

Block, N., 1978. Troubles with functionalism. Minnesota Studies in the Philosophy of Science, 9.

Bostrom, N. & Shulman, C., 2022. Propositions Concerning Digital Minds and Society. Nick Bostrom, Available at: https://nickbostrom.com/propositions.pdf (Accessed 4 December 2023).

Browne, J. D., Fraiser, R., Cai, Y., Leung, D., Leung, A., & Vaninetti, M., 2022. Unveiling the phantom: What neuroimaging has taught us about phantom limb pain. Brain and behavior, 12(3).

Chalmers, D.J., 1995. Absent qualia, fading qualia, dancing qualia. In T. Metzinger, ed. Conscious Experience. Ferdinand Schöningh, Available at: https://www.consc.net/papers/qualia.html (Accessed 4 December 2023).

Chalmers, D.J., 2000. What is a neural correlate of consciousness? In T. Metzinger, ed. Neural Correlates of Consciousness: Empirical and Conceptual Questions. MIT Press.

Dennett, D.C., 1991. Consciousness Explained, Penguin Books.

Frankish, K., 2016. Illusionism as a Theory of Consciousness. Journal of Consciousness Studies, 23(11–12).

Garcia-Pallero, M. Á., Cardona, D., Rueda-Ruzafa, L., Rodriguez-Arrastia, M., & Roman, P., 2022. Central nervous system stimulation therapies in phantom limb pain: a systematic review of clinical trials. Neural regeneration research, 17(1). 

Godfrey-Smith, P., 2016. Mind, Matter, and Metabolism. Journal of Philosophy, 113(10), Available at: https://petergodfreysmith.com/metazoan.net/Mind_Matter_Metabolism_PGS_2015_preprint.htm#_ftn31 (Accessed 4 December 2023). 

Goff, P., 2007. Panpsychism. In M. Velmans & S. Schneider, eds. Wiley-Blackwell.

Hameroff, S. & Penrose, R., 2014. Consciousness in the universe: A review of the ‘Orch OR’ theory. Physics of Life Reviews, 11(1).

Lewis, C. I., 1929. Mind and the World Order, Charles Scribner’s Sons.

Long, R., 2022. Digital people: biology versus silicon. Experience Machines, Available at: https://experiencemachines.substack.com/p/digital-people-biology-versus-silicon#footnote-anchor-2-66800600 (Accessed 4 December 2023). 

Lowe, E.J., 2006. Non-cartesian substance dualism and the problem of mental causation. Erkenntnis, 65(1).

Morgan, S. J., Friedly, J. L., Amtmann, D., Salem, R., & Hafner, B. J., 2017. Cross-Sectional Assessment of Factors Related to Pain Intensity and Pain Interference in Lower Limb Prosthesis Users. Archives of physical medicine and rehabilitation, 98(1).

Polger, T.W., 2011. Are sensations still brain processes? Philosophical Psychology, 24(1).

Rescher, N., 2023. Reductio ad Absurdum. Internet Encyclopedia of Philosophy, Available at: https://iep.utm.edu/reductio/ (Accessed 4 December 2023).

Roser, M., 2022. The brief history of artificial intelligence: The world has changed fast – what might be next?. Our World in Data, Available at: https://ourworldindata.org/brief-history-of-ai (Accessed 4 December 2023).

Sandberg, A., 2013. Feasibility of Whole Brain Emulation. In V. C. Müller, ed. Springer Berlin Heidelberg, pp. 251–264.

Searle, J., 1980. Minds, brains, and programs. Behavioral and Brain Sciences, 3(3).

Seth, A., 2009. The strength of weak artificial consciousness. International Journal of Machine Consciousness, 1(1).

  1. ^

    I.e. it is trivial if we don’t read this version of POI as telling us that consciousness is not immaterial.

  2. ^

    Although the reader is encouraged to read Chalmers (2010, p. 37-38) where a very rough version of such an upload is presented.

  3. ^

    Unless one believes that these systems are already conscious, which does not seem like a well-supported position; see Chalmers (2023) and Butlin & Long (2023).

  4. ^

    It would be interesting to analyse whether deontological theories make the same prediction (i.e. that sentient AIs belong in the moral circle), since at least many of the original principles are framed in terms like ‘humanity’ or ‘rational will’, which seem to be very anthropomorphised concepts. On the other hand, perhaps it could be argued that these would be possessed by any complex enough conscious being.

9 comments

Comments sorted by top scores.

comment by Seth Herd · 2023-12-18T02:05:45.997Z · LW(p) · GW(p)

The reason few people argue for the possibility of artificial consciousness is that it seems obviously possible.

The brain is built of matter. "Artificial" systems are built of matter. Make the two similar enough, and you're going to get all of the same properties.

There's just no benefit to assuming consciousness doesn't arise from information processing in the brain. If dualism were true, it would explain absolutely nothing. So it's probably false, and artificial systems can be conscious.

Replies from: Dagon, TAG, stepan-los
comment by Dagon · 2023-12-18T18:34:05.301Z · LW(p) · GW(p)

artificial systems can be conscious

This is one question, and I agree that they "can" be.  The other question is whether they "must" be, especially when the mechanisms are not identical to human wetware.  I'm more uncertain here.  

Note that my uncertainty starts with lack of operational/measurable definitions.  I don't know where to draw the line (or how steep the gradient, if it's not a binary feature) between "not sentient" and "sentient" (terms I find a lot more important than "conscious", which gets redefined to things I don't care much about pretty often).   This uncertainty definitely applies to animals, and even some other humans - I give them the benefit of the doubt, but the doubt remains.

comment by TAG · 2023-12-18T12:34:54.358Z · LW(p) · GW(p)

There’s just no benefit to assuming consciousness doesn’t arise from information processing in the brain.

Nor is there an explanation of how it (in the HP sense) does arise. It's still true that...

Make the two similar enough, and you’re going to get all of the same properties.

...but it's a different line of reasoning. Everyone expects a quark-by-quark duplicate to be conscious, but that's not the interesting case of "artificial consciousness".

Replies from: Seth Herd
comment by Seth Herd · 2023-12-18T13:36:38.116Z · LW(p) · GW(p)

It's not, I agree. But then the question becomes not whether it's possible but how. Which IMO is a very worthwhile question.

comment by Štěpán Los (stepan-los) · 2023-12-18T09:04:37.064Z · LW(p) · GW(p)

Just to be clear, I am not arguing in favour of or against dualism. However, it is not true that if dualism were true, it would explain nothing — it is certainly an explanation of consciousness (something like “it arises out of immaterial minds”), just perhaps an unpopular one that suffers from too many problems according to some. Secondly, while I may agree with what you are saying about AC being obvious, this does not really address any part of my argument — many things seemed obvious in the past that turned out to be wrong, so relying on our intuitions rather than arguments does not seem valid. And since there may be reasons that the two cannot turn out to be similar enough (this is the crux of my argument), this may contest your thesis about AC simply being obvious.

comment by hairyfigment · 2023-12-18T01:49:33.127Z · LW(p) · GW(p)

Note that it requires the assumption that consciousness is material

Plainly not, assuming this is the same David J. Chalmers.

Replies from: stepan-los
comment by Štěpán Los (stepan-los) · 2023-12-18T09:00:45.455Z · LW(p) · GW(p)

He makes this assumption in the first paragraph of the paper (i.e. he assumes that consciousness has a physical basis and lawfully depends on this basis).

Replies from: TAG
comment by TAG · 2023-12-18T12:32:10.100Z · LW(p) · GW(p)

That isn't the same thing as "is material".

comment by bagasas · 2024-02-27T23:07:49.478Z · LW(p) · GW(p)

Your argument that P2 is question-begging misses the point. The assumption is not that the replacement neurons can replicate consciousness, it is that they can replicate the behaviour of the biological neurons. This nuance is missed if words such as "function" are used: perhaps function includes consciousness, and then the assumption would be question-begging. But for the reductio to go through, the assumption is that what is replicated is just the behaviour without consciousness. So the artificial neuron will accept input from the biological neurons, do computations, and send output to the biological neurons such that those neurons behave in exactly the same manner as they would have if the output had come from the original neurons that the artificial neuron replaced. You might say, using the Biology Argument, that neurons contain something special such that their output (the nature and timing of chemical and electrical stimuli) cannot be replicated by silicon chips, and therefore the behaviour of downstream neurons will be different, and therefore the behaviour of the subject will be different. But the thought experiment can easily be modified by saying that the artificial neuron contains some alien technology rather than silicon chips. This alien technology reproduces the behaviour of the neurons, but not consciousness. The argument is then that if we could, by any means, separate the behaviour of neurons from consciousness, we would be able to create partial zombies. If you agree that partial zombies are absurd, then the argument goes through that it is impossible to separate the behaviour of neurons from consciousness; or equivalently, if we could find a way to replicate the behaviour of neurons, we would necessarily also replicate the consciousness.