Does Ngo think land is valuable even if it is unsuitable for agriculture, lacks mineral resources, and is nowhere near a city?
Ok, Greenland has mineral resources. But does Ngo think that regulation is preventing them from being extracted?
You can claim that subjective attitudes are still part of reality ontologically, but the point is that they function differently epistemologically. Opinions and beliefs and truths and falsehoods are all made of atoms, but all function differently. There are potentially as many subjective attitudes as there are people, and they are variable across time as well. The arbitrariness and lack of coherence is what causes the problems. Objectivity is worth having in ethics, because a world in which prisoners have done something really wrong, and really deserve their punishment, is better than a world in which prisoners just have desires the majority don't like.
Partly. But you can use Bayes to support induction, which is another problem for Popperians.
If you want your notion of personhood to be objectively founded, and non-arbitrary, then consciousness becomes relevant again. The two are not necessarily different.
You are assuming MW, and assuming a form where consciousness hops around between decoherent branches. The standard argument against quantum immortality applies... we don't experience being very old and having experienced surviving against the odds multiple times. In fact, quantum immortality makes a mockery of the odds... you should have a high subjective probability of being in a low objective probability universe.
If this doesn’t count as an explanation (or at least a concrete hypothesis), what would one look like to you?
Something that inputs a brain state and outputs a quale, i.e. solves the Mary's Room problem. And does it in a principled way, not just a lookup table of known correlations.
subjective experience is just how predictive, self-modeling systems represent internal and external states.
To say that X "is just" Y is to say there is no further explanation.
Promoting your own well-being only would be egoism, while ethics seems to be more similar to altruism
Well, yes. (You don't have to start from a tabula rasa, and then proceed in baby steps, since there is a lot of prior art.)
because of the meaning of the involved term
But "desires" is not how "ethics" is defined in standard dictionaries or philosophy. It's not "the" definition.
This doesn’t reflect the actual methodology, where theories are judged in thought experiments on whether they satisfy our intuitive, pre-theoretical concepts
That's irrelevant. Rival theories still need shared connotation.
They can be low/non-agentic, because current ones are. I'm not seeing the fallacy.
conducive to well-being
That in itself isn't a good definition, because it doesn't distinguish ethics from, e.g., medicine... and it doesn't tell you whose well-being. De facto, people are ethically obliged to do things which go against their well-being, and to refrain from doing some things which promote their own well-being... I can't rob people to pay my medical bills. (People also receive objective punishments, which makes an objective approach to ethics justifiable.)
Though I think the meaning of “ethical” is a bit different, as it doesn’t just take well-being into account but also desires
Whose desires? Why?
The various forms of theories in normative ethics (e.g. the numerous theories of utilitarianism, or Extrapolated Volition) can be viewed as attempts to analyze what terms like “ethics” or “good” mean exactly.
They could also be seen as attempts to find different denotations of a term with shared connotation. Disagreement, as opposed to talking-past, requires some commonality.
(And utilitarianism is a terrible theory of obligation, a standard objection which its rationalist admirers have no novel response to).
Whether morality is an objective property of the universe
That's running together two claims -- the epistemic claim that there are moral truths, and the ontological claim that moral value is a property of the universe (as opposed to being something more like a logical truth).
If morality were objective, it would have to be conceivable that the statement “George’s actions were wrong and he deserves to be punished” would be true even if every human in the world were of the opinion, “George’s actions seem fine to me, perhaps even laudable”.
That's not such a high bar. Our ancestors accepted things like slavery and the subordination of women, which we now deplore -- we think they were universally wrong.
Thus, a subjective morality is strongly preferable to an objective one!
That doesn't follow, since nihilism, error theory, etc. are possible answers... which means you need to positively argue for (some form of) subjectivism, not just argue against objectivism.
That’s because, by definition, it is about what we humans want.
No it isn't.
Would we prefer to be told by some third party what we should do, even if it is directly contrary to our own deeply held sense of morality?
If we are rational, we prefer to believe what is true. So we would defer to an omega if there were moral truths, and if we trusted it to know them. Just as rational people defer to scientists and mathematicians. And, of course, objective morality doesn't have to be based on commandment theory in the first place.
We humans have a lot to be proud of: by thinking it through and arguing amongst ourselves, we have advanced morality hugely
How do you know? Moral subjectivism implies any moral stance is as good as any other... any stance is rendered true (or true-for-the-person) just by believing it. But moral progress is only definable against an objective standard. That's one of the arguments for moral objectivism.
So why are we all so afraid of admitting that, yes, morality is subjective?
You are characterizing objectivists as being emotion driven...but a selection of dry arguments can be found in the literature, which you should have read before writing the OP.
Subjective does not mean arbitrary.
Yes it does. Individual-level subjectivism is the claim that simply having a moral stance makes it correct.
Subjective does not mean that anyone’s opinion is “just as good”. Most humans are in broad agreement on almost all of the basics of morality. After all “people are the same wherever you go”. Most law codes overlap strongly, such that we can readily live in a foreign country with only minor adjustment for local customs. A psychopathic child killer’s opinion is not regarded as “just as good” by most of us, and if we decide morality by a broad consensus — and that, after all, is how we do decide morality — then we arrive at strong communal moral codes.
That's favouring group-level subjectivism over individual subjectivism. But similar problems apply: the group can declare any arbitrary thing to be morally right. You can fix that problem by regarding group-level morality as an evolutionary adaptation, so that well-adapted ethics is sort-of-true and poorly adapted ethics is sort-of-false... but then you are most of the way to objectivism.
Our moral sense is one of a number of systems developed by evolution to do a job:
That doesn't support the claim that morality is subjective, only the claim that it is natural. Some kinds of objectivism are supernaturalistic, but not all.
Human intuition that morality is objective is really the only argument
There are also a bunch of pragmatic arguments, like the need to justify punishments, the need to define moral progress, etc.
A K-selected species would have very different morality from an r-selected species
I agree! But that shows morality isn't universal -- not that it isn't a matter of objective fact. Objective facts can be local. Objective morality can vary with anything except moral stances.
It’s also worth noting that this argument implies that different human races could have different conceptions of morality.
It's clearly the case that different kinds of society -- rich versus poor, nomads versus agriculturalists -- have different kinds of de facto ethics.
Many attempts at establishing an objective morality try to argue from considerations of human well-being. OK, but who decided that human well-being is what is important? We did!
That's a rather minimal amount of subjectivism. Everything downstream of that can be objective, so it's really a compromise position. (Harris's theory, which you seem to have in mind here, fails at being a completely objective theory, whilst succeeding in being a mostly objective theory.)
But, if you want to arrive at an objective morality you now need a scheme for aggregating the well-beings of many creatures onto some objective scale, such that you can read off what you “should” do and how you “should” balance the competing interests of different people.
No, objective morality doesn't have to be universalistic.
Insularity -- being an echo chamber -- is bad for truth seeking, even if it is good for neighbourhoods.
I think this is way too strong. There are only so many hours in a day, and they trade off between
- (A) “try to understand the work / ideas of previous thinkers” and
- (B) “just sit down and try to figure out the right answer”.
It’s nuts to assert that the “correct” tradeoff is to do (A) until there is absolutely no (A) left to possibly do, and only then do you earn the right to start in on (B).
You need to remember that (C) "do nothing, don't have an opinion" is an option. Nobody is forcing you to think about these things. If you don't have enough time for due diligence, then you don't have to fall back on (B) as the only alternative.
That weaker statement is certainly true in some cases. And the opposite is true in other cases.
I can't think of a single example of too much A.
And nothing went wrong anywhere in this process
Ahem!
Even if they are using the term to denote two different things, they can agree on connotation. Meaning isn't exhausted by denotation (reference, extension). Semantic differences are ubiquitous, but so are differences in background assumptions, in presumed ontology.
I just want to be in touch with the ground reality, and I believe that there has to be a set of algorithms we are running, some of which the conscious mind controls and some the autonomous nervous system, it can't be purely random else we wouldn't be functional, there has to be some sort of error correction happening as well
"Algorithms" and "purely random" are nowhere near the only options.
>If I ask you to do 2+2, a 100 times, you would always respond 4,
What if you ask me for a random number?
>We don't have perfect predictability in psychology because we don't understand it yet
You also need physical determinism to be true. But determinism isn't a fact.
Some people think that an information ontology must be some sort of idealist ontology, because they think of information as a mental thing. But you can ponens/tollens that: inasmuch as physics can deal with information, it's not something that exists only in minds.
Then the phenomenon could stem from punctuation habits, as @bfinn says. Did you notice that my original comment doesn't contain a sentence, by your standards?
What is a sentence anyway… is there something special about a period, as opposed to other punctuation marks? Many are available: the colon is a possibility; also its half-brother; and the comma, of course... also the ellipsis—even the mighty m-dash!
The idea that grammar is just inflection is misleading: languages that are mostly isolating can have complex ordering rules, like the notorious adjective ordering of English.
As for French... Moi, je ne me défile pas. ("Me, I'm not backing down.")
1st person sing.
1st person sing., again.
Negative.
1st person sing., reflexive.
Verb!!!
Negative, again.
I have heard of versions of many-worlds that are supposed to be testable
There are versions that are falsified, for all practical purposes, because they fail to predict broadly classical observations -- sharp-valued real numbers, without pesky complex numbers or superpositions. I mean mainly the original Everett theory of 1957. There have been various attempts to patch the problems -- preferred basis, decoherence, anthropics, etc. -- so there are various non-falsified theories.
The one that I’m most familiar with (“classic many-worlds”?) is much more of a pure interpretation, though: in that version, there is no collapse and the apparent collapse is a matter of perspective. A component of the wavefunction that I perceive as me sees the electron in the spin-down state, but in the big superposition, there’s another component like me but seeing the spin-up state. I can’t communicate with the other me (or “mes,” plural) because we’re just components of a big vector—we don’t interact.
Merely saying that everything is a component of a big vector doesn't show that observers don't go into superposition with themselves, because the same description applies to anything which is in superposition... it's a very broad claim.
What you call classic MWI is what I call the have-your-cake-and-eat-it version... assuming nothing except that collapse doesn't occur, you conclude that observers make classical observations for no particular reason... you don't even nominate decoherence or preferred basis as the mechanism that gets rid of the unwanted stuff.
On the other hand, classic decoherence posits that the wavefunction really does collapse, just not to 100% pure states. Although there’s technically a superposition of electrons and a superposition of mes, it’s heavily dominated by one component. Thus, the two interpretations, classic many-worlds and classic decoherence, are different interpretations.
OK. I would call that single-world decoherence. Many-worlders appeal to decoherence as well.
So classic decoherence is more falsifiable than classic many-worlds.
If classic MW means Everett's RSI, it's already false.
I've already told you why I'm not going to believe ChatGPT. Judge for yourself: https://www.researchgate.net/profile/Bruno-Marchal-3.
Why? I was there, it wasn't.
Bruno Marchal was talking about this stuff in the nineties.
alignment is structurally impossible under competitive pressure
Alignment contrasts with control, as a means to AI safety.
Alignment roughly means the AI has goals or values similar to human ones (which are assumed, without much evidence, to be similar across humans), so that it will do what we want, because it's what it wants.
Control means that it doesn't matter what the AI wants, if it wants anything.
In short, there is plenty of competitive pressure towards control, because no one wants an AI they can't control. Control is part of capability.
MWI is more than one theory.
There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible opposites, but are treated as interchangeable in Yudkowsky's writings. Decoherent branches are large, stable, non-interacting and irreversible... everything that would be intuitively expected of a "world". But there is no empirical evidence for them (in the plural), nor are they obviously supported by the core mathematics of quantum mechanics, the Schrödinger equation. Coherent superpositions are small-scale, down to single particles, observer-dependent, reversible, and continue to interact (strictly speaking, interfere) after "splitting". The last point is particularly problematic, because if large-scale coherent superpositions existed, they would create naked-eye evidence at macroscopic scale, e.g. ghostly traces of a world where the Nazis won.
We have evidence of small-scale coherent superposition, since a number of observed quantum effects depend on it, and we have evidence of decoherence, since complex superpositions are difficult to maintain. What we don't have evidence of is decoherence into multiple branches. From the theoretical perspective, decoherence is a complex, entropy-like process which occurs when a complex system interacts with its environment. Decoherence isn't simple. But without decoherence, MW doesn't match observation. So there is no theory of MW that is both simple and empirically adequate, contra Yudkowsky and Deutsch.
Decoherence says that regions of large complex superpositions stop interfering with each other
It says that the "off-diagonal" terms vanish, but that would tend to generate a single predominant outcome (except, perhaps, where the environment is highly symmetrical).
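For reference, here is the standard textbook density-matrix picture of that claim (generic notation, not specific to this thread). Decoherence drives the off-diagonal (interference) terms of a superposition $a|0\rangle + b|1\rangle$ toward zero:

$$\rho = \begin{pmatrix} |a|^2 & ab^* \\ a^*b & |b|^2 \end{pmatrix} \xrightarrow{\text{decoherence}} \begin{pmatrix} |a|^2 & 0 \\ 0 & |b|^2 \end{pmatrix}$$

What remains is a classical probability mixture; and if $|a|^2 \gg |b|^2$, a single outcome predominates, which is the point above.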
Physicalism doesn't solve the hard problem, because there is no reason a physical process should feel like anything from the inside.
Computationalism doesn't solve the hard problem, because there is no reason running an algorithm should feel like anything from the inside.
Formalism doesn't solve the hard problem, because there is no reason an undecidable proposition should feel like anything from the inside.
Of course, you are not trying to explain qualia as such, you are giving an illusionist style account. But I still don't see how you are predicting belief in qualia.
And among these fictions, none is more persistent than the one we call qualia.
What's useful about them? If you are going to predict (the belief in) qualia on the basis of usefulness, you need to state the usefulness. It's useful to know there is a sabretooth tiger bearing down on you, but why is an appearance more useful than a belief... and what's the use of a belief-in-appearance?
This suggests an unsettling, unprovable truth: the brain does not synthesize qualia in any objective sense but merely commits to the belief in their existence as a regulatory necessity.
What necessity?
ETA:
A self-referential, self-regulating system that is formally incomplete (as all sufficiently complex systems are) will generate internally undecidable propositions. These propositions — like “I am in pain” or “I see red” — are not verifiable within the system, but are functionally indispensable for coherent behavior.
I still see no reason why an undecidable proposition should appear like a quale or a belief in qualia.
That failure gets reified as feeling.
Why?
I understand that you invoke the “Phenomenological Objection,” as I also, of course, “feel” qualia. But under EN, that feeling is not a counterargument — it’s the very evidence that you are part of the system being modeled.
Phenomenal conservatism, the idea that if something seems to exist, you should (defeasibly) assume it does exist, is the basis for belief in qualia. And it can be defeated by a counterargument, but the counterargument needs to be valid as an argument. Saying X's are actually Y's for no particular reason is not valid.
This argument is based on a conflation. It assumes that there’s one single thing, “morality”, and this one thing produces not only answers to “what should you do”, but also, what should we condemn, what should we punish, what should one feel guilty about, and other similar questions; and that, moreover, the answers to these questions are identical (or opposites, as appropriate; here the first one would be the opposite of the others).
Yes: consequentialism, deontology, etc., address different aspects of morality, and even relate to different things... the permissible versus the desirable, etc. Yet they can be reconciled:
One of the areas of debate about ethics is where the locus of ethical concern lies... whether it lies in persons (an approach known as virtue ethics), rules (deontology) or the consequences of actions (consequentialism). (There are also other axes, such as objectivity versus subjectivity and cognitivism versus non-cognitivism.) Consider a case where someone dies in an industrial accident, although all rules were followed: if you think the plant manager should be exonerated because he followed the rules, you are siding with deontology, whereas if you think he should be punished because a death occurred under his supervision, you are siding with consequentialism.
Many people encounter deontology in the form of "ten commandments" style religious law. From a rational perspective, this kind of deontology is unsatisfactory: for one thing, there are multiple competing systems, and it is not clear why any one system should be followed; and it is difficult to adapt traditional deontology to new circumstances. Likewise, virtue ethics suffers from disagreements about what is virtuous... for instance, the strength-and-independence cluster of virtues versus the kindness-and-cooperation cluster.
To those who are looking for a rational basis for ethics (which includes most of those seeking a motivating basis for ethics), consequentialism is more attractive. It offers a basis in the preferences and values people actually have, which are cognitively accessible. Being based on the preferences and values people actually have goes a considerable way to finding motivation to behave ethically (although there is a considerable wrinkle in balancing "my" preferences against "yours"). And it is possible to adapt to changing circumstances in terms of the results we would wish to get out of them. These are the advantages of consequentialism.
On the other hand, ethics as it exists in human societies doesn't have a strong rational basis. Psychological studies show that people's thinking about ethics is instead intuitive, and often centered on rules and virtues which are taken for granted. However, the arguments for consequentialism so far given don't add up to arguments for pure consequentialism. The more sophisticated defences of rational ethics can include aspects of deontology and virtue ethics.
The disadvantages of (most forms of) consequentialism include the fact that consequences are impossible to calculate exactly, in general. Secondly, different individuals, approximating consequentialist decision making differently, would lead to lack of coordination. Thirdly, it is unreasonable to punish people for consequences of intentional actions whose outcomes they could not foresee. Fourthly, it is unreasonable to punish people for unintentional actions.
Rules that are commonly agreed, and which lead to (approximately) desirable outcomes in the consequentialist sense, solve all these problems. Firstly, it is possible to memorise a set of rules. Secondly, if everyone follows the same rules, it is possible to co-ordinate. Thirdly, it is reasonable to punish someone for failing to follow a rule they knew about and knew should be followed. Fourthly, new rules can be formulated in response to changing circumstances, since it is possible to choose rules that lead to desirable expected outcomes.
("Right" and "wrong", that is praisweorthiness and blameability are concepts that belong to deontology. A good outcome in the consequentialist sense, one that is a generally desired, is a different concept from deontological right. Ethics does not have a single subject mtter ..it is about goodness in the sense of desireable ends and goodness in the sense of right behaviour and goodness in the sense of virtue.)
The advantages of consequentialism can still be retained by basing rules on expected consequences. That is very much a compromise, though. A finite and cognitively manageable set of rules can only approximate the case-by-case calculations of an ideal ethical reasoner. But ideal ethical reasoners don't exist. (But some people reason better than others, even though everyone is obliged to follow the same set of rules...)
The explicit construction of rules is apparent in modern, technologically advanced societies, since such societies face challenges to adapt socially to their technological innovations. Nonetheless, the rules of a more traditional society can be retrospectively seen as gradual adaptations, existing in order to bring about desirable consequences. And inasmuch as ethical rules exist to fulfil a purpose, bringing about desirable consequences, they can be seen as doing so better or worse.
People need to actually be able to act on morality, which is where virtue (in one sense) comes in. Virtue can mean moral fibre, an inner capacity to do what is not in your selfish interest, or, alternatively, moral standing or status that rewards people for being moral. Virtue correlates more with reward, deontology more with punishment.
But there’s something else, which is a very finite legible learning algorithm that can automatically find all those things
Is there? I see a lot of talk about brain algorithms here, but I have never seen one stated... made "legible".
—the object-level stuff and the thinking strategies at all levels. The genome builds such an algorithm into the human brain
Does it? Rationalists like to applaud such claims, but I have never seen the proof.
And it seems to work!
Does it? Even if we could answer every question we have ever posed, we could still have fundamental limitations. If you did have a fundamental cognitive deficit that prevents you from understanding some specific X, how would you know? You need to be able to conceive of X before conceiving that you don't understand X. It would be like the visual blind spot... which you cannot see!
And then I’m guessing your response would be something like: there isn’t just one optimal “legible learning algorithm” as distinct from the stuff that it’s supposed to be learning. And if so, sure
So why bring it up?
there isn’t just one optimal “legible learning algorithm”
Optimality -- doing things efficiently -- isn't the issue; the issue is not being able to do certain things at all.
I think this is very related to the idea in Bayesian rationality that priors don’t really matter once you make enough observations.
The idea is wrong. Hypotheses matter, because if you haven't formulated the right hypothesis, no amount of data will confirm it. Only worrying about the weighting of priors is playing in easy mode, because it assumes the hypothesis space is covered. Fundamental cognitive limitations could manifest as the inability to form certain hypotheses. How many hypotheses can a chimp form? You could show a chimp all the evidence in the world, and it's not going to hypothesize general relativity.
Rationalists always want to reply that Solomonoff inductors avoid the problem on the basis that SIs consider "every" "hypothesis"... but they don't, several times over. It's not just that they are uncomputable, it's also that it's not known that every hypothesis can be expressed as a programme. The ability to range over a complete space does not equate to the ability to range over Everything.
Here’s an example: If you’ve seen a pattern “A then B then C” recur 10 times in a row, you will start unconsciously expecting AB to be followed by C. But “should” you expect AB to be followed by C after seeing ABC only 2 times? Or what if you’ve seen the pattern ABC recur 72 times in a row, but then saw AB(not C) twice? What “should” a learning algorithm expect in those cases? You can imagine a continuous family of learning algorithms, that operate on the same underlying principles.
A set of underlying principles is a limitation. SIs are limited to computability and the prediction of a sequence of observations. You're writing as if something like prediction of the next observation is the only problem of interest, but we don't know that Everything fits that pattern. The fact that Bayes and Solomonoff work that way is of no help, as shown above.
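(To make the quoted "continuous family" concrete, here is a minimal sketch of my own, assuming a Laplace-style pseudocount prior. Note that every member of the family still shares the same sequential-prediction framing, which is the limitation at issue.)

```python
# A one-parameter family of sequence predictors (my illustration, not the
# OP's algorithm). Each member estimates P(C follows AB) from counts,
# smoothed by a pseudocount alpha -- in effect, the prior.
def predict_c_given_ab(times_abc: int, times_ab_not_c: int, alpha: float) -> float:
    """Laplace-style smoothed estimate of P(C | AB)."""
    return (times_abc + alpha) / (times_abc + times_ab_not_c + 2 * alpha)

# Varying alpha sweeps through the "continuous family": here, 72 ABC's
# and 2 AB(not C)'s, as in the quoted example.
for alpha in (0.1, 1.0, 10.0):
    print(alpha, predict_c_given_ab(72, 2, alpha))
```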
But within this range, I acknowledge that it’s true that some of them will be able to learn different object-level areas of math a bit faster or slower, in a complicated way, for example.
But you haven't shown that efficiency differences are the only problem. The nonexistence of fundamental no-go areas certainly doesn't follow from the existence of efficiency differences.
, it can still figure things out with superhuman speed and competence across the board
The definition of superintelligence means that "across the board" is the range of things humans do, so if there is something humans can't do at all, an ASI is not definitionally required to be able to do it.
By the same token, nobody ever found the truly optimal hyperparameters for AlphaZero, if those even exist, but AlphaZero was still radically superhuman
The existence of superhuman performance in some areas doesn't prove adequate performance in all areas, so it is basically irrelevant to the original question, the existence of fundamental limitations in humans.
OP discusses maths from a realist perspective. If you approach it as a human construction, the problem about maths is considerably weakened...but the wider problem remains, because we don't know that maths is Everything.
this is conflating the reason for why one knows/believes P versus the reason for why P,
Of course, that only makes sense assuming realism.
You are understating your own case, because there is a difference between mere infinity and All Kinds of Everything. An infinite collection of one kind of thing can be relatively tractable.
Again, those are theories of consciousness, not definitions of consciousness.
I would agree that people who use consciousness to denote the computational process vs. the fundamental aspect generally have different theories of consciousness, but they’re also using the term to denote two different things.
But that doesn't imply that they disagree about (all of) the meaning of the term "qualia"... since denotation (extension, reference) doesn't exhaust meaning. The other thing is connotation, AKA intension, AKA sense.
https://en.m.wikipedia.org/wiki/Sense_and_reference
Everyone can understand that qualia are, minimally, things like the-way-a-tomato-seems-to-you, so that's agreement on sense, and the disagreement on whether the referent is a "physical property", "nonphysical property", "information processing", etc., arises from different theoretical stances.
(I think this is bc consciousness notably different from other phenomena—e.g., fiber decreasing risk of heart disease—where the phenomenon is relatively uncontroversial and only the theory about how the phenomenon is explained is up for debate.
That's an odd use of "phenomenon"... the physical nature of a heart attack is uncontroversial, and the controversy is about the physical cause. Whereas qualia are phenomenal properly speaking... they are appearances... and yet lack a prima facie interpretation in physical (or information-theoretic) terms. Since qualia do present themselves immediately as phenomenal, outright denial... feigning anaesthesia or zombiehood... is a particularly poor response to the problem. And the problem is different from "how does one physical event cause another one that is subsequent in time"... it's more like "how or whether qualia, phenomenal consciousness, supervene synchronously on brain states".
With consciousness, there are a bunch of “problems” about which people debate whether they’re even real problems at all (e.g., binding problem, hard problem). Those kinds of disagreements are likely causally upstream of inconsistent terminology.)
If you don't like the terminology, you can invent better terminology. Throughout this exchange, you have been talking in terms of "consciousness", and I have been replying in terms of "qualia", because "qualia" is a term that was invented to home in on the problem, on the aspects of consciousness where it isn't obviously just information processing. (I'm personally OK with using information-theoretic explanations, such as global workspace theory, to address Easy Problem issues, such as access consciousness.)
There's a lot to be said for addressing terminological issues, but it's not an easy win for camp #1.
If it was that easy to understand, we wouldn’t be here arguing about it.
Definitions are not theories
Even if there is agreement about the meaning of the word, there can also be disagreement about the correct theory of qualia. Definitions always precede theories -- we could define "Sun" for thousands of years before we understood its nature as a fusion reactor. Shared definitions are a prerequisite of disagreement, rather than just talking past each other.
The problem of defining qualia -- itself the first stage in specifying the problem -- can be much easier than the problem of coming up with a satisfactory theoretical account, a solution. It's a term that was created by an English-speaking philosopher less than a hundred years ago, so it really doesn't present the semantic challenges of some philosophical jargon.
(The resistance to qualia can also be motivated by unwillingness to give up commitments -- bias, bluntly -- not just semantic confusion.)
My claim is that arguments about qualia are (partially) caused by people actually having different cognitive mechanisms that produce different intuitions about how experience works.
Semantic confusions and ideological rigidity already abound, so there is no need to propose differing cognitive mechanisms.
Theories about how qualia work don't have to be based on direct intuition. Chalmers' arguments are complex, and stretch over hundreds of pages.
Well, I’m glad you’ve settled the nature of qualia. There’s a discussion downthread, between TAG and Signer, which contains several thousand words of philosophical discussion of qualia.
Again, the definition is one thing and the "nature"... the correct ontological theory... is another. The definition is explained by Wikipedia; the correct theory, the ultimate explanation, is not.
Seriously, I definitely have sensations.
"Sensation" is ambiguous between a functional capacity -- stopping at a red light -- and a felt quality -- what red looks like. The felt quality is definitely over and above the function, but that's probably not your concern.
I just think some people experience an extra thing on top of sensations, which they think is an indissoluble part of cognition, and which causes them to find some things intuitive that I find incomprehensible.
It's true that some people have concluded nonphysical theories from qualia... but it doesn't follow that they must be directly perceiving or intuiting any kind of nonphysicalism in qualia themselves. Because it's not true that every conclusion has to be arrived at immediately, without any theoretical, argumentative background. Chalmers' arguments don't work that way and are in fact quite complex.
Physics is a complex subject that needs to be learnt. To know what is physical is therefore not a matter of direct intuition...so to know that qualia are not physical is also not a matter of direct intuition.
There's no existing, successful, physical or computational theory of qualia. The people who think qualia aren't physical, aren't necessarily basing that on some kind of direct perception, and don't necessarily know less physics than the people who do.
Consciousness itself is overloaded (go figure!) since it can refer to both “a high-level computational process” and “an ontologically fundamental property of the universe”.
Again, those are theories of consciousness, not definitions of consciousness.
Qualia can be a synonym for consciousness (if you are in Camp #2) or mean something like “this incredibly annoying and ill-defined concept that confused people insist on talking about” (if you’re in Camp #1). I recommend only using this term if you’re talking to a Camp #2 audience.
There are many more than two possibilities. You can take consciousness seriously, in Chalmers' sense, and accept that there is a Hard Problem, without denying that there are other, easier aspects to consciousness. Chalmers accepts that there are easy problems as well.
And the definition problem becomes much easier if you remember that definitions aren't theories.
Free will in the general context means that you are in complete control of the decisions you make, that is farthest from the truth. Sure you can hack your body and brain ...
Why "complete" control? You can disprove anything , in a fake sort of way, by setting then bar high -- if you define memory as Total Recall, it turns out no-one has a memory.
Who's this "you" who's separate from both brain and body? Shouldn't you be asking how the machine works? A machine doesn't have to be deterministic , and can be self-modifying.
When Robert Sapolsky says there is no free will, he means that if we know your current body state perfectly, we can predict with 100% accuracy what you will do in the next moment given an input.
We can't, in general. There's no perfect predictability in the human sciences.
I specifically mentioned wife instead of a generic friends
Then you are picking a special case to make a general point.
If we sufficiently understand how the brain and body works we should be able to predict.
Why? Determinism isn't a fact. We don't have evidence of physical determinism, so we can't make a bottom-up argument, and we don't have perfect predictability in psychology, either.
Why would anyone ever care if a god could predict their actions, when no such god exists, and humans can only make bad guesses?
Predictability implies determinism, determinism implies no (libertarian) free will.
Possibly we are just in one of the mathematical universes that happens to have an arrow of time—the arrow seems to arise from fairly simple assumptions, mainly an initial condition and coarse graining
You are misunderstanding the objection. It's not just an arrow of time in the sense of order, such as increasing entropy, it's the passingness of time. An arrow can exist statically, but that's not how we experience time. We don't experience it as a simultaneous group of moments that happen to be ordered, we experience one moment at a time. A row of houses is ordered, but not one-at-a-time; a succession of movie frames is both.
The valence of pleasure and pain is not just a sign change, they serve vastly different psychological functions and evolved for distinct evolutionary reasons.
And the associated qualia? What's the mathematical theory of qualia? Is it bottom-up... we have some mathematical descriptions of qualia, and it's only a matter of time before we have the rest... or top-down... everything is mathematical, so qualia must be...?
(Extensively revised and edited.)
Reductionism
Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.
Things like airplane wings actually are, at least as approximations. I don't see why you are approvingly quoting this: it conflates reduction and elimination.
But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces.
If that's a scientific claim, it needs to be treated as falsifiable, not as dogma.
You can’t handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)
It's not black and white. A simplified model isn't entirely out there, but it's partly out there. There's still a difference between an aeroplane wing and horse feathers.
Vitalistic Force
Vitalistic force (§3.3) is an intuitive concept that we apply to animals, people, cartoon characters, and machines that “seem alive” (as opposed to seeming “inanimate”).
It amounts to a sense that something has intrinsic important unpredictability in its behavior
The intuitive model says that the decisions are caused by the homunculus, and the homunculus is infused with vitalistic force and hence unpredictable. And not just unpredictable as a state of our limited modeling ability, but unpredictable as an intrinsic property of the thing itself—analogous to how it’s very different for something to be “transparent” versus “of unknown color”, or how “a shirt that is red” is very different from “a shirt that appears red in the current lighting conditions
Unpredictability is the absence of a property: predictability. Vitalistic force sounds like the presence of one. It's difficult to see why a negative property would equate to a positive one. We don't have to regard an unpredictable entity as quasi-alive. We don't regard gambling machines in casinos as quasi-alive. Our ancestors used to regard the weather as quasi-alive, but we don't -- so it's not all that compelling. We also don't have to regard living things as unpredictable -- an ox ploughing a furrow is pretty predictable. Unpredictability and vitalism aren't the same concept, and aren't very rigidly linked, psychologically.
It doesn’t veridically (§1.3.2) correspond to anything in the real world (§3.3.3).
Except...
Granted, one can argue that observer-independent intrinsic unpredictability does in fact exist “in the territory”. For example, there’s a meaningful distinction between “true” quantum randomness versus pseudorandomness. However, that property in the “territory” has so little correlation with “vitalistic force” in the map, that we should really think of them as two unrelated things.
So let's say those are two different things: unpredictableness -- non-pseudo randomness -- could exist in the territory, and could found a real, non-supernatural version of free will. Vitality could exist in the territory too -- reductionism only requires that it is not fundamental, not that it is not real at all. It could be as real as an airplane wing. Reduction is not elimination.
However, that property in the “territory” has so little correlation with “vitalistic force” in the map, that we should really think of them as two unrelated things
So what is the definition of vitalistic force that's a) different from intrinsic surprisingness, and b) incapable of existing in the territory even as an approximation?
Homunculi
The strong version of the homunculus, the one-stop shop that explains everything about consciousness, identity, and free will, is probably false... but bits and pieces of it could still be rescued.
Function: it's possible that there are control systems even if they don't have a specific physical location.
Location: It's quite possible for higher brain areas to be a homunculus (or homunculi) lite, in the sense that they exert executive control, or are where sensory data are correlated. Rejecting ghostly homunculi because they are ghostly doesn't entail rejecting physical homunculi, like the sensory and motor homunculi.
Vitalism: It's possible for intrinsic surprisingness to exist in the territory, because intrinsic surprisingness is the same thing as indeterminism.
There's also a further level of confusion about whether your idea of homunculus is observer or observed.
Are "we" are observing "ourselves" as a vitalistic homunculus , observing the rest of ourselves? If the latter, which is the real self, the the observer or the homunculus?
As discussed in Post 1, the cortex’s predictive learning algorithm systematically builds generative models that can predict what’s about to happen
No one has discovered a brain algorithm, so far.
Free Will
the suite of intuitions related to free will has spread its tentacles into every corner of how we think and talk about motivation, desires, akrasia, willpower, self, and more
And now we come to the part of the argument where an objective, unbiased assessment of free will concludes that the concept (or rather concepts) are so utterly broken and wrong that any vestige has to be "rooted out".
Now, I expect that most people reading this are scoffing right now that they long ago moved past their childhood state of confusion about free will. Isn’t this “Physicalism 101” stuff?
It's the case that a lot of people think that the age-old problem of free will is solved at a stroke by "physics, lol"... but there are also sophisticated naturalistic defences.
There are two dimensions to the problem: the what-we-mean-by-free-will dimension, and the what-reality-offers-us dimension. The question of free will partially depends on how free will is defined, so accepting a basically scientific approach does not avoid the "semantic" issues of how free will, determinism, and so on, are best conceptualised.
I don’t know what people mean by “free will” and I don’t think they usually do either.
Professional philosophers are quite capable of stating their definitions, and you are capable of looking them up.
Mr. Yudkowsky has no novel insight to offer into how the territory works, nor any novel insight into the correct semantics of free will. He has not solved either subproblem, let alone both. He has proposed a mechanism (not novel) by which the feeling of free will could be a predictable illusion, but that falls short of proving that it is... he basically relies on having an audience who are already strongly biased against free will.
To dismiss free will just on the basis of physicalism, not even deterministic physics, is to tacitly define it as supernatural. Does everyone define it that way? No, there are compatibilists and naturalistic libertarians.
Compatibilism is a naturalistic theory of free will, and libertarianism can be.
(https://insidepoliticalscience.com/libertarian-free-will-vs-compatibilism/)
To provide a mechanism by which the feeling of free will could be an illusion, which he has done, does not show that it actually is an illusion, because of the usual laws of modal logic -- he needs to show that his model is the only possibility, not just a possibility. (These problems were pointed out long ago, of course.)
It is possible, in the right kind of universe, to have libertarian free will backed by an entirely physical mechanism, since physics may be indeterministic... and to have a veridical perception of it. The existence of another possibility, where the sense of free will is illusory, doesn't negate the veridical possibility. "Yes, but physicalism" doesn't either.
You don't observe your brain processes, so you don't observe them as deterministic or indeterministic. An assumption of determinism has been smuggled in by a choice of language, the use of the word "algorithm". But, contrary to what many believe, algorithms can be indeterministic.
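A minimal sketch of what an indeterministic algorithm looks like (my illustration, not a brain model):

```python
import random

def choose(options: list[str], weights: list[float]) -> str:
    """Every step here is well-defined, yet the output is not fixed by
    the input: an indeterministic algorithm."""
    return random.choices(options, weights=weights, k=1)[0]

# Same input, possibly different outputs on different runs. (Python's
# random module is pseudorandom; a genuinely indeterministic machine
# could draw from a hardware or quantum noise source instead.)
print(choose(["tea", "coffee"], [0.6, 0.4]))
```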
If someone demonstrated that brains run on an indeterministic algorithm that fulfils the various criteria for libertarian free will, would you still deny that humans have any kind of free will?
Didn’t Eliezer Yudkowsky describe free will as “about as easy as a philosophical problem in reductionism can get, while still appearing ‘impossible’ to at least some philosophers”?
Questions can seem easy if you don't understand their complexities.
Yudkowsky posted his solution to the question of free will a long time ago, and the problems were pointed out almost immediately. And ignored for over a decade.
More precisely: If there are deterministic upstream explanations of what the homunculus is doing and why, e.g. via algorithmic or other mechanisms happening under the hood, then that feels like a complete undermining of one’s free will and agency (§3.3.6)
Why? How can you demonstrate that without a definition of free will? Obviously, it would have no impact given the compatibilist definition of free will, for instance.
I have had a lot of discussions on the subject, and I have noticed that many laypeople believe in dualism, or a ghost-in-the-machine theory. In that case, I suppose learning that the machine is doing it could be devastating. But... I said laypeople. Professional philosophers generally don't define FW that way, and don't think that dualism and free will are the same thing.
Typical definitions are:

- The ability or discretion to choose; free choice.
- The power of making choices that are neither determined by natural causality nor predestined by fate or divine will.
- A person's natural inclination; unforced choice.
And if there are probabilistic upstream explanations of what the homunculus is doing and why, e.g. the homunculus wants to eat when hungry, then that correspondingly feels like a partial undermining of free will and agency, in proportion to how confident those predictions are.
That's hardly an undermining of libertarian free will at all... LFW only requires that you could have done otherwise... not that you could have done anything at all, or that you could defy statistical laws.
The way intuitive models work (I claim) is that there are concepts, and associations / implications / connotations of those concepts. There’s a core intuitive concept “carrot”, and it has implications about shape, color, taste, botanical origin, etc. And if you specify the shape, color, etc. of a thing, and they’re somewhat different from most normal carrots, then people will feel like there’s a question “but now is it really a carrot?” that goes beyond the complete list of its actual properties.
There's a way of thinking about free will and selfhood that is just a list of naturalistically respectable properties, and nothing beyond. Libertarianism doesn't require imperceptible essences; on the naturalistic view, it could just be the operation of a ghost-free machine.
According to science, the human brain/body is a complex mechanism made up of organs and tissues which are themselves made of cells which are themselves made of proteins, and so on.
Science does not tell you that you are a ghost in a deterministic machine, trapped inside it and unable to control its operation. Or that you are an immaterial soul trapped inside an indeterministic machine. Science tells you that you are, for better or worse, the machine itself.
Although I have used the term "machine", I do not intend to imply that a machine is necessarily deterministic. It is not known whether physics is deterministic, so "you are a deterministic machine" does not follow from "you are entirely physical". The correct conclusion is "you are no more undetermined than physics allows you to be".
So the scientific question of free will becomes the question of how the machine behaves, whether it has the combination of unpredictability, self direction, self modification and so on, that might characterise free will... depending on how you define free will.
There is a whole science of self-controlling machines: cybernetics. Airplane autopilots and, more recently, self-driving cars are examples. Self-control without indeterminism is not sufficient for libertarian free will, but indeterminism without self-control is not sufficient either.
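For instance, a thermostat loop, sketched minimally (my own illustration): fully self-controlling, fully deterministic.

```python
def thermostat_step(temp: float, setpoint: float, heater_on: bool) -> bool:
    """One update of a self-controlling system. Deterministic: the same
    inputs always yield the same output."""
    if temp < setpoint - 0.5:
        return True    # too cold: switch heater on
    if temp > setpoint + 0.5:
        return False   # too warm: switch heater off
    return heater_on   # in the deadband: hold the current state
```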
All of those things can be ascertained by looking at a person (or an animal or a machine) from the outside. They don't require a subjective inner self... unless you define free will that way. If you define free will as dependent on a ghostly inner self, then you are not going to have a scientific model of free will.
Consciousness
As a typical example, Loch Kelly at one point mentions “the boundless ground of the infinite, invisible life source”. OK, I grant that it feels to him like there’s an infinite, invisible life source. But in the real world, there isn’t. I’m picking on Loch Kelly, but his descriptions of PNSE are much less mystical than most of them.
I grant that it feels to you like you have certain knowledge of the universe's true ontology, but at best what you actually have is a set of scientific models -- mental constructs, maps -- that make good predictions. I am not saying I have certain knowledge that the mystical ontology is correct; I am saying we are both behind Kantian veils. Prediction underdetermines ontology. So long as the boundless life source somehow behaves just like matter, under the right circumstances, physics can't disprove it -- just as physicalism requires matter to behave like consciousness, somehow, under the right circumstances.
The old Yudkowsky post “How An Algorithm Feels From Inside” is a great discussion of this point.
As has been pointed out many times, there is no known reason for an algorithm to feel like anything from the inside
This Cartesian dualism in various disguises is at the heart of most “paradoxes” of consciousness. P-zombies are beings materially identical to humans but lacking this special res cogitans sauce, and their conceivability requires accepting substance dualism.
Only their physical possibility requires some kind of nonphysicality. Physically impossible things can be conceivable if you don't know why they are physically impossible, if you can't see the contradiction between their existence and the laws of physics. The conceivability of zombies is therefore evidence, at least, for phenomenal consciousness not having been explained. Which it hasn't been anyway: zombies are in no way necessary to state the HP.
The famous “hard problem of consciousness” asks how a “rich inner life” (i.e., res cogitans) can arise from mere “physical processing” and claims that no study of the physical could ever give a satisfying answer.
A rich inner life is something you have whatever your metaphysics. It doesn't go away when you stop believing in it. It's the phenomenon to be explained. Res cogitans, or some other dualistic metaphysics, is among a number of ways of explaining it... not something needed to pose the problem.
The HP only claims that the problem of phenomenal consciousness is hard-er than other aspects of consciousness. Further arguments by Chalmers tend towards the lack of a physical solution, but you are telescoping them all into the same issue.
We have also solved the mystery of “the dress”:
But not the Hard Problem: the HP is about having any qualia at all, not about ambiguous or anomalous qualia. There would be an HP if everyone just saw the same uniform shade of red all the time.
As with life, consciousness can be broken into multiple components and aspects that can be explained, predicted, and controlled. If we can do all three we can claim a true understanding of each
If. But we in fact lag in understanding the phenomenal aspect, compared to the others. In that sense, there is a de facto hard-er problem.
The important point here is that “redness” is a property of your brain’s best model for predicting the states of certain neurons. Redness is not “objective” in the sense of being “in the object".
No, that's not important. The HP starts with the subjectivity of qualia, it doesn't stop with it.
Subjectivity isn't just the trivial issue of being had by a subject, it is the serious issue of incommunicability, or ineffability.
Philosophers of consciousness have committed the same sins as “philosophers of life” before them: they have mistaken their own confusion for a fundamental mystery, and, as with élan vital, they smuggled in foreign substances to cover the gaps. This is René Descartes’ res cogitans, a mental substance that is separate from the material.
No, you can state and justify the HP without assuming dualism.
Are you truly exercising free will or merely following the laws of physics?
Or both?
And how is the topic of free will related to consciousness anyway?
There is no “spooky free will”
There could be non-spooky free will... that is more than a mere feeling. Inasmuch as Seth has skipped that issue -- whether there is a physically plausible, naturalistic free will -- he hasn't solved free will.
There are ways in which you could have both, because there are multiple definitions of free will, as well as open questions about physics. Apart from compatibilist free will, which is obviously compatible with physics, including deterministic physics, naturalistic libertarian free will is possible in an indeterministic universe. NLFW is just an objectively determinable property of a system, a man-machine. Free will doesn't have to be explained away, and doesn't directly require an assumption of dualism.
But selfhood is itself just a bundle of perceptions, separable from each other and from experiences like pain or pleasure.
The subjective sense-of-self is, pretty much by definition. Whether there are any further objective facts, that would answer questions about destructive teleportation and the like, is another question. As with free will, explaining the subjective aspect doesn't explain away the objective aspect.
First, computationalism doesn’t automatically imply that, without other assumptions, and indeed there are situations where you can’t clone data perfectly,
That's a rather small nit. The vast majority of computationalists are talking about classical computation.
Indeed, I was basically trying to say that computationalism is so general that it cannot predict any result that doesn’t follow from pure logic/tautologies,
That's not much of a boast: pure logic can't solve metaphysical problems about consciousness, time, space, identity, and so on. That's why they are still problems. There's a simple logical theory of identity, but it doesn't answer the metaphysical problems, what I have called the synchronic and diachronic problems.
Secondly, one could semi-reasonably argue that the inability to clone physical states is an artifact of our technological immaturity, and that in the far-future, it will be way easier to clone physical states to a level of fidelity that is way closer to the level of copyability of computer programs.
Physicalism doesn't answer the problems. You need some extra information about how similar or different physical things are in order to answer questions about whether they are the same or different individuals. At least, if you want to avoid the implications of raw physicalism -- along the lines of "if one atom changes, you're a different person". An abstraction would be useful -- but it needs to be the right one.
Third, I gave a somewhat more specific theory of identity in my linked answer, and it’s compatible with both computationalism and physicalism as presented, I just prefer the computationalist account for the general case and the physicalist answer for specialized questions.
You seem to be saying that consciousness is nothing but having a self model, and whatever the self believes about itself is the last word... that there are no inconvenient objective facts that could trump a self-assessment ("No, you're not the original Duncan Idaho, you're ghola number 476. You think you're the one and only Duncan because your brain state is a clone of the original Duncan's"). That makes things rather easy. But the rationalist approach to the problem of identity generally relies on bullet-biting about whatever solution is appealing -- if computationalism is correct, you can be cloned, and then you really are in two places at once.
My main non-trivial claim here is that the sense of a phenomenal experience/awareness fundamentally comes down to the fact that the brain needs to control the body, and vice-versa, so you need a self-model of yourself, which becomes a big part of why we say we have consciousness, because we are referring to our self models when we do that.
Well, how? If you could predict qualia from self-control, you'd have a solution -- not a dissolution -- to the HP.
Another reason why the hard problem seems hard is that way too many philosophers are disinclined to gather any data on the phenomenon of interest at all, because they don’t have backgrounds in neuroscience, and instead want to purely define consciousness without reference to any empirical reality.
Granting that "empirical" means "outer empirical"... not including introspection.
I don't think there is much evidence for the "purely". Chalmers doesn't disbelieve in the easy problem aspects of consciousness.
We’re talking about “physical processes”
We are talking about functionalism -- it's in the title. I am contrasting physical processes with abstract functions.
In ordinary parlance, the function of a physical thing is itself a physical effect...toasters toast, kettles boil, planes fly.
In the philosophy of mind, a function is an abstraction, more like the mathematical sense of a function. In maths, a function takes some inputs and produces some outputs. Well known examples are familiar arithmetic operations like addition, multiplication, squaring, and so on. But the inputs and outputs are not concrete physical realities. In computation, the inputs and outputs of a functional unit, such as a NAND gate, always have some concrete value, some specific voltage, but not always the same one. Indeed, general Turing-complete computers don't even have to be electrical -- they can be implemented in clockwork, hydraulics, photonics, etc.
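A minimal sketch of that contrast in code (the names and thresholds are illustrative, mine rather than anything in the original comment): one abstract mapping, multiply realised in different "substrates".

```python
# The abstract function: just an input-output mapping.
def nand_abstract(a: bool, b: bool) -> bool:
    return not (a and b)

# One concrete realisation: TTL-style voltages (illustrative numbers).
def nand_ttl(v1: float, v2: float) -> float:
    HIGH_THRESHOLD, OUT_HIGH, OUT_LOW = 2.0, 5.0, 0.0
    return OUT_LOW if (v1 > HIGH_THRESHOLD and v2 > HIGH_THRESHOLD) else OUT_HIGH

# Another realisation: water pressure in kPa, same abstract mapping.
def nand_hydraulic(p1: float, p2: float) -> float:
    THRESHOLD = 50.0
    return 0.0 if (p1 > THRESHOLD and p2 > THRESHOLD) else 100.0

# The concrete inputs and outputs differ (volts vs kPa), but under each
# encoding all three compute the same abstract NAND function.
assert nand_abstract(True, True) is False
assert nand_ttl(5.0, 5.0) == 0.0
assert nand_hydraulic(100.0, 100.0) == 0.0
```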
This is the basis for the idea that a computer programme can be the same as a mind, despite being made of different matter -- it implements the same abstract functions. The abstractness of the philosophy-of-mind concept of a function is part of its usefulness.
Searle is a famous critic of computationalism, and his substitute for it is a biological essentialism in which the generation of consciousness is a brain function -- in the concrete sense of function. It's true that something whose concrete function is to generate consciousness will generate consciousness... but it's vacuously, trivially true.
The point is that the functions which this physical process is implementing are what’s required for consciousness not the actual physical properties themselves.
If you mean that abstract, computational functions are known to be sufficient to give rise to all aspects of consciousness, including qualia, that is what I am contesting.
I think I’m more optimistic than you that a moderately accurate functional isomorph of the brain could be built which preserves consciousness (largely due to the reasons I mentioned in my previous comment around robustness).
I'm less optimistic because of my arguments.
But putting this aside for a second, would you agree that if all the relevant functions could be implemented in silicon then a functional isomorph would be conscious?
No, not necessarily. That, in the "not necessarily" form, is what I've been arguing all along. I also don't think that consciousness has a single meaning, or that there is agreement about what it means, or that it is a simple binary.
The controversial point is whether consciousness in the hard problem sense -- phenomenal consciousness, qualia -- will be reproduced with reproduction of function. It's not controversial that easy problem consciousness -- capacities and behaviour -- will be reproduced by functional reproduction. I don't know which you believe, because you are only talking about consciousness not otherwise specified.
If you do mean that a functional duplicate will necessarily have phenomenal consciousness, and you are arguing the point, not just holding it as an opinion, you have a heavy burden:
You need to show some theory of how computation generates conscious experience. Or you need to show why the concrete physical implementation couldn't possibly make a difference.
@rife
Yes, I’m specifically focused on the behaviour of an honest self-report
Well, you're not rejecting phenomenal consciousness wholesale.
fine-grained information becomes irrelevant implementation details. If the neuron still fires, or doesn’t, smaller noise doesn’t matter. The only reason I point this out is specifically as it applies to the behaviour of a self-report (which we will circle back to in a moment). If it doesn’t materially affect the output so powerfully that it alters that final outcome, then it is not responsible for outward behaviour.
But outward behaviour is not what I am talking about. The question is whether functional duplication preserves (full) consciousness. And, as I have said, physicalism is not just about fine-grained details. There's also the basic fact of running on the metal.
I’m saying that we have ruled out that a functional duplicate could lack conscious experience because: we have established conscious experience as part of the causal chain
"In humans". Even if it's always the case that qualia are causal in humans, it doesn't follow that reports of qualia in any entity whatsoever are caused by qualia. Yudkowsky's argument is no help here, because he doesn't require reports of consciousness to be *directly" caused by consciousness -- a computational zombies reports would be caused , not by it's own consciousness, but by the programming and data created by humans.
to be able to feel something and then output a description through voice or typing that is based on that feeling. If conscious experience was part of that causal chain, and the causal chain consists purely of neuron firings, then conscious experience is contained in that functionality.
Neural firings are specific physical behaviour, not abstract function. Computationalism is about abstract function.
I don’t find this position compelling for several reasons:
First, if consciousness really required extremely precise physical conditions—so precise that we’d need atom-by-atom level duplication to preserve it, we’d expect it to be very fragile.
Don't assume that, then. Minimally, non-computationalist physicalism only requires that the physical substrate makes some sort of difference. Maybe approximate physical resemblance results in approximate qualia.
Yet consciousness is actually remarkably robust: it persists through significant brain damage, chemical alterations (drugs and hallucinogens) and even as neurons die and are replaced.
You seem to be assuming a maximally coarse-grained either-conscious-or-not model.
If you allow for fine-grained differences in functioning and behaviour, all those things produce fine-grained differences. There would be no point in administering anaesthesia if it made no difference to consciousness. Likewise, there would be no point in repairing brain injuries. Are you thinking of consciousness as a synonym for personhood?
We also see consciousness in different species with very different neural architectures.
We don't see that they have the same kind or level of consciousness.
Given this robustness, it seems more natural to assume that consciousness is about maintaining what the system is doing (implementing feedback loops, self-models, integrating information, etc.) rather than its exact physical state.
Stability is nothing like a sufficient explanation of consciousness, particularly the hard problem of conscious experience... even if it is necessary. But it isn't necessary either, as the cycle of sleep and waking tells all of us every day.
Second, consider what happens during sleep or under anaesthesia. The physical properties of our brains remain largely unchanged, yet consciousness is dramatically altered or absent.
Obviously the electrical and chemical activity changes. You are narrowing "physical" to "connectome". Physicalism is compatible with the idea that specific kinds of physical activity are crucial.
Immediately after death (before decay sets in), most physical properties of the brain are still present, yet consciousness is gone. This suggests consciousness tracks what the brain is doing (its functions)
No, physical behaviour isn't function. Function is abstract, physical behaviour is concrete. Flight simulators functionally duplicate flight without flying. If function were not abstract, functionalism would not lead to substrate independence. You can build a model of ion channels and synaptic clefts, but the modelled sodium ions aren't actual sodium ions, and if the universe cares about activity being implemented by actual sodium ions, your model isn't going to be conscious.
Rather than what it physically is. The physical structure has not changed but the functional patterns have changed or ceased.
Physical activity is physical.
I acknowledge that functionalism struggles with the hard problem of consciousness—it’s difficult to explain how subjective experience could emerge from abstract computational processes. However, non-computationalist physicalism faces exactly the same challenge. Simply identifying a physical property common to all conscious systems doesn’t explain why that property gives rise to subjective experience.
I never said it did. I said it had more resources. It's badly off, but not as badly off.
Yet, we generally accept behavioural evidence (including sophisticated reasoning about consciousness) as evidence of consciousness in humans.
If we can see that someone is a human, we know that they have a high degree of biological similarity. So we'll have behavioural similarity, and biological similarity, and it's not obvious how much lifting each is doing.
Functionalism doesn’t require giving up on qualia, but only acknowledging physics. If neuron firing behavior is preserved, the exact same outcome is preserved,
Well, the externally visible outcome is.
If I say “It’s difficult to describe what it feels like to taste wine, or even what it feels like to read the label, but it’s definitely like something”—there are two options: either it’s perpetual coincidence that my experience of attempting to translate the feeling of qualia into words always aligns with words that actually come out of my mouth, or it is not. Since perpetual coincidence is statistically impossible, we know that experience had some type of causal effect.
In humans.
So far that tells us that epiphenomenalism is wrong, not that functionalism is right.
The binary conclusion of whether a neuron fires or not encapsulates any lower level details, from the quantum scale to the micro-biological scale
What does "encapsulates" mean? Are you saying that fine-grained information gets lost? Note that the basic fact of running on the metal is not lost.
—this means that the causal effect experience has is somehow contained in the actual firing patterns.
Yes. That doesn't mean the experience is, because a computational Zombie will produce the same outputs even if it lacks consciousness, uncoincidentally.
A computational duplicate of a believer in consciousness and qualia will continue to state that it has them, whether it does or not, because it's a computational duplicate, so it produces the same output in response to the same input.
We have already eliminated the possibility of happenstance or some parallel non-causal experience,
You haven't eliminated the possibility of a functional duplicate still being a functional duplicate if it lacks conscious experience.
Basically
- Epiphenomenalism
- Coincidence
- Functionalism
Aren't the only options.
Imagine that we could successfully implement a functional isomorph of the human brain in silicon. A proponent of 2) would need to explain why this functional isomorph of the human brain which has all the same functional properties as an actual brain does not, in fact, have consciousness.
Physicalism can do that easily, because it implies that there can be something special about running unsimulated, on bare metal.
Computationalism, even very fine-grained computationalism, isn't a direct consequence of physicalism. Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That's the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn't imply computationalism, and arguments against p-zombies don't imply the non-existence of c-zombies -- unconscious duplicates that are identical computationally, but not physically.
So it is possible, given physicalism, for qualia to depend on the real physics, the physical level of granularity, not on the higher level of granularity that is computation.
Anil Seth, where he tries to pin down the properties X which biological systems may require for consciousness: https://osf.io/preprints/psyarxiv/tz6an. His argument suggests that extremely complex biological systems may implement functions which are non-Turing-computable.
It presupposes computationalism to assume that the only possible defeater for a computational theory is the wrong kind of computation.
My contention in this post is that if they’re able to reason about their internal experience and qualia in a sophisticated manner then this is at least circumstantial evidence that they’re not missing the “important function.”
There's no evidence that they are not stochastic-parroting, since their training data wasn't pruned of statements about consciousness.
If the claim of consciousness is based on LLMs introspecting their own qualia and reporting on them, there's no clinching evidence they are doing so at all. You've got the fact that computational functionalism isn't necessarily true, the fact that TT-type investigations don't pin down function, and the fact that there is another potential explanation of the results.
Whether computational functionalism is true or not depends on the nature of consciousness as well as the nature of computation.
While embracing computational functionalism and rejecting supernatural or dualist views of mind
As before, they also reject non-computationalist physicalism, e.g. biological essentialism, whether they realise it or not.
It seems to privilege biology without clear justification. If a silicon system can implement the same information processing as a biological system, what principled reason is there to deny it could be conscious?
The reason would be that there is more to consciousness than information processing...the idea that experience is more than information processing not-otherwise-specified, that drinking the wine is different to reading the label.
It struggles to explain why biological implementation specifically would be necessary for consciousness. What about biological neurons makes them uniquely capable of generating conscious experience?
Their specific physics. Computation is an abstraction from physics, so physics is richer than computation, and therefore has more resources available to explain conscious experience. Computation has no resources to explain conscious experience -- there just isn't any computational theory of experience.
It appears to violate the principle of substrate independence that underlies much of computational theory.
Substrate independence is an implication of computationalism, not something that's independently known to be true. Arguments from substrate independence are therefore question begging.
Of course, there is minor substrate independence, in that brains with biological differences are able to realise similar capacities and mental states. That could be explained by a coarse-graining or abstraction other than computationalism. A standard argument against computationalism, not mentioned here, is that it allows too much substrate independence and multiple realisability -- blockheads and so on.
It potentially leads to arbitrary distinctions. If only biological systems can be conscious, what about hybrid systems? Systems with some artificial neurons? Where exactly is the line?
Consciousness doesn't have to be a binary. We experience variations in our conscious experience every day.
However, this objection becomes less decisive under functionalism. If consciousness is about implementing certain functional patterns, then the way these patterns were acquired (through evolution, learning, or training) shouldn’t matter. What matters is that the system can actually perform the relevant functions
But that can't be inferred from responses alone, since, in general, more than one function can generate the same output for a given input.
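A toy illustration of that underdetermination (my example, not the commenter's): two functions that agree on every tested input, yet are different functions.

```python
def f(x: int) -> int:
    return x

def g(x: int) -> int:
    # Agrees with f on inputs 0, 1 and 2, but diverges everywhere else.
    return x + x * (x - 1) * (x - 2)

tested_inputs = [0, 1, 2]
assert all(f(x) == g(x) for x in tested_inputs)  # same responses on the test set
assert f(3) != g(3)  # f(3) == 3, g(3) == 9: the responses didn't pin down the function
```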
It’s not clear what would constitute the difference between “genuine” experience and sophisticated functional implementation of experience-like processing
You mean a difference to an outside observer, or to the subject themself?
The same objection could potentially apply to human consciousness—how do we know other humans aren’t philosophical zombies
It's implausible given physicalism, so giving up computationalism in favour of physicalism doesn't mean embracing p-zombies.
If we accept functionalism, the distinction between “real” consciousness and a perfect functional simulation of consciousness becomes increasingly hard to maintain.
It's hard to see how you can accept functionalism without giving up qualia, and easy to see how zombies are imponderable once you have given up qualia. Whether you think qualia are necessary for consciousness is the most important crux here.
We de-emphasized QM in the post
You did a bit more than de-emphasize it in the title!
Also:
Like latitude and longitude, chances are helpful coordinates on our mental map, not fundamental properties of reality.
"Are"?
**Insofar as we assign positive probability to such theories, we should not rule out chance as being part of the world in a fundamental way.** Indeed, we tried to point out in the post that the de Finetti theorem doesn’t rule out chances, it just shows we don’t need them in order to apply our standard statistical reasoning. In many contexts—such as the first two bullet points in the comment to which I am replying—I think that the de Finetti result gives us strong evidence that we shouldn’t reify chance.
The perennial source of confusion here is the assumption that the question is whether chance/probability is in the map or the territory... but the question sidelines the "both" option. If there were strong evidence of mutual exclusion, of an XOR rather than an IOR premise, the question would be appropriate. But there isn't.
If there is no evidence of an XOR, no amount of evidence in favour of subjective probability is evidence against objective probability, and objective probability needs to be argued for (or against), on independent grounds. Since there is strong evidence for subjective probability, the choices are subjective+objective versus subjective only, not subjective versus objective.
(This goes right back to "probability is in the mind")
Occam's razor isn't much help. If you assume determinism as the obvious default, objective uncertainty looks like an additional assumption... but if you assume randomness as the obvious default, then any deterministic or quasi-deterministic law seems like an additional thing.
In general, my understanding is that in many worlds you need to add some kind of rationality principle or constraint to an agent in the theory so that you get out the Born rule probabilities, either via self-locating uncertainty (as the previous comment suggested) or via a kind of decision theoretic argument.
There's a purely mathematical argument for the Born rule. The tricky thing is explaining why observations have a classical basis -- why observers who are entangled with a superposed system don't go into superposition themselves. There are multiple aspects to the measurement problem... the existence or otherwise of a fundamental measurement process, the justification of the Born rule, the reason for the emergence of sharp pointer states, and the reason for the appearance of a classical basis. Everett theory does rather badly on the last two.
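For reference, the rule in question (my gloss; the standard purely mathematical route to it is Gleason's theorem, which derives it from the structure of probability measures on Hilbert spaces of dimension three or more):

$$P(i) = |\langle i \mid \psi \rangle|^2$$

i.e. the probability of outcome $i$ is the squared magnitude of the amplitude of the state $|\psi\rangle$ on the corresponding basis vector.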
If the authors claim that adding randomness in the territory in classical mechanics requires making it more complex, they should also notice that for quantum mechanics, removing the probability from the territory (as in Bohmian mechanics) tends to make the theories more complex.
OK, but people here tend to prefer many worlds to Bohmian mechanics... it isn't clear that MWI is more complex... but it also isn't clear that it is actually simpler than the alternatives, as it's stated to be in the rationalsphere.
Computationalism is a bad theory of synchronic non-identity (in the sense of "why am I a unique individual, even though I have an identical twin"), because computations are so easy to clone -- computational states are more cloneable than physical states.
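The cloneability point as a toy sketch (the dictionary "mind state" is purely illustrative):

```python
import copy

# A computational state can be duplicated bit for bit; nothing in the
# state itself distinguishes original from copy. (Unknown physical -- in
# particular quantum -- states cannot be cloned like this.)
mind_state = {"memories": ["red", "wine"], "weights": [0.12, 0.98]}
clone = copy.deepcopy(mind_state)

assert clone == mind_state      # computationally identical...
assert clone is not mind_state  # ...yet two tokens of the same type.
# Computationalism alone gives no answer to "which one is the unique individual?"
```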
Computationalism might be a better theory of diachronic identity (in the sense of "why am I still the same person, even though I have physically changed"), since it's abstract, and so avoids the "one atom has changed" problem of naive physicalism. Other abstractions are available, though: "having the same memories" is a traditional alternative to unadulterated computation.
It's still a bad theory of consciousness-qua-awareness (phenomenal consciousness, qualia, hard problem stuff) because, being an abstraction, it has fewer resources than physicalism to explain phenomenal experience. There is no computational theory of qualia whatsoever, no algorithm for seeRed().
It's still an ok explanation of consciousness-qua-function (easy problem stuff), but not obviously the best.
Most importantly: it's still the case that, if you answer one of these four questions, you don't get answers to the other three automatically.
I believe computationalism is a very general way to look at effectively everything,
Computation is an abstraction, and it's not guaranteed to be the best.
This also answers andeslodes’s point around physicalism, as the physicalist ontology is recoverable as a special case of the computationalist ontology
A perfect map has the same structure as the territory, but still is not the territory. The on-the-metalness is lacking. Flight simulators don't fly. You can't grow potatoes in a map, not even a 1:1 one.
...also hears that the largest map considered really useful would be six inches to the mile; although his country had learnt map-making from his host Nation, it had carried it much further, having gone through maps that are six feet to the mile, then six yards to the mile, next a hundred yards to the mile—finally, a mile to the mile (the farmers said that if such a map was to be spread out, it would block out the sun and crops would fail, so the project was abandoned).
https://en.m.wikipedia.org/wiki/Sylvie_and_Bruno
my biggest view on what consciousness actually is, in that it’s essentially a special case of modeling the world, where in order to keep your own body alive, you need to have a model of the body and brain, and that’s what consciousness basically is, a model of ourselves
So... it's nothing to do with qualia/phenomenality/HP stuff? Can't self-modelling and phenomenality be separate questions?
Others say chance is a physical property – a “propensity” of systems to produce certain outcomes. But this feels suspiciously like adding a mysterious force to our physics.[4] When we look closely at physical systems (leaving quantum mechanics aside for now), they often seem deterministic: if you could flip a coin exactly the same way twice, it would land the same way both times.
Don't sideline QM: it's highly relevant. If there are propensities, real probabilities, then they are not mysterious, they are just the way reality works. They might be unnecessary to explain many of our practices of ordinary probabilistic reasoning, but that doesn't make them mysterious in themselves.
If you can give a map-based account of probabilistic reasoning, that's fine as far as it goes... but it doesn't go as far as proving there are no propensities.
This approach aligns perfectly with the rationalist emphasis on “the map is not the territory.”
Whatever that means, it doesn't mean that maps can never correspond to territories. In-the-map does not imply not-in-the-territory. "Can be thought about in a certain way" does not imply "has to be thought about in a certain way".
Like latitude and longitude, chances are helpful coordinates on our mental map, not fundamental properties of reality. When we say there’s a 70% chance of rain, we’re not making claims about mysterious properties in the world.
But you could be partially making claims about the world, since propensities are logically possible... even though there is a layer of subjective, lack-of-knowledge-based uncertainty on top.
(And the fact that there is so much ambiguity between in-the-map probability and in-the-territory probability itself explains why there is so much confusion about QM).
Well, you can regard QM as deterministic, so long as you are willing to embrace nonlocality... but you don't have to.
Although it is worth noting that many theories of quantum mechanics— in particular, Everettian and Bohmian quantum mechanics—are perfectly deterministic.
...only means you can.
The existence of real probabilities is still an open question, and it is not closed by noticing that there is a version of probability/possibility/chance in the mind/map... because that doesn't mean there isn't also a version in the territory/reality.
Bayesianism in particular doesn't mean probability is in the mind in a sense exclusive of being in the territory.
Consider performing a Bayesian experiment in a universe with propensities. You start off with a prior of 0.5, on indifference, that your photons will be spin up. You perform a run of experiments, and 50% of them are spin up. So your posterior is also 0.5... which is also the in-the-territory probability.
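A minimal sketch of that update, assuming a standard Beta-Binomial model (my formalisation; the comment doesn't specify one):

```python
from math import isclose

prior_a, prior_b = 1, 1      # Beta(1, 1): indifference, prior mean 0.5
n, spin_up = 100, 50         # a run of experiments, 50% spin up

# Conjugate update: posterior is Beta(prior_a + successes, prior_b + failures).
post_a, post_b = prior_a + spin_up, prior_b + (n - spin_up)
posterior_mean = post_a / (post_a + post_b)

assert isclose(posterior_mean, 0.5)  # the credence matches the 0.5 propensity
```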
Credences need to be about something, but they don't need to be about propensities. A Bayesian can prove that they have the right credences by winning bets, which is quite possible in a deterministic universe.
ethical, political and religious differences (which i’d mostly not place in the category of ‘priors’, e.g. at least ‘ethics’ is totally separate from priors aka beliefs about what is)
That's rather what I am saying. Although I would include "what is" as opposed to "what appears to be". There may well be a fact/value gap, but there's also an appearance/reality gap. The epistemology you get from the evolutionary argument only goes as far as the apparent. You are not going to die if you have interpreted the underlying nature or reality of a dangerous thing incorrectly -- you should drink water even if you think it's a fundamental element, and you should avoid marshes even if you think fever is caused by bad smells.
are explained by different reasons (some also evolutionary, e.g. i guess it increased survival for not all humans to be the same), so this question is mostly orthogonal / not contradicting that human starting beliefs came from evolution.
But that isn't the point of the OP. The point of the OP is to address an epistemological problem, to show that our priors have some validity, because the evolutionary process that produced them would tend to produce truth-seeking ones. It's epistemically pointless to say that we have some arbitrary starting point of no known validity -- as the already-in-motion argument in fact does.
I don’t understand the next three lines in your comment.
The point is that an evolutionary process depends on feedback from what is directly observable and workable ("a process tuned to achieving directly observable practical results")... and that has limitations. It's not useless, but it doesn't solve every epistemological problem (i.e. "non-obvious theoretical truth").
Truth and usefulness, reality and appearance, are different.
The usefulness cluster of concepts includes the ability to make predictions, as well as to create technology. The truth cluster of concepts involves identification of the causes of perceptions, and offering explanations, not just predictions. The usefulness cluster corresponds to scientific instrumentalism, the truth cluster to scientific realism; likewise, the truth cluster corresponds to epistemological rationalism, the usefulness cluster to instrumental rationalism. Truth is correspondence to reality, which is not identical to the ability to make predictions. One can predict that the sun will rise without knowing what the Sun really is. "Curve fitting" science is adequate to make predictions. Trial and error is adequate to come up with useful technologies. But other means are needed to find the underlying reality. One can't achieve convergence by "just using evidence", because what counts as evidence, and how to interpret it, depends on one's episteme.
A) If priors are formed by an evolutionary process common to all humans, why do they differ so much? Why are there deep ethical, political and religious divides?
B) how can a process tuned to achieving directly observable practical results allow different agents to converge on non-obvious theoretical truth?
These questions answer each other, to a large extent. B -- they can't; A -- that's where the divides come from. Values aren't dictated by facts, and neither are interpretations-of-facts.
The already-in-motion argument is even weaker than the evolutionary argument, because it says nothing about the validity of the episteme you already have... and nothing about the uniformity/divergence between individuals, either.
Observations overwhelming priors needs to account for the divergence as well. But, of course, real agents aren't ideal Bayesians... in particular, they don't have access to every possible hypothesis, and if you've never even thought of a hypothesis, the evidence can't support it in practice. It's as if the unimagined hypotheses -- the overwhelming majority -- have 0 credence.
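To spell out the zero-credence point (a standard consequence of Bayes' theorem, not stated explicitly above): a hypothesis that starts at zero stays at zero, whatever the evidence.

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = 0 \quad \text{whenever } P(H) = 0.$$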
you can only care about what you fully understand
I think I need an operational definition of “care about” to process this
If you define "care about" as "put resources into trying to achieve", there's plenty of evidence that people care about things they can't fully define, and don't fully understand, not least the truth-seeking that happens here.
You can only get from the premise "we can only know our own maps" to the conclusion "we can only care about our own maps" via the minor premise "you can only care about what you fully understand ". That premise is clearly wrong: one can care about unknown reality, just as one can care about the result of a football match that hasn't happened yet. A lot of people do care about reality directionally.
Embedded agents are in the territory. How helpful that is depends on the territory.
you can model the territory under consideration well enough to make the map-territory distinction illusory.
Well, no. A perfect map is still a map. The map-territory distinction does not lie in imperfect representation alone.
To specify the Universe, you only have to specify enough information to pick it out from the landscape of all possible Universes
Of course not. You have to specify the landscape itself, otherwise it's like saying "page 273 of [unspecified book]" .
According to string theory (which is a Universal theory in the sense that it is Turing-complete)
As far as I can see, that is only true in that ST allows Turing machines to exist physically. That's not the kind of Turing completeness you want. You want to know that String Theory is itself Turing computable, not requiring hypercomputation. Or whatever is actually the ultimate physical theory. Because K-complexity doesn't work otherwise. And the computability of physics is far from a given:
https://en.m.wikipedia.org/wiki/Computability_in_Analysis_and_Physics
Note that the fact that a theory might consist of a small number of differential equations is quite irrelevant, because any one equation could be uncomputable.
They are not the same things though. Quantum mechanical measure isn’t actually a head count, like classical measure. The theory doesn’t say that—it’s an extraneous assumption. It might be convenient if it worked that way, but that would be assuming your conclusion.
QM measure isn’t probability—the probability of something occurring or not—because all possible branches occur in MWI.
Another part of the problem stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds. However, these are not necessarily equivalent ethically.
@Dagon
This comes down to a HUGE unknown - what features of reality need to be replicated in another medium in order to result in sufficiently-close results
That's at least two unknowns: what needs to be replicated in order to get the objective functioning; and what needs to be replicated to get the subjective awareness as well.
Which is all just to say -- isn't it much more likely that the problem has been solved, and there are people who are highly confident in the solution because they have verified all the steps that led them there, and they know with high confidence which features need to be replicated to preserve consciousness...
And how do they know that, in terms of the second problem? The final stage would need to be confirmation of subjective awareness. We don't have instruments for that, and it's no good just asking the sim, since a functional duplicate is likely to answer yes, even if it's a zombie.
And that's why it can be argued that consciousness is a uniquely difficult problem, beyond the "non-existent proof".
because "find the correct solution" and "convince people of a solution" are mostly independent problems,
That's not just a theoretical possibility. People, e.g. Dennett, keep claiming to have explained consciousness, and other people keep being unconvinced because they notice they have skipped the hard part.
"That's just saying he hasn't explained some invisible essence of consciousness , equivalent to élan vital".
"Qualia aren't invisible, they are the most obvious thing there is to the person that has them".
Physicalist epiphenomenalism is the only philosophy that is compatible with the autonomy of matter and my experience of consciousness, so it has no competitors as a cosmovision
No, identity theory and illusionism are competitors. And epiphenomenalism is dualism, not physicalism. As I have pointed out before.