Posts

REALM Bimonthly Meeting (1st & 3rd Saturday) 2019-07-30T18:43:52.944Z

Comments

Comment by Lance Bush (lance-bush) on Cornell Meetup · 2021-11-23T23:33:23.305Z · LW · GW

I'm at Cornell. I study psychology and metaethics, but I do not work on AI alignment. I am always happy to provide insights from the areas I study to anyone working on AI alignment, though.

Comment by Lance Bush (lance-bush) on 2020 PhilPapers Survey Results · 2021-11-03T23:03:43.344Z · LW · GW

When it comes to describing moral discourse in general, I endorse semantic pluralism / 'different groups are talking about wildly different things, and in some cases talking about nothing at all, when they use moral language'.

I agree, but this is orthogonal to whether moral realism is true. Questions about moral realism generally concern whether there are stance-independent moral facts. Whether or not there are such facts does not directly depend on the descriptive status of folk moral thought and discourse. Even if it did, it's unclear to me how such an approach would vindicate any substantive account of realism. 

You could call these views "anti-realist" in some senses. In other senses, you could call them realist (as I believe Frank Jackson does).

I'd have to know more about what Jackson's specific position is to address it. 

But ultimately the labels are unimportant; what matters is the actual content of the view, and we should only use the labels if they help with understanding that content, rather than concealing it under a pile of ambiguities and asides.

I agree with all that. I just don't agree that this is diagnostic of debates in metaethics about realism versus antirealism. I don't consider the realist label unhelpful: it has a sufficiently well-understood meaning that its use isn't wildly confused in contemporary debates, and I suspect most people who say they're moral realists endorse a sufficiently similar cluster of views that there's nothing too troubling about using the term as a central distinction in the field. There is certainly wiggle room and quibbling, but there isn't nearly enough actual variation in how philosophers understand realism for it to be plausible that a substantial proportion of realists don't endorse the kinds of views I'm objecting to and claiming are indicative of problems in the field.

I don't know enough about Jackson's position in particular, but I'd be willing to bet I'd include it among those views I consider objectionable.

Comment by Lance Bush (lance-bush) on 2020 PhilPapers Survey Results · 2021-11-03T22:53:59.292Z · LW · GW

I also wanted to add that I am generally receptive to the kind of approach you are taking. My approach to many issues in philosophy is roughly aligned with quietists and draws heavily on identifying cases in which a dispute turns out to be a pseudodispute predicated on imprecision in language or confusion about concepts. More generally, I tend to take a quietist or a "dissolve the problem away" kind of approach. I say this to emphasize that it is generally in my nature to favor the kind of position you're arguing for here, and that I nevertheless think it is off the mark in this particular case. Perhaps the closest analogy I could make would be to theism: there is enough overlap in what theism refers to that the most sensible stance to adopt is atheism.

Comment by Lance Bush (lance-bush) on 2020 PhilPapers Survey Results · 2021-11-03T22:46:24.360Z · LW · GW

I think there's enormous variation in what people mean by "moral realism", enough to make it a mostly useless term


I disagree with this claim, and I don’t think that, even if there were lots of variation in what people meant by moral realism, this would undermine my claim that the large proportion of respondents who favor realism indicates a problem in the profession. The term is not “useless,” and even if it were, I am not talking about the term. I am talking about the actual substantive positions held by philosophers: whatever their conception of “realism,” I am claiming that enough of that 60% endorse indefensible positions that it is a problem.

I have a number of objections to the claim you’re making, but I’d like to be sure I understand your position a little better, in case those objections are misguided. You outline a number of ways we might think of objectivity and subjectivity, but I am not sure what work these distinctions are doing. It is one thing to draw a distinction, or identify a way one might use particular terms, such as “objective” and “subjective.” It is another to provide reasons or evidence to think these particular conceptions of the terms in question are driving the way people responded to the PhilPapers survey.

I’m also a bit puzzled at your focus on the terms “objective” and “subjective.” Did they ask whether morality was objective/subjective in the 2009 or 2020 versions of the survey? 

It would be better to ask questions like 'is it a supernatural miracle that morality exists / that humans happen to endorse the True Morality'

I doubt that such questions would be better. 

Both of these questions are framed in ways that are unconventional with respect to existing positions in metaethics, both are a bit vague, and both are generally hard to interpret. 

For instance, a theist could believe that God exists and that God grounds moral truth, but not think that it is a “supernatural miracle” that morality exists. It's also unclear what it means to say morality "exists." Even a moral antirealist might agree that morality exists. That just isn't typically a way that philosophers, especially those working in metaethics, would talk about putative moral claims or facts.

I’d have similar concerns about the unconventionality of asking about “the True Morality.” I study metaethics, and I’m not entirely sure what this would even mean. What does capitalizing it mean? 

It also seems to conflate questions about the scope and applicability of moral concerns with questions about what makes moral claims true. More importantly, it seems to conflate descriptive claims about the beliefs people happen to hold with metaethical claims, and may arbitrarily restrict morality to humans in ways that would concern respondents.

I don't know how much this should motivate you to update away from what you're proposing here, but I can speak to it directly. My primary area of specialization, and the focus of my dissertation research, concerns the empirical study of folk metaethics (that is, the metaethical positions nonphilosophers hold). In particular, my focus is on the methodology of paradigms designed to assess what people think about the nature of morality. Much of my work focuses on identifying ways in which questions about metaethics could be ambiguous, confusing, or otherwise difficult to interpret (see here). This also extends to a lesser extent to questions about aesthetics (see here). Much of this work focuses on presenting evidence of interpretative variation specifically in how people respond to questions about metaethics. Interpretative variation refers to the degree to which respondents in a study interpret the same set of stimuli differently from one another. I have amassed considerable evidence of interpretative variation in lay populations specifically with respect to how they respond to questions about metaethics.

While I am confident there is interpretative variation in how philosophers responded to the questions in the PhilPapers survey, I'm skeptical that such variation would encompass such radically divergent conceptions of moral realism that the number of respondents who endorsed what I'd consider unobjectionable notions of realism would be anything more than a very small minority.

I say all this to make a point: there may be literally no person on the planet more aware of, and sensitive to, concerns about how people would interpret questions about metaethics. And I am still arguing that you are very likely missing the mark in this particular case.

Comment by Lance Bush (lance-bush) on 2020 PhilPapers Survey Results · 2021-11-03T02:46:43.224Z · LW · GW

The number of moral realists, and especially non-naturalist moral realists, both strike me as indications that there is something wrong with contemporary academic philosophy. It almost seems like philosophers reliably hold one of the less defensible positions across many issues.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-11-02T13:45:47.754Z · LW · GW

I don’t know the answer to these questions. I’m not sure the questions are sufficiently well-specified to be answerable, but I suspect if you rephrased them or we worked towards getting me to understand the questions, I’d just say “I don’t know.” But my not knowing how to answer a question does not give me any more insight into what you mean when you refer to qualia, or what it means to say that things “feel like something.”

I don’t think it means anything to say things “feel like something.” Every conversation I’ve had about this (and I’ve had a lot of them) goes in circles: what are qualia? How things feel. What does that mean? It’s just “what it’s like” to experience them. What does that mean? They just are a certain way, and so on. This is just an endless circle of obscure jargon and self-referential terms, all mutually interdefining one another.

I don’t notice or experience any sense of a gap. I don’t know what gap others are referring to. It sounds like people seem to think there is some characteristic or property their experiences have that can’t be explained. But this seems to me like it could be a kind of inferential error, the way people may have once insisted that there’s something intrinsic about living things that distinguishes them from nonliving things, that living things just couldn’t be composed of conventional matter arranged in certain ways, that they just obviously had something else, some je ne sais quoi.

I suspect if I found myself feeling like there was some kind of inexplicable essence, or je ne sais quoi, to some phenomena, I’d be more inclined to think I was confused than that there really was je ne sais quoiness. I’m not surprised philosophers go in for thinking there are qualia, but I’m surprised that people in the lesswrong community do. Why not think “I’m confused and probably wrong” as a first pass? Why are many people so confident that there is what, as far as I can tell, amounts to something that may be fundamentally incomprehensible, even magical? That is, it’s one thing to purport to have the concept of qualia; it’s another to endorse it. And it sounds not only like you claim to grok the notion of qualia, but to endorse it.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-11-01T21:12:47.955Z · LW · GW

It is a functional difference, but there must be some further (conscious?) reason why we can do so, right?


Do you mean like a causal reason? If so then of course, but that wouldn’t have anything to do with qualia.

Where I want to go with this is that you can distinguish them because they feel different, and that's what qualia refers to.

I have access to the contents of my mental states, and that includes information that allows me to identify and draw distinctions between things, categorize things, label things, and so on. A “feeling” can be cashed out in such terms, and once it is, there’s nothing else to explain, and no other properties or phenomena to refer to. 

I don’t know what work “qualia” is doing here. Of course things feel various ways to me, and of course they feel different. Touching a hot stove doesn’t feel the same as touching a block of ice. 

But I could get a robot that has no qualia, but has temperature-detecting mechanisms, to say something like “I have detected heat in this location and cold in this location and they are different.” I don’t think my ability to distinguish between things is because they “feel” different; rather, I’d say that insofar as I can report that they “feel different” it’s because I can report differences between them. I think the invocation of qualia here is superfluous and may get the explanation backwards: I don’t distinguish things because they feel different; things “feel different” if and only if we can distinguish differences between them.


This "feeling" in qualia, too, could be a functional property.

Then I’m even more puzzled by what you think qualia are. Qualia are, I take it, ineffable, intrinsic qualitative properties of experiences, though depending on what someone is talking about they might include more or fewer features than these. I’m not sure qualia can be “functional” in the relevant sense.

How would you cash out "desire to move my hand away from the object" and "distinguish it from something cold or at least not hot" in functional terms?

I don't know. I just want to know what qualia are. Either people can explain what qualia are or they can’t. My inability to explain something wouldn’t justify saying “therefore, qualia,” so I’m not sure what the purpose of the questions is. I’m sure you don’t intend to invoke “qualia of the gaps,” and to presume that qualia must figure into any situation in which I, personally, am not able to answer a question you've asked.

I'm asking you cash out desire and distinguishing in functional terms, too, and if we keep doing this, do "qualia" come up somewhere?

I don’t know what you think qualia are, so I wouldn’t be able to tell you. People keep invoking this concept, but nobody seems able to offer a substantive explanation of what it is, and why I should think I or anyone else has such things, or why such things would be important or necessary for anything in particular, and so on.

I hope I'm not coming off as stubborn here. I'm very much interested in answering any questions I'm able to answer, I'm just not sure precisely what you're asking me or how I might go about answering it. "What are the functional properties of X?" doesn't strike me as a very clear question.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-11-01T15:15:55.670Z · LW · GW

Most people don't know the word "qualia". Nonetheless, most people will state something equivalent....that they have feelings and seemings that they can't fully describe. So it's a "speaking prose" thing.

There are many reasons why a person might struggle to describe their experiences that wouldn't be due to them having qualia or having some implicit qualia-based theory, especially among laypeople who are not experienced at describing their mental states. It would be difficult to distinguish these other reasons from reasons having to do with qualia.

So I don't agree that what you describe would necessarily be equivalent, and I don't think it would be easy to provide empirical evidence specifically of the notion that people have or think they have qualia, or speak or think in a way best explained by them having qualia.

Even if it could be done, I don't know of any empirical evidence that would support this claim. Maybe there is some. But I don't have a high prior on any empirical investigation into how laypeople think turning out to support your claim, either.

And something like that is implicit in Illusionism. Illusionism attempts to explain away reports of ineffable subjective sensations, reports of qualia like things. If no one had such beliefs, or made such reports, there would be nothing for Illusionism to address.

You know, I think you're right. And I believe the course of this discussion has clarified things for me sufficiently for me to recognize that I do not, strictly speaking, endorse illusionism.

Illusionism could be construed as the conjunction of two claims:

(1) On introspection, people systematically misrepresent their experiential states as having phenomenal properties.

(2) There are no such phenomenal properties.

For instance, Frankish (2016) defines (strong) illusionism as the view that:

“[...] phenomenal consciousness is illusory; experiences do not really have qualitative, ‘what-it’s-like’ properties, whether physical or non-physical” (p. 15)

Like illusionists, I deny that there are phenomenal properties, qualia, what-its-likeness, and so on. In that sense, I deny phenomenal realism (Mandik, 2016). As such, I agree with (2) above. Thus, I agree with the central claim of illusionism, that there are no phenomenal properties, and I deny that there are qualia, or that there’s “what it’s likeness” and so on. However, what I am less comfortable doing is presuming that things seem this way to nonphilosophers, and that they are all systematically subject to some kind of error. In that regard, I do not fully agree with illusionists.

To the extent that illusionists mistakenly suppose that people are subject to an illusion, we could call this meta-illusionism. Mandik distinguishes meta-illusionism from illusionism as follows:

“The gist of meta-illusionism is that it rejects phenomenal realism while also insisting that no one is actually under the illusion that there are so-called phenomenal properties” (pp. 140-141).

Mandik goes on to distance his position from illusionism, in reference to Frankish as follows:

“One thing Frankish and I have in common is that neither of us wants to assert that there are any properties instantiated that are referred to or picked out by the phrase ‘phenomenal properties’. One place where Frankish and I part ways is over whether that phrase is sufficiently meaningful for there to be a worthwhile research programme investigating how it comes to seem to people that their experiences instantiate any such properties. Like Frankish, I’m happy with terms like ‘experience’, ‘consciousness’, and ‘conscious experience’ and join Frankish in using what he calls ‘weak’ and functional construals of such terms. But, unlike Frankish, I see no use at all, not even an illusionist one, for the term ‘phenomenal’ and its ilk. The term ‘phenomenal’, as used in contemporary philosophy of mind, is a technical term. I am aware of no non-technical English word or phrase that is accepted as its direct analogue. Unlike technical terms in maths and physics, which are introduced with explicit definitions, ‘phenomenal’ has no such definition. What we find instead of an explicit definition are other technical terms treated as interchangeable synonyms. Frankish follows common practice in philosophy of mind when he treats ‘phenomenal’ as interchangeable with, for instance, ‘qualitative’ or, in scare-quotes, ‘“feely”’. (p. 141)

I can’t quote the whole article (though it’s short), but he concludes this point by stating that:

“We have then, in place of an explicit definition of ‘phenomenal properties’, a circular chain of interchangeable technical terms — a chain with very few links, and little to relate those links to nontechnical terminology. The circle, then, is vicious. I’m sceptical that any properties seem ‘phenomenal’ to anyone because this vicious circle gives me very little idea what seeming ‘phenomenal’ would be.” (p. 142)

Mandik is not so sure he wants to endorse meta-illusionism, since this might turn on concerns about what it means for something to be an illusion, and because he’s reluctant to state that illusionists are themselves subject to an illusionism. What he proposes instead is qualia quietism, the view that:

“the terms ‘qualia’, ‘phenomenal properties’, etc. lack sufficient content for anything informative to be said in either affirming or denying their existence. Affirming the existence of what? Denying the existence of what? Maintaining as illusory a representation of what? No comment. No comment. No comment” (p. 148)

This is much closer to what I think than illusionism proper. So, in addition to denying that there are qualia, or phenomenal properties, or whatever other set of terminology is used to characterize some putative set of special properties that spell trouble for those of us ill-disposed to believe in such things, I also deny that it seems this way to nonphilosophers. 

My entire academic career has centered on critiquing work in experimental philosophy, and close scrutiny of this and related articles might reveal what I take to be significant methodological problems. Nevertheless, insofar as research has been conducted on whether nonphilosophers take themselves to have phenomenal states, or think about consciousness in the same way philosophers do, at least some of the results indicate that they may not. See here, for instance, Sytsma & Machery (2010):

Abstract: “Do philosophers and ordinary people conceive of subjective experience in the same way? In this article, we argue that they do not and that the philosophical concept of phenomenal consciousness does not coincide with the folk conception. We first offer experimental support for the hypothesis that philosophers and ordinary people conceive of subjective experience in markedly different ways. We then explore experimentally the folk conception, proposing that for the folk, subjective experience is closely linked to valence. We conclude by considering the implications of our findings for a central issue in the philosophy of mind, the hard problem of consciousness.”

I doubt this one study is definitive evidence one way or the other. What I will say, though, is that whether people think of consciousness the way philosophers do is an empirical question. I suspect they don’t, and absent any good reasons to think that they do, I’m not inclined to accept without argument that they do.

Trying to attack qualia from every possible angle is rather self-defeating. For instance, if you literally don't know what "qualia" means, you can't report that you have none. And if no one even seems to have qualia, there is nothing for Illusionism to do. And so on.

I disagree. You can claim to both not know what something means, and claim to not have the thing in question. 

In some cases, you might not know what something means because you’re ignorant of what is meant by the concept in question. For instance, someone might use the term “zown zair” to refer to brown hair. I might not know this, even if I do have brown hair. In that case, I would not know what they mean, even though I do have brown hair. It would be a mistake for me to think that because I don’t know what they mean, I don’t have “zown zair.” And it would be foolish to insist both that I have no “zown zair” and that “zown zair” is meaningless. I would simply have failed to find out what they were referring to with the term.

But this is not the case with qualia. I am not merely claiming that I don’t understand the concept. I am claiming that nobody understands the concept, because it is fundamentally confused and meaningless.

This is especially the case when one is responding, over an extended period of time, to a host of people who are incapable of explaining the putative concept in a way that isn’t circular or vacuous.

In the course of an exchange, people may employ a concept. They might say that, e.g. some objects have the property A. Yet when asked to explain what A is, they are unable to do so, or they provide unsatisfactory attempts. For instance, they might point to several objects, and say “all these objects have property A.” This is what was done earlier in this thread: I was given examples, as though this was independently helpful in understanding the concept. It’s not. If I pointed to a truck, a flock of geese, and a math textbook and said “these all have property A,” you wouldn’t be much closer to knowing what I was talking about. In other cases, they might use metaphors. But the metaphors may be unilluminating. In still other cases, they might appeal to other terms or concepts. Yet these terms or concepts might themselves be obscure or poorly defined, and if one asks for clarification, one begins the journey through an endless loop of mutual interdefinitions that never get you anywhere.

In such cases, it can become apparent that a person’s concepts are circular and self-referential, and don’t really describe anything about the way the world is. They might define A in terms of B, B in terms of C, and C in terms of A. And they might insist that A is a property we all have.

When numerous people all claim that we have property A, but they cannot define it, one may reasonably wonder whether all of these people are confused or mistaken. That is, one might conclude that property A is a pseudoconcept, something vague and meaningless.

In such cases, I am fine saying both that 

(a) I don’t have property A

(b) I don’t know what people referring to property A are talking about

I can believe (a) because the property is meaningless: I don’t have meaningless properties. And I can conclude (b) because the concept is meaningless: I can’t understand a meaningless concept, because there isn’t anything to understand.

Maybe that’s an awkward way of framing why one would reject circular concepts that ascribe meaningless properties to people, in which case I’d be happy to revise the way I frame my rejection of qualia.

But then, why insist that you are right? If you have something like colour blindness, then why insist that everyone else is deluded when they report colours?

There are very good reasons to think people can see colors, and one would have such reasons even if they were colorblind. We can point to the physical mechanisms involved in color detection, the properties of light, and so on. We can point to specific color words in our and other languages, and it would be fairly easy to determine that nonphilosophers can see colors. I don’t think any of these conditions apply to qualia. So, first, there's that.

To emphasize just the last of these, I don’t think “everyone else” is deluded. I think philosophers are deluded, and that people who encounter the work of these philosophers often become deluded as well. I don’t think the notion of qualia is a psychological mistake so much as it is an intellectual mistake only a subset of people make.

I suspect such mistakes are endemic to philosophy. The same thing has occurred, to an alarming extent, in contemporary metaethics. Moral realists frequently invoke the notion of decisive or external reasons, irreducible normativity, categorical imperatives, stance-independent normative and evaluative facts, and so on. I reject all of these concepts as fundamentally confused. And yet philosophers like Parfit, Huemer, Cuneo, and others have not only tangled themselves into knots of confusion, their work has trickled out into the broader culture. I routinely encounter people who have come across their work claiming to “have” concepts that they are incapable of expressing. And these philosophers, when pressed, will fall back on claiming that the concepts in question are “brute” or “primitive” or “unanalyzable,” which is to say, they can’t give an account of them, and don’t think that they need to. Maybe they do "have" these concepts, but since I am very confident we can explain everything there is to know about the way the world is without invoking them, I suspect they're vacuous nonsense, and that these philosophers are uniformly confused.

And, like the notion of qualia, philosophers have for a long time presumed that ordinary people tend to be moral realists (see e.g. Sinclair, 2012). My own academic work specifically focuses on this question. And like the question of what people think about consciousness, this, too, is an empirical question. So far, little empirical evidence supports the conclusion that ordinary people tend to be moral realists, or at least that they tend to be consistently and uniformly committed to some kind of moral realism. By and large, they struggle to understand what they are being asked (Bush & Moss, 2020). I suspect, instead, that something like Gill’s (2009) indeterminacy-variability thesis is much more likely: that people have variable but (I suspect mostly) indeterminate metaethical standards.

The same may turn out to be the case for the only other issue I looked into: free will. This has led me, in my own work, to point towards the broader possibility that many of the positions philosophers purport to be intuitive, and that they claim are widespread among nonphilosophers, simply aren’t. Rather, I suspect that philosophers are over-intellectualizing some initial pool of considerations, then generating theories that are neither implicitly nor explicitly part of the way ordinary people speak or think.

I don’t think this is a situation where I am color blind, while others have color vision. Rather, it’s more like recognizing that many of the people around you are subject to a collective, and contagious, hallucination. So I suspect, instead, that I have come to recognize over time that academic philosophy has played an alarming role in duping large numbers of people into a wide range of confusions, then duped them further by convincing them that these confusions are shared by nonphilosophers.

References

Bush, L. S., & Moss, D. (2020). Misunderstanding metaethics: Difficulties measuring folk objectivism and relativism. Diametros, 17(64), 6-21.

Frankish, K. (2016). Illusionism as a theory of consciousness. Journal of Consciousness Studies, 23(11-12), 11-39.

Gill, M. B. (2009). Indeterminacy and variability in meta-ethics. Philosophical Studies, 145(2), 215-234.

Mandik, P. (2016). Meta-illusionism and qualia quietism. Journal of Consciousness Studies, 23(11-12), 140-148.

Sinclair, N. (2012). Moral realism, face-values and presumptions. Analytic Philosophy, 53(2), 158-179.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-11-01T14:24:42.490Z · LW · GW

Hmm, I'm not sure it's vacuous, since it's not like they're applying "redness" to only one thing; redness is a common feature of many different experiences. 14 could have "sevenness", too.

One can apply a vacuous term to multiple things, so pointing out that you could apply the term to more than one thing does not seem to me to indicate that it isn't vacuous. I could even stipulate a concept that is vacuous by design: "smorf", which doesn't mean anything, and then I can say something like "potatoes are smorf." 

Maybe we can think of examples of different experiences where it's hard to come up with distinguishing functional properties, but you can still distinguish the experiences? 

The ability to distinguish the experiences in a way you can report on would be at least one functional difference, so this doesn't seem to me like it would demonstrate much of anything. 

Some of the questions you ask seem a bit obscure, like how I can tell something is hotter. Are you asking for a physiological explanation? Or the cognitive mechanisms involved? If so, I don't know, but I'm not sure what that would have to do with qualia. But maybe I'm not understanding the question, and I'm not sure how that could get me any closer to understanding what qualia are supposed to be.

What would be basic functional properties that you wouldn't cash out further?

I don't know. Likewise for most of the questions you ask. "What are the functional properties of X?" questions are very strange to me. I am not quite sure what I am being asked, or how I might answer, or if I'm supposed to be able to answer. Maybe you could help me out here, because I'd like to answer any questions I'm capable of answering, but I'm not sure what to do with these.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-11-01T02:51:31.587Z · LW · GW

Can you be aware, not only of your sensations, but of the sensation of having those sensations?


I'm not sure. I have sensations, but I don't know what a sensation of a sensation would be. 

Can you have thoughts, and be aware of having those thoughts?

Sure, but that just sounds like metacognition, and that doesn't strike me as being identical with or indicative of having qualia. I can know that I know things, for instance. 

And be aware of having these awarenesses?

I would describe this as third-order metacognition, or recursive cognition, or something like that. And yes, I can do that. I can think that Sam thinks that I think that he lied, for instance. Or I can know that my leg hurts and then think about the fact that I know that my leg hurts.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-11-01T02:45:26.598Z · LW · GW

I don't think I can replicate exactly the kinds of ways people framed the questions. But they might do something like this: they'd show me a red object. They'd ask me "What color is this?" I say red. Then they'd try to extract from me an appreciation for the red "being a certain way" independent of, e.g., my disposition to identify the object as red, or my attitudes about red as a color, and so on. Nothing about "seeing red" indicates to me that there is a "what it's like" to seeing red. I am simply ... seeing red. Like, I can report that fact, and talk about it, and say things like "it isn't blue" and "it is the same color as a typical apple" and such, but there's nothing else. There's no "what it's likeness" for me, or, if there is, I'm not able to detect and report on it. The most common way people will frame this is to try to get me to agree that the red has a certain "redness" to it. That chocolate is "chocolatey," and so on.

I can be in an entire room of people insisting that red has the property of "redness" and that chocolate is "chocolatey" and so on, and they all nod and agree that our experiences have these intrinsic what-it's-likeness properties. This seems to be what people are talking about when they talk about qualia. To me, this makes no sense at all. It's like saying seven has the property of "sevenness." That seems vacuous to me.

I can look at something like Dennett's account: that people report experiences as having some kind of intrinsic nonrelational properties that are ineffable and immediately apprehensible. I can understand all those words in combination, but I don't see how anyone could access such a thing (if that's what qualia are supposed to be), and I don't think I do. 

It may be that I am something akin to a native functionalist. I don't know. But part of the reason I was drawn to Dennett's views is that they are literally the only views that have ever made any sense to me. Everything else seems like gibberish.

Or would you say something like "I know it's hot, but I don't feel it's hot"?

Well, I would cash out "feeling hot" in functional terms: that I feel a desire to move my hand away from the object, that I can distinguish it from something cold, or at least not hot, and so on. There doesn't seem to me to be anything more to touching a hot thing than its relational properties and the functional role it plays in relation to my behavior and the rest of my thoughts. What else would there be? It does seem to me that people who think there are qualia think there's something else. They certainly seem insistent that there is after I describe my experience.

Are you saying that what you're reporting is only the verbal inner thought you had that it's hot, and that happened without any conscious mental trigger?

No, I think I have a conscious mental trigger, and I can and do say things like "that feels hot." I respond to hot things in normal ways, can report on those responses, and so on. I can certainly distinguish hot from cold without having to say anything, but I'm not sure what else you might be going for, and all of that seems like something you could get a robot to do that I don't think anyone would say "has qualia." But this is a very superficial pass at everything that would be going on if I touched something hot and reacted to it. So, it might be something we'd need to dig into more.

Doesn't your inner monologue also sound like something? 

Nobody ever asked me that. That's an awesome question. I think that no, it does not sound like anything. It's in English, and it's "my voice," but it doesn't "sound like" my actual speaking voice.

More generally, the contents of your mental states are richer than the ones you report on symbolically (verbally or otherwise) to yourself or others, right? 

Yes. 

Isn't this perceptual richness what people mean by qualia?

I don't think that it is. It sounds a bit like you're gesturing towards Block's notion of access consciousness. I'm not sure, though.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-31T13:45:11.472Z · LW · GW

I forgot to add a reference to the Robbins and Jack citation above. Here it is: 

Robbins, P., & Jack, A. I. (2006). The phenomenal stance. Philosophical studies, 127(1), 59-85.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-31T02:56:17.398Z · LW · GW

No worries, it's not a gotcha at all, and I already have some thoughts about this. 

I was more interested in this topic back about seven or eight years ago, when I was actually studying it. I moved on to psychology and metaethics, and haven't been actively reading about this stuff since about 2014.

I'm not sure it'd be ideal to try to dredge all that up, but I can roughly point towards something like Robbins and Jack (2006) as an example of the kind of research I'd employ to develop a type of debunking explanation for qualia intuitions. I am not necessarily claiming their specific account is correct, or rigorous, or sufficient all on its own, but it points to the kind of work cognitive scientists and philosophers could do that is at least in the ballpark.

Roughly, they attempt to offer an empirical explanation for the persistence of the explanatory gap (the problem of accounting for consciousness by appeal to physical, or at least nonconscious, phenomena). Its persistence could be due to quirks in the way human cognition works. If so, it may be difficult to dispel certain kinds of introspective illusions.

Roughly, suppose we have multiple, distinct "mapping systems" that each independently operate to populate their own maps of the territory. Each of these systems evolved and currently functions to facilitate adaptive behavior. However, we may discover that when we go to formulate comprehensive and rigorous theories about how the world is, these maps seem to provide us with conflicting or confusing information.

Suppose one of these mapping systems was a "physical stuff" map. It populates our world with objects, and we have the overwhelming impression that there is "physical stuff" out there, that we can detect using our senses.

But suppose we also have an "important agents that I need to treat well" system, which detects and highlights certain agents within the world whom it would be important to treat appropriately: a kind of "VIP agency mapping system" that recruits a host of appropriate functional responses, such as emotional reactions, adopting the intentional stance, cheater-detection systems, and so on.

On reflecting on the first system, we might come to form the view that the external world really is just this stuff described by physics, whatever that is. And that includes the VIP agents we interact with: they're bags of meat! But this butts up against the overwhelming impression that they just couldn't be. They must be more than just bags of meat. They have feelings! We may find ourselves incapable of shaking this impression, no matter how much of a reductionist or naturalist or whatever we might like to be.

What could be going on here is simply the inability of these two mapping systems to adequately talk to one another. We are host to divided minds with balkanized mapping systems, and may find that we simply cannot grok some of the concepts contained in one of our mapping systems in terms of the other. You might call this something like an "internal failure to grok." It isn't that, say, I cannot grok some other person's concepts, but that some of the cognitive systems I possess cannot grok each other.

You might call this something like "conceptual incommensurability." And if we're stuck with a cognitive architecture like this, certain intuitions may seem incorrigible, even if we could come up with a good model, based on solid evidence, that would explain why things would seem this way to us, without us having to suppose that it is that way.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-31T02:33:24.458Z · LW · GW

Thanks for clarifying. Not all statistical claims in, e.g., psychology are intended to generalize to most people, so I didn't want to assume you meant most people.

If the claim is that most people have a concept of qualia, that may be true, but I'm not confident that it is. That seems like an empirical question it'd be worth looking into.

Either way, I wouldn't be terribly surprised if most people had the concept, or (I think more likely) could readily acquire it on minimal introspection (though on my view I'd say that people are either duped or readily able to be duped into thinking they have the concept).

I don't know if I am different, or if so, why. It's possible I do have the concept but don't recognize it, or am deceiving myself somehow. 

It's also possible I am somehow atypical neurologically. I went into philosophy precisely because I consistently found that I either didn't have intuitions about conventional philosophical cases at all (e.g., Gettier problems), or had nonstandard or less common views (e.g. illusionism, normative antirealism, utilitarianism). That led me to study intuitions, the psychological underpinnings of philosophical thought, and a host of related topics. So there is no coincidence in my presenting the views expressed here. I got into these topics because everyone else struck me as having bizarre views.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T21:48:30.742Z · LW · GW

Like, Lance, do you not feel like you experience that things seem ways?

I don't know what that means, so I'm not sure. What would it mean for something to seem a certain way?

Or just that they don't seem to be ways in ways that seem robustly meaningful or something?

I don't think it's this. It's more that when people try to push me to have qualia intuitions, I can introspect, report on the contents of my mental states, and then they want me to locate something extra. But there never is anything extra, and they can never explain what they're talking about, other than to use examples that don't help me at all, or metaphors that I don't understand. Nobody seems capable of directly explaining what they mean. And when pressed, they insist that the concept in question is "unanalyzable" or inexplicable or otherwise maintain that they cannot explain it. 

Despite his fame, the majority of students I encountered in Dennett's courses do not accept his views at all, and take qualia quite seriously. I had conversations that would last well over an hour in which one or more of them would try to get me to grok what they're talking about, and they never succeeded. I've had people make the following kinds of claims:

(1) I am pretending to not get it so that I can signal my intellectual unconventionality.

(2) I do get it, but I don't realize that I get it.

(3) I may be neurologically atypical.

(4) I am too "caught in the grip" of a philosophical theory, and this has rendered me unable to get it.

One or more of these could be true, but I'm not sure how I'd find out, or what I might do about it if I did. Yet I am strangely drawn to a much more disturbing possibility, one that an outside view would suggest is pretty unlikely:

(5) all of these people are confused, qualia is a pseudoconcept, and the whole discussion predicated on it is fundamentally misguided.

I find myself drawn to this view, in spite of it entailing that a majority of people in academic philosophy, or who encounter it, are deeply mistaken.

I should note, though, that I specialize in metaethics in particular. Most moral philosophers are moral realists (about 60%) and I consider every version of moral realism I'm familiar with to be obviously confused, mistaken, or trivial in ways so transparent that I do think I am justified in thinking that, on this particular issue, most moral philosophers really are mistaken. 

Given my confidence about moral realism, I'm not at all convinced that philosophers generally have things well-sorted on consciousness.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T21:36:33.938Z · LW · GW

See above; I posted a link to a recent study. There hasn't been much work on this. While my views may be atypical, so too might the views popular among contemporary analytic philosophers. A commitment to the notion that there is a legitimate hard problem of consciousness, that we "have qualia," and so on might all be idiosyncrasies of the specific way philosophers think, and may even result from unique historical contingencies, such that, were there many more philosophers like Quine and Dennett in the field, such views might not be so popular.

Some philosophical positions seem to rise and fall over time. Moral realism was less popular a few decades ago, but has enjoyed a recent resurgence, for instance. This suggests that the perspectives of philosophers might result in part from trends or fashions distinctive of particular points in time.


Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T21:32:08.766Z · LW · GW

But the qualiaphilic claim is typical, statistically. 


Typical of whom?

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T21:31:27.926Z · LW · GW

I'm not sure how to answer the first question. I'm sure my introspection revealed all manner of things over the course of years, and I'm also not sure what level of specificity you are going for. I don't want to evade actually reporting on the contents of my mental states, so perhaps a more specific question would help me form a useful response.

I may very well not have even the illusion of phenomenal consciousness, but I'm not sure I am alone in lacking it. While it remains an open empirical question, and I can’t vouch for the methodological rigor of any particular study, there is some empirical research on whether or not nonphilosophers are inclined towards thinking there is a hard problem of consciousness:

https://www.ingentaconnect.com/content/imp/jcs/2021/00000028/f0020003/art00002

It may be that notions of qualia, and the kinds of views that predominate among academic philosophers are outliers that don’t represent how other people think about these issues, if they think about them at all.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T19:23:13.886Z · LW · GW

I have introspected and it has not resulted in acquaintance with qualia.

I believe people can introspect and then draw mistaken conclusions about the nature of their experiences, and that qualia is a good candidate for one of these mistaken conclusions.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T18:50:10.269Z · LW · GW

Is "unmarried man" a mere stand-in for "bachelor"?


In some cases, but not others. One can reasonably ask whether the Pope is a bachelor, but for the purposes of technical philosophical work one might treat "unmarried man" and "bachelor" as identical in the context of a given discussion.

They are ways of gesturing towards your own experience. If you refuse to introspect you are not going to get it.

I can understand if someone who doesn’t know me or my educational background might think that I just haven’t thought about the topic of qualia enough, or that I am refusing to introspect about it, but that isn’t the case. This isn’t a topic I’ve thought about only casually; it is relevant to my work.

That being said, I have introspected, and I have come to the conclusion that there isn’t anything to get with respect to qualia. Nothing about my introspection gives me any insight into what you or others mean by qualia. Instead, I have concluded that the notion of qualia that has trickled out from academic philosophy is most likely a conceptual confusion enshrining the kinds of introspective errors Dennett and others argue that people are prone to make.

Me.

Okay, thanks. I apologize for having had to ask but you provided a paragraph in quotation with no attribution, and it was difficult for me to interpret what that meant.

The phenomenal properties you mentioned... those are qualia. You have the concept, because you need the concept to say it's illusory.

I have a kind of meta-concept: that other people have a concept of qualia but I myself am not personally acquainted with them, and would not say that I have the concept. One does not need to personally be subject to an illusion to believe that others are.

I know that other people purport to have a notion of qualia, but I do not. Thinking other people have mistaken or confused concepts does not require that one have the concept in the sense of possessing or understanding it. In other words, other people might tell me that there's, e.g., "something it's like" to see red or taste chocolate that somehow defies explanation, is private, is inaccessible, and so on. But I myself do not have such experiences. In such cases, I think people are simply confused, and that this confusion can result, in the case of belief in qualia, in the development of pseudoconcepts.

This isn't the only case where I think this could or does occur. If people insisted they had a concept that was unintelligible or self-contradictory, such as a "colorless color," or insisted that something could be "intrinsically north-facing," I could hold that they are mistaken in thinking they have such concepts, and maintain that I don't "have the concept," in that I am not actually capable of personally entertaining the notion of colorless colors or intrinsically north-facing objects.

In fact, this is exactly my position on non-naturalist moral realism: I regard the notion of stance-independent moral facts as unintelligible. I can talk about "stance-independent moral facts" as a concept other people purport to "have," in the sense of understanding it, without understanding it myself. That is, I don't actually have the concepts non-natural moral realists purport to have, while still regarding the people who hold such views as subject to an intellectual or experiential error of some kind.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T17:01:36.677Z · LW · GW

Satisfactory for whom? I use examples because they are sufficient to get the point across to people who aren't too biased.


It’s not satisfactory to me. Does this mean I am “too biased?” That seems like a potentially unjustified presumption to make, and not a fair way to have a discussion with others who might disagree with you.

Anyone could offer a definition, state in advance that anyone who doesn't accept it is "too biased," and then, when someone says they don't accept it, say "see, I told you so," even if an unbiased person would judge the definition to be inadequate.

In any case, I’m not making a selective demand for rigor. Even if I were, I’d probably just shrug and raise the challenge, anyway. I don’t know what people talking about qualia are talking about. But I am also pretty confident they don’t know what they are talking about. I suspect qualia is a pseudoconcept invented by philosophers, and that to the extent that we adequately characterize it, it faces pretty serious challenges. 

Where are the calls for rigourous definitions of "matter", "computation", etc?

The main person I discuss illusionism and consciousness with specializes in philosophy of computation and philosophy of science, with an emphasis on broad metaphysical questions. We both endorse illusionism, and have for years, so there's little to say there. Instead, we mostly discuss their views on computation and metaphysics, and I'm often asked to read their papers on these topics. So, in the past few years, I have read significantly more work on what computers and matter are than I have on consciousness.

Thus, ironically, I have more discussions about rigorous attempts to define computers and features of the external world than I do about consciousness. So if you think that, in denying qualia, I am somehow failing to apply a similar degree of rigor as I do to other ideas, you could not have picked worse examples. It is not the case that I’m especially tough on the notion of qualia.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T16:17:17.804Z · LW · GW

Unfortunately, I don’t think the account of qualia you’ve presented is adequate.

First, I don’t know what is meant by “perceived sensation” of the pain of a headache. This could be cashed out in functional terms that don’t make appeal to what I am very confident philosophers are typically referring to when they refer to qualia. So this strikes me as a kind of veiled way of just using another word or phrase (in this case, “perceived sensation”) as a stand-in for “qualia,” rather than a definition. It’s a bit like saying the definition of morality is that it is “about ethics.”

I’m likewise at a loss about the second part of this. What is the qualitative character of a sensation? What does it mean to say that you’re referring to “what it is directly like to be experiencing” rather than a belief about experiences? Again, these just seem like roundabout ways of gesturing towards something that remains so underspecified that I still don’t know what people are talking about. 

Whereas illusionism is almost impossible to define coherently.

Illusionism holds that our introspections about the nature of our conscious experiences are systematically mistaken in particular ways that induce people to hold the incorrect belief that our experiences have phenomenal properties.

I think this is a coherent position, and I’m reasonably confident it comports with how Dennett and Frankish would characterize it.

Where is that quote from? It seems to imply that all mental states are either propositional attitudes or perceptions. If so, that doesn't seem right to me. Also, the complaint primarily seems to be with the name "illusionism." I'm happy to call it delusionism. If we do that, do they still have an objection? If so, I'm not quite sure what the objection is.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T05:40:30.860Z · LW · GW

I'm not sure I understand. What do you mean when you say it was coined to "describe a particular thing that humans experience"? Or maybe, to put this another way: at least in this conversation, what are you referring to with the term "qualia"?

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-29T22:01:09.067Z · LW · GW

Yudkowsky specifically using the term is a good reason. Thanks for pointing that out, and now I feel a little silly for asking. He says, "I mean qualia, yes." You can't get more blunt than that.

While I agree that qualia is less ambiguous than other terms, I am still not sure it is sufficiently unambiguous. I don’t know what you mean by the term, for instance. Generally, though, I would say that I think consciousness exists, but that qualia do not exist.

I think illusionism does offer an account of consciousness; it’s just that consciousness turns out not to be what some people thought that it was. Personally, I don’t have and apparently have never had qualia intuitions, and thus never struggled with accepting Dennett’s views. This might be unusual, but the only view I ever recall holding on the matter was something like Dennett’s. His views immediately resonated with me and I adopted them the moment I heard them, with something similar to a “wow, this is obviously how it is!” response, and bewilderment that anyone could think otherwise.

I’m glad we agree most alternatives are poor. I do happen to agree that this isn’t especially good evidence against the plausibility of some compelling alternative to illusionism emerging. I definitely think that’s a very real possibility. But I do not think it is going to come out of the intuition-mongering methodology many philosophers rely on. I also agree that this is probably due to the difficulty of coming up with alternative models. Seems like we’re largely in agreement here, in that case.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-29T21:54:21.929Z · LW · GW

The possibility of lacking an internal monologue is a distressing one to me. I run a constant inner monologue, and can't imagine thinking differently. There may be some sense in which people who lack an inner monologue lack certain features of consciousness that others who have one possess.

Part of the issue here is to avoid thinking of consciousness either as a discrete capacity one has or doesn't have, or as existing on a continuum, such that one could have "more" or "less" of it. Instead, I think of "consciousness" as a term we use to describe a set of qualitatively and quantitatively distinct capacities. It'd be a bit like talking about "cooking skills." If someone doesn't know how to use a knife, or start a fire, do they "lack cooking skills"? Well, they lack a particular cooking skill, but there is no single answer as to whether they "lack cooking skills," because cooking skills break down into numerous subskills, each of which may be characterized by its own continuum along which a person could be better or worse. Maybe a person doesn't know how to start a fire, but they can bake amazing cakes if you give them an oven and the right ingredients.

This is why I am wary of saying that animals are “not conscious” and would instead say that whatever their “consciousness” is like, it would be very different from ours, if they lack a self-model and if a self-model is as central to our experiences as I think it is.

As for someone who lacks an inner monologue, I am not sure what to make of these cases. And I'm not sure whether I'd want to say someone without an inner monologue "isn't conscious," as that seems a bit strange. Rather, I think I'd say that they may lack a feature of the kinds of consciousness most of us have that strikes me, at first glance, as fairly central and important. But perhaps it isn't. I'd have to think more about whether an enculturated construction of a self-model requires an inner monologue. I do think it probably requires exposure to language, at least in practice, for humans (I don't think an AI would have to proceed through the same developmental stages as humans would to become conscious; and, of course, in principle you could print out an adult human brain, which could be conscious without ever having been subjected to childhood enculturation).

However, once the relevant concepts and structures have been "downloaded," this may not require a very specific type of phenomenology. Maybe it does, but at the very least, we could point to substantial overlap in many of the functional outputs of people who lack inner monologues and those of us who do, outputs that we would not observe in animals. People who lack inner monologues can still speak meaningfully about themselves in the past, make plans for the future, talk about themselves as agents operating within the world, employ theory of mind, would probably report that they are conscious, could describe their phenomenal experiences, and so on. In other words, there would be substantial functional overlap in the way they spoke, thought, and behaved, with only a few notable differences in how they describe their phenomenology. At least, I am supposing all this is the case. Maybe they are different in other ways, and if I knew about those ways, and really thought about this, it might have really disturbing implications. But I doubt that will turn out to be the case.

This reminds me of an idea for a science fiction novel. I don't know where it came from, and I'm not sure I was the first to think of a scenario like this:

Suppose we discovered that some subset of the population definitely did not have conscious experiences, and that the rest of us did. And suppose we had some reliable test for determining who was or was not conscious. It was easy to administer, and we quickly found that our spouses, children, parents, closest friends, and so on, were not conscious at all. Such people were simply automata. There were no lights on inside. In short: they simply had no qualia at all. 

How would society react? What would people do? One could imagine a story like this addressing both interpersonal relationships, and the broader, societal-scale implications of such discoveries. I hope someone can take that idea and run with it, and turn it into something worth reading or watching.

Comment by Lance Bush (lance-bush) on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-29T13:36:46.123Z · LW · GW

I suspect I endorse something like what Yudkowsky seems to be claiming. Essentially, I think that humans are uniquely disposed (at least among life on earth) to develop a kind of self-model, and that nonhuman animals lack the same kind of systems that we have. As a result, whatever type of consciousness they have, I think it is radically unlike what we have. I don’t know what moral value, if any, I would assign to nonhuman animals were I to know more about their mental lives or what type of “consciousness” they have, but I am confident that the current high level of confidence people have that animals have rich conscious experiences is not justified. I wrote an old comment on this I’ve shared a couple of times. Here it is:

I think that what we take to be our conscious experience involves a capacity for "checking in" on an ongoing internal narrative, or story that we are constantly "telling ourselves" that functions to provide a unified timeline which we can utilize, report on, and talk about with others. I think this "narrative center of gravity" requires a degree of cultural input, and the inculcation of specific memes/concepts that lead us to form a sense of a self that integrates our experiences and that can think about "our" past experiences and "our" future experiences. In a sense, I think that conscious experience is built up as a sort of software that we have the hardware to develop, but requires a degree of developmental and cultural input to become fully operational. I don't think animals have or need this capacity. As such, what it is like to be us is something we can talk about, but I am not convinced that there is anything it is "like" to be an animal.

This is a largely Dennettian view of consciousness, and I believe he coined or at least used the term “narrative center of gravity.”

You identify consciousness with having qualia.

However, I don’t know what you mean by qualia. While it remains sensible to me to attribute something like consciousness to humans, I would typically deny that we “have qualia” and would not define consciousness in terms of having qualia. Were others to do so, I’d deny we have that form of qualia. Perhaps Yudkowsky would, too. It really depends on what one means by “consciousness” and “qualia.”

I don’t know exactly what Yudkowsky thinks, so I wouldn’t put a number on it as you do (i.e., 15%). But, I’ll put it this way: I don’t know of any alternatives to something like Dennett/Frankish on illusionism that seem individually more plausible than illusionism. I don’t know if the collective weight of plausibility for all competing hypotheses is enough to push illusionism below 50%, but I don’t think so. So, while I am not overwhelmingly confident that something that seems roughly in the ballpark (if not very similar) to Yudkowsky’s view is correct, I have yet to see any viable alternatives. Most seem weird and to not capture what strike me as important elements of consciousness, or they seem to appeal to intuitions I don’t have and don’t trust in others.

Comment by Lance Bush (lance-bush) on Frankfurt Declaration on the Cambridge Declaration on Consciousness · 2021-10-24T22:23:35.905Z · LW · GW

An eliminativist might say no, but insofar as I would say "yes," I would be wary of thinking that we can confidently infer what the experiences (if any) of any particular animal species are like based on what strike me as premature and weak forms of evidence.

Comment by Lance Bush (lance-bush) on Frankfurt Declaration on the Cambridge Declaration on Consciousness · 2021-10-24T22:21:25.511Z · LW · GW

I've seen people cite this declaration and appeal to it as authoritative on numerous occasions. Unfortunately, it is presented as though it represents some kind of expert consensus, when the people involved may not even be experts on the topic, and hardly represent any kind of consensus. 

Yet while I would want to suggest that we consult philosophers of mind, even they seem drawn to what strike me as dubious, intuition-based reasons for presuming that animals are conscious in ways analogous to humans, with standards for drawing such inferences that strike me as far too low. 

My impression early on is that something like Dennett's approach was likely to win out eventually, even if Dennett's early account was superseded by more comprehensive and accurate accounts. The resistance to Dennett's views, which often consists of dismissive, uncharitable caricatures, is also quite alarming.

I'd check out Dennett's paper, "Animal Consciousness: What Matters and Why":

https://www.jstor.org/stable/40971115?seq=1#metadata_info_tab_contents 

Comment by Lance Bush (lance-bush) on How can one train philosophical skill? · 2021-10-01T05:14:51.323Z · LW · GW

I'm worried that competitive debating and argumentation could lead to developing some negative habits. 

The ability to adopt a scout mindset, listen to and process opposing views, be receptive to criticism, engage in counterfactual thinking effectively, know how to handle thought experiments, employ intuition and other tools characteristic of philosophy judiciously, and switch between levels of complexity in speech for different audiences (e.g., avoiding technical jargon with non-specialists, using examples that resonate with the audience) are all skills that can operate well both within and outside an argumentative context. 

While being good at arguing may be the most central skill to cultivate, the specifics are going to matter!

Comment by Lance Bush (lance-bush) on How can one train philosophical skill? · 2021-10-01T05:10:39.269Z · LW · GW

Direct and immediate feedback is a good idea. Given your first suggestion, is the assumption that philosophers would attempt to address everything? Many are going to be specializing, and may not have thought much about many central questions. I can tell you a lot about metaethics. I can't tell you much about metaphysics. Maybe that's a mistake, but given the current incentive system and the way academic fields are set up, it'd be hard not to narrowly focus in this way.

Comment by Lance Bush (lance-bush) on How can one train philosophical skill? · 2021-10-01T05:08:57.158Z · LW · GW

I'm absolutely on board with this, though I doubt it would help much with the kinds of work I do in either field (philosophy or psychology). Then again, maybe it would: in the last few years I had no choice but to pick up R, and I think I'd be quite a bit further along if I'd taken computer science courses earlier. Even so, there's still a lot of low-hanging fruit available to people who work in philosophy or philosophy-adjacent fields that probably wouldn't benefit all that much from understanding computer science. That's not to say it might not help in some way, but I'm not sure exactly how, or whether it would be of high value compared to studying other topics.

Comment by Lance Bush (lance-bush) on How can one train philosophical skill? · 2021-10-01T05:05:30.937Z · LW · GW

Very little about the courses I took in philosophy was directly about how to think better. The courses were much more focused on understanding what some thinker said for the sake of doing so. If the purpose of these courses was to improve critical thinking, I don't think I benefited much, and it's a strange and roundabout way to pursue that goal. Plus, they almost never collect any actual data on whether these methods work.

My dissertation is in psychology (though it is heavily focused on philosophy as well), so I'm not really sure myself how much philosophy dissertations focus on literature review. Mine is almost entirely critical discussion of studies, and is only concerned with studies going back to 2003, with the bulk focused on 2008 onwards. I'm literally responding to current papers as they come out. So it's very recent stuff. I'd be surprised if this weren't often the case for philosophers as well.

For instance, suppose you were writing in metaethics. You could easily write a dissertation on contemporary issues, such as evolutionary debunking arguments, companions in guilt arguments, phenomenal conservatism, moral progress, or any number of topics, and the bulk of your discussion could focus on papers written in the past 5 years. So, it's simply not the case that one's approach to philosophy is that contingent on the past, or an extreme focus on literature reviews.

Comment by Lance Bush (lance-bush) on How can one train philosophical skill? · 2021-09-30T18:18:06.438Z · LW · GW

But his book was nearly fifty years ago. Is that still the state of things?

 

I was in a terminal MA program, and that was very much the case. Entire courses were taught on the works of specific people, e.g. "David Lewis," and much of the focus was on exegesis of written works, ranging from the Greeks to the mid 20th century. There were certainly exceptions to this, but philosophology was very much alive and well where I was. I don't know how it is for PhD programs, and I'd bet it varies considerably. 

There's a recent trend towards formal methods, and you've had some movements like experimental philosophy that have also deviated from these trends. I myself went into a psychology PhD program since I thought there'd be more tolerance for my empirical approach to philosophy there (I was correct), and because philosophology isn't my thing. I've noticed that almost all of the papers and work I deal with in philosophy was published in the last 30 years or so, with an emphasis on the past 15 years or so. But I'm an advocate of "exophilosophy": doing philosophy outside the formal academic setting of philosophy departments, so I have limited insight into the state of philosophy PhD programs proper.

Pirsig's remarks seem a bit pessimistic. I've seen plenty of dissertations or articles generated from philosophical work that are very argument-centric. I don't know what an exhaustive search of the literature would reveal, but people can and do succeed at focusing on doing philosophy and presenting arguments; there isn't some universal demand that everyone focus mostly on history. I'd like to hear from some people with more direct experience in these programs, though.
 

Comment by Lance Bush (lance-bush) on How can one train philosophical skill? · 2021-09-30T16:25:09.378Z · LW · GW

I like the idea of stand-up philosophy. I worry about the implied goal, though: if it's to blow people's minds, that might create incentives to signal and to pursue goals such as being exciting or original rather than accurate, precise, or convincing.

Comment by Lance Bush (lance-bush) on How can one train philosophical skill? · 2021-09-30T16:24:11.709Z · LW · GW

I rarely received much in the way of substantive feedback when I was in a philosophy graduate program. This isn't a critique of professors; they have so many things to do that going through a paper line-by-line to offer feedback is incredibly time consuming. So it's hard to achieve much of it, in practice, in graduate programs.

There are also norms of politeness that can impede criticism. I've been to far too many talks where the audience goes easier on the speaker than they would if they really wanted to help the person improve. Then you get echo chambers. If a bunch of people are moral realists, or think the only solutions to the hard problem are anti-physicalist, or whatever, the only pushback a person presenting is likely to receive is internal - that is, it comes from people who share background assumptions that should themselves be consistently challenged.

Overall, I think the institutions we have in place don't do a good job of providing enough critical feedback if one is studying and presenting on philosophy.

Comment by Lance Bush (lance-bush) on How can one train philosophical skill? · 2021-09-30T16:21:06.254Z · LW · GW

That's a reasonable starting point and probably somewhere to look. But I've been a philosophy graduate student and I don't think most of what I did made me any better at philosophy.

Among other issues, my experience with philosophy graduate education is that philosophers almost exclusively spoke to one another and read works of philosophy. This risks developing a narrow and insular conception not only of philosophy but, given the ubiquitous reliance on intuitions, on what "commonsense" is like. 

Philosophers will declare something as "obvious" or "intuitive" with little regard for who it is supposed to be obvious or intuitive to; the implicit presumption is that it'd be obvious or intuitive to everyone, or anyone who is thinking correctly, and so on, but little serious consideration is given to the possibility that one's intuitions may be idiosyncratic and not probative of the world so much as how the individual with the intuition is disposed to think about the world. 

In short, a lot of philosophy strikes me as bad psychology, with a sample size of one (yourself) or a handful of idiosyncratic people whose views aren't even independent of one another because they're in the same graduate program and all talk to and influence each other. 

Comment by Lance Bush (lance-bush) on Thoughts on Moral Philosophy · 2021-08-18T17:26:27.117Z · LW · GW

Great, thanks for clarifying. I am a very enthusiastic proponent of moral antirealism so feel free to get in touch if you want to discuss metaethics.

Comment by Lance Bush (lance-bush) on Thoughts on Moral Philosophy · 2021-08-18T06:05:48.377Z · LW · GW

People may say those sorts of things, but it is easy to find poor representation for any position. Relativism/subjectivism as they are put forward by moral philosophers are a very different thing, and are less (or not at all) susceptible to the kinds of concerns you raise.

The persistence of intractable disagreement is only the tip of a bigger iceberg of reasons to doubt moral realism; I share your view that it is not very strong evidence on its own, but there are other reasons, and the overall picture seems to me to overwhelmingly favor the antirealist position. At the very least, the persistence of disagreement can spark a broader discussion about how one would go about determining what the allegedly objective moral facts are, and I don’t think moral realists have anything very convincing to say on the epistemic front.

Then there's the fact that they seem to be denying people the ability to claim anything is right or wrong, except for them personally or from within the perspective of their culture, whilst simultaneously claiming the relativity or subjectivity of morality universally.

Some relativists may do this, but relativism as a metaethical stance does not require or typically entail the claim that people don’t have the ability to claim anything is right or wrong except with respect to that person’s standards or the standards of their culture. 

Insofar as relativism includes a semantic thesis, the thesis is that as a matter of fact this is what people mean when they make moral claims; not that they lack the ability to do otherwise. In other words, the relativist might say “when people make moral claims, they intend to report facts that are implicitly indexicalized to themselves or their culture’s standards.” The semantic aspect of relativism is about the meaning of ordinary moral thought and discourse; it isn’t (necessarily) a strict requirement that nobody could speak or think differently.

Relativists can and do acknowledge the existence of people who don’t speak or think this way; after all, they often find themselves arguing with moral realists, whose reflective moral stance is that there are non-relative moral facts. The relativist might adopt an error theory towards these people.

I’m not entirely sure I understand the last part, about simultaneously claiming the “relativity or subjectivity of morality universally.”

Well, a lot of people who support relativism/subjectivism just want us to be more respectful of other people's perspectives and cultures or believe that we should stay out of things that aren't our business - if they actually saw a woman being stoned to death for adultery, their position would change pretty fast.

It may be that some people’s relativism is really just a clumsy and roundabout way to endorse tolerance towards others, but that’s a problem for these people’s views, it isn’t really a problem with relativism as a metaethical position; relativism as a metaethical position doesn’t entail and isn’t really about tolerance.

Also, when you say that people’s position would change pretty fast, do you mean that they’d endorse some form of realism? People whose apparent metaethical standards change when asked about atrocities may very well be confused, as the question may give the rhetorical impression that if they don’t object to atrocities in the realist sense, that they don’t object to them at all. This simply isn’t the case. Relativists who fold under pressure when presented with atrocities don’t need to: nothing about relativism requires that one be any less opposed, disgusted, and outraged by stoning adulterers.

In any case, what kind of people do you have in mind? Are these laypeople who don’t study metaethics? I study metaethics, and I am an antirealist; my particular stance is different than relativists/subjectivists, but I share with them the denial that there are objective moral facts. My metaethical standards don’t change in response to people pointing to actions I oppose; my opposition to those actions is fully consistent with antirealism.

Well, most people agree that the Future Tuesday preference is objectively wrong or bad or mistaken.

Do they? That’s an empirical question. In any case, even if most people did, I’d just say these people are mistaken. Most people agreeing on something is at best very weak evidence for whatever it is they agree on. I’ve discussed the Future Tuesday indifference scenario several times and have yet to hear a good explanation of how one gets from the scenario to objectivity, or to external reasons, or justifies claiming the agent in the scenario is “irrational,” etc. The typical response I get is simply that it “seems intuitive,” or something like that. Should we take other people’s intuitions to be probative of the truth? If so, why?

FWIW, I don’t even think the type of moral realism Parfit was going for is intelligible. So when people report that they intuit implications from the Future Tuesday thought experiment, I’m not entirely clear on what it is they’re claiming seems to be true to them; that is, I don’t think it even makes sense to say something is objectively right or wrong. Happy to discuss this further!

Finally, regarding what I think may be going on: it seems far more plausible that people reading the scenario are projecting their own notions of what would or wouldn’t be rational onto the agent in the scenario, and mistakenly thinking that there is some stance-independent standard of what is “rational.” In other words, they’re actually just imputing their own standards onto agents without realizing it. Personally, I just don’t think there’s anything irrational about future Tuesday indifference.

If we concede that some non-moral preferences are objectively better than others, then analogously it seems plausible that some moral preferences could be objectively better than others.

Unfortunately, I also do not agree with this. It’s not just that I don’t think it makes any sense to describe some preferences as objectively better than others. It’s that even if there were objective nonmoral normative facts, I don’t think this provides much support for moral realism.

I’m also simply not sure it’s true that moral facts are similar to preferences, and even if they are, I’m not sure that whatever respects in which they’re similar provide much of a reason to take moral realism seriously. After all, unicorns are quite similar to horses, but the existence of horses is hardly a good reason to think the existence of unicorns is plausible.

Consider a hypothetical society that had a completely distinct sui generis category of norms that aren’t moral norms, they are, say, “zephyrian norms.” Zephyrian norms developed over the centuries in this society in response to a wide array of considerations, and are built around regulating and maintaining social order through adherence to various rituals, taboos, and ceremonial practices. For instance, one important zephyrian norm revolves around never wearing the color blue, because it is intrinsically unzephyrian to do so.

I take it you and I would find “zephyrian realism” utterly implausible. There’s just no reason to think it’s objectively bad to wear blue, or to sing the Hymn of the Aegis every 7th moon. Yet if we grew up in a society with zephyrian norms, we may regard them as distinct from and just as important as moral norms.

And we could argue that, if objectivism about preferences is true, then it seems plausible there could be objective zephyrian facts about what we should or shouldn’t do. Of course, zephyrian realism is false; there are no objective zephyrian facts. 

That there might be some other normative facts does very little to increase the plausibility of zephyrian realism. I think the same holds for moral realism. Even if preference realism were true, none of us would be tempted to take zephyrian realism much more seriously. Likewise, it's not clear why preference realism should do much to render moral realism more plausible.

Comment by Lance Bush (lance-bush) on Thoughts on Moral Philosophy · 2021-08-17T20:27:07.187Z · LW · GW

I don't endorse relativism/subjectivism; however, relativist positions do not strike me as especially troublesome or implausible, nor do they seem to me to entail biting any bullets.

You state that you dislike arguments for relativism/subjectivism because they involve handwaving and biting bullets. Could you say a bit more about this? What kind of handwaving do you have in mind? And what bullets do you think these arguments lead one to bite?

I’m also interested in why you consider Parfit’s Future Tuesday thought experiment to be the best attempt (at getting around the is-ought divide I take it?), and more generally what (if anything) you think the thought experiment demonstrates or provides evidence for. I grant that “best” doesn’t necessarily mean good or convincing and I recognize that you don’t find these arguments completely persuasive, but I don’t find Parfit’s considerations even slightly persuasive.

I am also intrigued by the suggestion that “people generally believe that we can have knowledge about mathematics.” Sure, but we can also have knowledge about the rules of soccer or chess, and I don’t think that the rules of these games are stance-independently true. They are constructed. Just the same, the rules of mathematics could likewise be constructed. The fact that we can have knowledge of something does not entail realism about that thing in the sense in which Parfit and other non-naturalist moral realists claim there are objective moral facts (i.e., they don’t think these facts are true in the way the rules of soccer are). In any case, claims about what people generally believe about the metaphysics of mathematics seem like an empirical question; is there convincing data that people are generally mathematical realists?

Either way, I don’t see much reason to infer that there are objective moral facts even if there are objective mathematical facts; while I grant that identifying examples of bodies of non-empirical knowledge (like math) raises the plausibility of other bodies of non-empirical knowledge (like morality), this provides at best only marginal evidence of the possibility of the latter; it’s hardly a good reason to think moral realism is tenable.