It's not unusual to count "thwarted aims" as a positive bad of death (as I've argued myself in my paper 'Value Receptacles'), which at least counts against replacing people with only slightly happier people (though it still leaves open that it may be worthwhile to replace people with much happier people, if the extra happiness is sufficient to outweigh the harm of the first person's thwarted ends).
In the case you describe, the "HSC content" is just that Jesus is magic. So there's no argument being offered at all. Now, if they offer an actual argument, from some other p to the conclusion that Jesus is magic, then we can assess this argument like any other. How the arguer came to believe the original premise p is not particularly relevant. What you call the "defeater critique", I call the genetic fallacy.
It's true that an interlocutor is never going to be particularly moved by an argument that starts from premises he doesn't accept. Such is life.
The more interesting question is whether the arguer herself should be led to abandon her intuited judgments. But unless you offer some positive evidence for an alternative rational credence to place in p, it's not clear that a "debunking" explanation of her current level of credence should, by itself, make any difference.
Think of intuited judgments as priors. Someone might say, "There's no special reason to think that your priors are well-calibrated." And that may be true, but it doesn't change what our priors are. We can't start from anywhere but where we start.
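To put the point in rough Bayesian terms (a simplifying sketch, treating intuited credences as probabilistic priors, with $D$ standing for the debunking fact that my credence in $p$ was produced by such-and-such a process):

$$P(p \mid D) \;=\; \frac{P(D \mid p)\,P(p)}{P(D)} \;=\; P(p) \quad \text{whenever } P(D \mid p) = P(D).$$

Unless the debunker gives me some reason to think that $D$ is probabilistically correlated with the truth or falsity of $p$ itself, conditioning on the debunking fact leaves my credence exactly where it started.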
Yes, that's the idea. I mean, (2) is plausibly true if the "because" is meant in a purely causal, rather than rationalizing, sense. But we don't take the fact that we stand in a certain psychological relation to this content (i.e., intuiting it) to play any essential justifying role.
Thanks for following up on this issue! I'm looking forward to hearing the rest of your thoughts.
I'm not sure what you have in mind here. We need to distinguish (i) the referent of a concept from (ii) its reference-fixing "sense" or functional role. The way I understood your view, the reference-fixing story for moral terms involves our (idealized) desires. But the referent is "rigid" in the sense that it's picking out the content of our desires: the thing that actually fills the functional role, rather than the role-property itself.
Since our desires typically aren't themselves about our desires, it will turn out, on this story, that morality is not "about" desires. It's about "love, friendship," and all that jazz. But there's a story to be told about how our moral concepts came to pick out these particular worldly properties. And that's where desires come in (as I understand your view). Our moral concepts pick out these particular properties because they're the contents of our idealized desires. But that's not to say that therefore morality is "really" just about fulfilling any old desires. For that would be to neglect the part that rigid designation, and the distinction between reference and reference-fixing, plays in this story.
Does that capture your view? To further clarify: the point of appealing to "rigid designation" is just to explain how desires could play a reference-fixing role without being any part of the referent of moral talk (or what it is "about"). Isn't that what you're after? Or do you have some other reference-fixing story in mind?
Correct. Eliezer has misunderstood rigid designation here.
Jonathan Ichikawa, 'Who Needs Intuitions'
Elizabeth Harman, 'Is it Reasonable to “Rely on Intuitions” in Ethics?'
Timothy Williamson, 'Evidence in Philosophy', ch. 7 of The Philosophy of Philosophy.
The debate over intuitions is one of the hottest in philosophy today
But it -- at least the "debate over intuitions" that I'm most familiar with -- isn't about whether intuitions are reliable, but rather over whether the critics have accurately specified the role they play in traditional philosophical methodology. That is, the standard response to experimentalist critics (at least, in my corner of philosophy) is not to argue that intuitions are "reliable evidence", but rather to deny that we are using them as evidence at all. On this view, what we appeal to as evidence is not the psychological fact of my having an intuition, but rather the propositional content being judged.
The purpose of thought experiments, on this view, is to enable one to grasp new evidence (namely, the proposition in question) that one hadn't considered before. Of course, this isn't a "neutral" methodology, because only those who intuit the true proposition thereby gain genuine evidence. But the foolishness of such a "neutrality" constraint (and the associated "psychological" view of evidence) is one of the major lessons of contemporary epistemology (see, esp., Williamson).
And this responds to what I said... how?
It's a nice parable and all, but it doesn't seem particularly responsive to my concerns. I agree that we can use any old external items as tokens to model other things, and that there doesn't have to be anything "special" about the items we make use of in this way, except that we intend to so use them. Such "derivative intentionality" is not particularly difficult to explain (nor is the weak form of "natural intentionality" in which smoke "means" fire, tree rings "signify" age, etc.). The big question is whether you can account for the fully-fledged "original intentionality" of (e.g.) our thoughts and intentions.
In particular, I don't see anything in the above excerpt that addresses intuitive doubts about whether zombies would really have meaningful thoughts in the sense familiar to us from introspection.
This is somewhat absurd
More than that, it's obviously incoherent. I assume your point is that the same should be said of zombies? Probably reaching diminishing returns in this discussion, so I'll just note that the general consensus of the experts in conceptual analysis (namely, philosophers) disagrees with you here. Even those who want to deny that zombies are metaphysically possible generally concede that the concept is logically coherent.
Well, you could talk about how she is covered with soft fur, but it's possible to imagine something fuzzy and not covered with fur, or something covered with fur but not fuzzy. Because it's possible to imagine these things, clearly fuzziness must be non-physical.
Erm, this is just poor reasoning. The conclusion that follows from your premises is that the properties of fuzziness and being-covered-in-fur are distinct, but that doesn't yet make fuzziness non-physical, since there are obviously other physical properties besides being-covered-in-fur that it might reduce to. The simple proof: you can't hold ALL the other physical facts fixed and yet change the fuzziness facts. Any world physically identical to ours is a world in which your cat is still fuzzy. (There are no fuzz-zombies.) This is an obvious conceptual truth.
So, in short, the reason why you can't just "pick any concept and declare it a bedrock case" is that competent conceptual analysis would soon expose it as a mistake.
I'm not sure I follow you. Why would you need to analyse "thinking" in order to "get a start on building AI"? Presumably it's enough to systematize the various computational algorithms that lead to the behavioural/functional outputs associated with intelligent thought. Whether it's really thought, or mere computation, that occurs inside the black box is presumably not any concern of computer scientists!
I couldn't help one who lacked the concept. But assuming that you possess the concept, and just need some help in situating it in relation to your other concepts, perhaps the following might help...
Our thoughts (and, derivatively, our assertions) have subject-matters. They are about things. We might make claims about these things, e.g. claiming that certain properties go together (or not). When I write, "Grass is green", I mean that grass is green. I conjure in my mind's eye a mental image of blades of grass, and their colour, in the image, is green. So, I think to myself, the world is like that.
Could a zombie do all this? They would go "through the motions", so to speak, but they wouldn't actually see any mental image of green grass in their mind's eye, so they could not really intend that their words convey that the world is "like that". Insofar as there are no "lights on inside", it would seem that they don't really intend anything; they do not have minds.
If you can understand the above two paragraphs, then it seems that you have a conception of meaning as a distinctively mental relation (e.g. that holds between thoughts and worldly objects or states of affairs), not reducible to any of the purely physical/functional states that are shared by our zombie twins.
You can probably give a functionalist analysis of computation. I doubt we can reductively analyse "thinking" (at least if you taboo away all related mentalistic terms), so this strikes me as a bedrock case (again, like "qualia") where tabooing away the term (and its cognates) simply leaves you unable to talk about the phenomenon in question.
But what are brains thinking, if not thoughts?
Right, according to epiphenomenalists, brains aren't thinking (they may be computing, but syntax is not semantics).
If it doesn't appear in the causal diagram, how could we tell that we're not living in a totally meaningless universe?
Our thoughts are (like qualia) what we are most directly acquainted with. If we didn't have them, there would be no "we" to "tell" anything. We only need causal connections to put us in contact with the world beyond our minds.
Meaning doesn't seem to be a thing in the way that atoms and qualia are, so I'm doubtful that the causal criterion properly applies to it (similarly for normative properties).
(Note that it would seem rather self-defeating to claim that 'meaning' is meaningless.)
In my experience, most philosophers are actually pretty motivated to avoid the stigma of "epiphenomenalism", and try instead to lay claim to some more obscure-but-naturalist-friendly label for their view (like "non-reductive physicalism", "anomalous monism", etc.).
FWIW, my old post 'Zombie Rationality' explores what I think the epiphenomenalist should say about the worry that "the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips [talk about consciousness]".
One point to flag is that from an epiphenomenalist's perspective, mere brains never really mean anything, any more than squiggles of ink do; any meaning we attribute to them is purely derivative from the meaning of appropriately-related thoughts (which, on this view, essentially involve qualia).
Another thing to flag is that epiphenomenalism needn't imply that our thoughts are causally irrelevant, but merely that their experiential component is. It'd be a mistake to identify oneself with just one's qualia (a mistake Eliezer seems to attribute to the epiphenomenalist). It's true that our qualia don't write philosophy papers about consciousness. But we, embodied conscious persons, do write such papers. Of course, the causal explanation of the squiggles depends only on our physical parts. But the fact that the squiggles are about consciousness (or indeed anything at all) depends crucially upon the epiphenomenal aspects of our minds, in addition.
Nope. Epiphenomenalism is motivated by the thought that you could (conceivably, in a world with different laws from ours) have the same bundles of neurons without any consciousness. You couldn't conceivably have the same bundles of trees not be a forest.
Did this ever happen? (If so, updating the OP with links would be very helpful.)
Thanks, that's helpful. Two (related) possible replies for the afterlife believer:
(1) The Y-component is replaceable: brains play the Y role while we're alive, but we get some kind of replacement device in the afterlife (which qualifies as "us", rather than a "replica of us", due to persisting soul identity).
(2) The brain is only needed for physical expressions of mentality ("talking", etc.), and we revert to purely non-physical mental functioning in the afterlife.
These are silly views, of course, but I'm not yet convinced that the existence of brain damage makes them any more so than they were to begin with. (They seem pretty natural developments of the substance dualist view, rather than big bullets they have to bite.)
Did you miss the "N.B." at the end of my post?
I agree that the soul hypothesis is not generally worth taking seriously. What I'm denying is that the existence of brain damage is good evidence for this.
That's surely going to depend on the details of the non-naturalist view. Epiphenomenalism, for example, makes all the same empirical predictions as physicalism. (Though it might be harder to combine with a "soul" view -- it goes more naturally with property dualism than substance dualism.)
But even Cartesian Interactionists, who see the brain as an "intermediary" between soul and body, should presumably expect brain damage to cause the body to be less responsive to the soul (just as in the radio analogy).
Or are you thinking of "non-naturalism" more broadly yet, to include views on which the brain has nothing whatsoever to do with the mind or its physical expression? I guess if one had not yet observed the world at all, this evidence would slightly lower one's credence in non-naturalism by ruling out this most extreme hypothesis. But I take it that the more interesting question is whether this is any kind of evidence against particular non-naturalist views that people actually hold, like Cartesian Interactionism or Epiphenomenalism. (And if you think it is, I hope you'll say a bit more to me to explain why...)
The tooth fairy example gets a variety of responses
Seriously? I've never heard anyone insist that the tooth fairy really exists (in the form of their mother). It would seem most contrary to common usage (in my community, at least) to use 'Tooth Fairy' to denote "whoever replaced the tooth under my pillow with a coin". The magical element is (in my experience) treated as essential to the term and not a mere "connotation".
I've heard of the saying you mention, but I think you misunderstand people when you interpret it literally. My response was not intended as some "peculiar" declaration of mind-independent meaning facts, but rather as a straightforward interpretation of what people who utter such claims have in mind when they do so. (Ask them, "Do you mean that the tooth fairy exists?" and I expect the response, "No, silly, I just mean that my mother is responsible for the coin under my pillow.")
So, to clarify: I don't think that there are free-floating "meaning" facts out there independently of our linguistic dispositions. I just dispute whether your definitions adequately capture the things that most people really care about (i.e. treat as essential) when using the terms in question.
It's no excuse to say that metaethical reductionism "gets reality right" when the whole dispute is instead over whether they have accommodated (or rather eliminated) some concept of which we have a pre-theoretic grasp. Compare the theological reductionist thesis that "God is love". Love exists, therefore God exists, voila! If someone pointed out that this view is needlessly misleading since love is not what most people mean to be talking about when they speak of 'God' (and it would be more honest to just admit one's atheism), it would be no response to give a lecture about constellations and tinkerbell.
No, you learned that the tooth fairy doesn't exist, and that your mother was instead responsible for the observable phenomena that you had previously attributed to the tooth fairy.
(It's a good analogy though. I do think that claiming that morality exists "as a computation" is a lot like claiming that the tooth fairy really exists "as one's mother".)
I'm not arguing for moral realism here. I'm arguing against metaethical reductionism, which leaves open either realism OR error theory.
For all I've said, people may well be mistaken when they attribute normative properties to things. That's fine. I'm just trying to clarify what it is that people are claiming when they make moral claims. This is conceptual analysis, not metaphysics. I'm pointing out that what you claim to be the meaning of 'morality' isn't what people mean to be talking about when they engage in moral discourse. I'm not presupposing that ordinary people have any great insight into the nature of reality, but they surely do have some idea of what their own words mean. Your contrary linguistic hypothesis seems completely unfounded.
Purported debates about the true meaning of "ought" reveal that everyone has their own balancing equation, and the average person thinks all others are morally obliged by objective morality to follow his or her equation.
You're confusing metaethics and first-order ethics. Ordinary moral debates aren't about the meaning of "ought". They're about the first-order question of which actions have the property of being what we ought to do. People disagree about which actions have this property. They posit different systematic theories (or 'balancing equations' as you put it) as a hypothesis about which actions have the property. They aren't stipulatively defining the meaning of 'ought', or else their claim that "You ought to follow the prescriptions of balancing equation Y" would be tautological, rather than a substantive claim as it is obviously meant to be.
That asserting there are moral facts is incompatible with the fact that people disagree about what they are?
No, I think there are moral facts and that people disagree about what they are. But such substantive disagreement is incompatible with Eliezer's reductive view on which the very meaning of 'morality' differs from person to person. It treats 'morality' like an indexical (e.g. "I", "here", "now"), which obviously doesn't allow for real disagreement.
Compare: "I am tall." "No, I am not tall!" Such an exchange would be absurd -- the people are clearly just talking past each other, since there is no common referent for 'I'. But moral language doesn't plausibly function like this. It's perfectly sensible for one person to say, "I ought to have an abortion", and another to disagree: "No, you ought not to have an abortion". (Even if both are logically omniscient.) They aren't talking past each other. Rather, they're disagreeing about the morality of abortion.
What would you say to someone who does not share your intuition that such "objective" morality likely exists?
I'd say: be an error theorist! If you don't think objective morality exists, then you don't think that morality exists. That's a perfectly respectable position. You can still agree with me about what it would take for morality to really exist. You just don't think that our world actually has what it takes.
One related argument is the Open Question Argument: for any natural property F that an action might have, be it "promotes my terminal values", or "is the output of an Eliezerian computation that models my coherent extrapolated volition", or whatever the details might be, it's always coherent to ask: "I agree that this action is F, but is it good?"
But the intuitions that any metaethics worthy of the name must allow for fundamental disagreement and fallibility are perhaps more basic than this. I'd say they're just the criteria that we (at least, many of us) have in mind when insisting that any morality worthy of the name must be "objective", in a certain sense. These two criteria are proposed as capturing that sense of objectivity that we have in mind. (Again, don't you find something bizarrely subjectivist about the idea that we're fundamentally morally infallible -- that we can't even question whether our fundamental values / CEV are really on the right track?)
The part about computation doesn't change the fundamental structure of the theory. It's true that it creates more room for superficial disagreement and fallibility (of similar status to disagreements and fallibility regarding the effective means to some shared terminal values), but I see this as an improvement in degree and not in kind. It still doesn't allow for fundamental disagreement and fallibility, e.g. amongst logically omniscient agents.
(I take it to be a metaethical datum that even people with different terminal values, or different Eliezerian "computations", can share the concept of a normative reason, and sincerely disagree about which (if either) of their values/computations is correctly tracking the normative reasons. Similarly, we can coherently doubt whether even our coherently-extrapolated volitions would be on the right track or not.)
malice implies poor motivations. Rather, the egalitarian instinct appears to be natural to most people.
Why the "rather"? How 'natural' an instinct is implies nothing about its moral quality.
It's not entirely clear what you're asking. Two possibilities, corresponding to my above distinction, are:
(1) What (perhaps more general) normatively significant feature is possessed by [saving lives for $500 each] that isn't possessed by [saving mosquitoes for $2000 each]? This would just be to ask for one's fully general normative theory: a utilitarian might point to the greater happiness that would result from the former option. Eventually we'll reach bedrock ("It's just a brute fact that happiness is good!"), at which point the only remaining question is...
(2) In what does the normative significance of [happiness] consist? That is, what is the nature of this justificatory status? What are we attributing to happiness when we claim that it is normatively justifying? This is where the non-naturalist insists that attributing normativity to a feature is not merely to attribute some natural quality to it (e.g. of "being the salient goal under discussion" -- that's not such a philosophically interesting property for something to have. E.g., I could know that a feature has this property without this having any rational significance to me at all).
(Note that it's a yet further question whether our attributions of normativity are actually correct, i.e. whether worldly things have the normative properties that we attribute to them.)
I gather it's this second question you had in mind, but again it's crucial to carefully distinguish them since non-naturalist answers to the first question are obviously crazy.
People claim all sorts of justifications for 'ought' statements (aka normative statements).
You still seem to be conflating justification-giving properties with the property of being justified. Non-naturalists emphatically do not appeal to non-natural properties to justify our ought-claims. When explaining why you ought to give to charity, I'll point to various natural features -- that you can save a life for $500 by donating to VillageReach, etc. It's merely the fact that these natural features are justifying, or normatively important, which is non-natural.
Thanks, this is helpful. I'm interested in your use of the phrase "source of normativity" in:
The only source of normativity I think exists is the hypothetical imperative
This makes it sound like there's a new thing, normativity, that arises from some other thing (e.g. desires, or means/ends relationships). That's a very realist way of talking.
I take it that what you really want to say is something more like: "The only kind of 'normativity'-talk that's naturalistically reducible and hence possibly true is hypothetical imperatives -- when these are understood to mean nothing more than that a certain means-end relation holds." Is that right?
I'd then understand you as an error theorist, since "being a means-end relationship", like "being red", is not even in the same ballpark as what I mean by "being normative". (It might sometimes have normative importance, but as we learn from Parfit, that's a very different thing.)
Thanks for this reply. I share your sense that the word 'moral' is unhelpfully ambiguous, which is why I prefer to focus on the more general concept of the normative. I'm certainly not going to stipulate that motivational internalism is true of the normative, though it does seem plausible that there's something irrational about someone who acknowledges that they really ought (all things considered) to phi and yet fails to do so. (I don't doubt that it's possible for someone to form the judgment without any corresponding motivation though, as it's always possible for people to be irrational!)
I trust that we all have a decent pre-theoretic grasp of normativity (or "ought-ness"). The question then is whether this phenomenon that we have in mind (i) is reducible to some physical property, and (ii) actually exists.
Error theory (answering 'no' and 'no' to the two questions above) seems the most natural position for the physicalist. And it sounds like you may be happy to agree that you're really an error theorist about normativity (as I mean it). But then I'm puzzled by what you take yourself to be doing in this series. Why even use moral/normative vocabulary at all, rather than just talking about the underlying natural properties that you really have in mind?
P.S. What work is the antecedent doing in your conditional?
If you want to torture children, you should_ToTortureChildren volunteer as a babysitter.
Why do you even need the modus ponens? Assuming that "should_ToTortureChildren" just means "what follows is an effective means to torturing children", then isn't the consequent just plain true regardless of what you want? (Perhaps only someone with the relevant desire will be interested in this means-ends fact, but that's true of many unconditional facts.)
That doesn't really answer my question. Let me try again. There are two things you might mean by "mind dependent".
(1) You might just mean "makes some reference to the mind". So, for example, the necessary truth that "Any experience of red is an experience of colour" would also count as "mind-dependent" in this sense. (This seems a very misleading usage though.)
(2) More naturally, "mind dependent" might be taken to mean that the truth of the claim depends upon certain states of mind actually existing. But "pain is bad for people" (like my example above) does not seem to be mind-dependent in this sense.
Which did you have in mind?
As I argue elsewhere:
"Hypothetical imperatives thus reveal patterns of normative inheritance. But their highlighted 'means' can't inherit normative status unless the 'end' in question had prior normative worth. A view on which there are only hypothetical imperatives is effectively a form of normative nihilism -- no more productive than an irrigation system without any water to flow through it."
(Earlier in the post, I explain why hypothetical imperatives aren't reducible to mere empirical statements of a means-ends relationship.)
I tentatively favour non-naturalist realism over non-naturalist error theory, but my purpose in my previous comment was just to flag the latter option as one that physicalists should take (very) seriously.
I'm inclined not to write about moral non-naturalism because I'm writing this stuff for Less Wrong, where most people are physicalists
Physicalists could (like Mackie) accept the non-naturalist's account of what it would take for something to be genuinely normative, and then simply deny that there are any such properties in reality. I'm much more sympathetic to this hard-headed "error theory" than to the more weaselly forms of naturalism.
I was thinking of "fundamental" concepts as those that are most basic, and not reducible to (or built up out of) other, more basic, concepts. I do think that normative concepts are conceptually isolated, i.e. not reducible to non-normative concepts, and that's really the more relevant feature so far as the OQA is concerned. But by 'fundamental normative concept' I meant a normative concept that is not reducible to any other concepts at all. They are the most basic, or bedrock, of our normative concepts.
Just to clarify: By 'pain' I mean the hurtful aspect of the sensation, not the base sensation that could remain in the absence of its hurting.
In your first paragraph you describe people who take pain to be instrumentally useful in some circumstances, to bring about some other end (e.g. healing) which is itself good. I take no stand on that empirical issue. I'm talking about the crazy normative view that pain is itself (i.e. non-instrumentally) good.
Yes, I was imagining someone who thought that unmitigated pain and suffering was good for everyone, themselves included. Such a person is nuts, but hardly inconceivable.
It's not analytic that pain is bad. Imagine some crazy soul who thinks that pain is intrinsically good for you. This person is deeply confused, but their error is not linguistic (as if they asserted "bachelors are female"). They could be perfectly competent speakers of the English language, and even logically omniscient. The problem is that such a person is morally incompetent. They have bizarrely mistaken ideas about what things are good (desirable) for people, and this is a substantive (synthetic), not merely analytic, matter.
Perhaps the thought is that contingent (rather than necessary) facts about wellbeing are mind-dependent. That's still not totally obvious to me, but it does at least seem less clearly false than the original (unrestricted) claim.
If we taboo and reduce, then the question of "...but is it good?" is out of place. The reply is: "Yes it is, because I just told you that's what I mean to communicate when I use the word-tool 'good' for this discussion. I'm not here to debate definitions; I'm here to get something done."
I just wanted to flag that a non-reductionist moral realist (like myself) is also "not here to debate definitions". See my post on The Importance of Implications. This is compatible with thinking well of the Open Question Argument, if we think we have an adequate grasp of some fundamental normative concept (be it 'good', 'reason', or 'ought' -- I lean towards 'reason', myself, such that to speak of a person's welfare is just to talk about what a sympathetic party has reason to desire for the person's sake).
Note that if we're right to consider some normative concepts to be conceptually primitive (not analytically reducible to non-normative concepts) then your practice of "tabooing" all normative vocabulary actually has the effect of depriving us of the conceptual tools necessary to even talk about the normative sphere. Consequent talk of people's (incl. God's) desires or dispositions is simply changing the subject, on this way of looking at things.
Out of interest: Will you be arguing anywhere in this sequence against non-reductionist moral realism? Or are you simply assuming its falsity from the start, and exploring the implications from there? (Even the latter, more modest project is of course worth pursuing, but I personally would be more interested in the former.) Either way, it'd be good to be clear about this. (You could then skip the silly rhetoric about how what is not "is", must be "is not".)
Tangentially:
facts about the well-being of conscious creatures are mind-dependent facts
How so? (Note that a proposition may be in some sense about minds without its truth value being mind-dependent. E.g. "Any experience of red is an experience of colour" is true regardless of what minds exist. I would think the same is true of, e.g., "All else equal, pain is bad for the experiencer.")
It's confusing that you use the word 'meta-ethics' when talking about plain first-order ethics.
Non-cognitivists, in contrast, think that moral discourse is not truth-apt.
Technically, that's not quite right (except for the early emotivists, etc.). Contemporary expressivists and quasi-realists insist that they can capture the truth-aptness of moral discourse (given the minimalist's understanding that to assert 'P is true' is equivalent to asserting just 'P'). So they will generally explain what's distinctive about their metaethics in some other way, e.g. by appeal to the idea that it's our moral attitudes rather than their contents that have a certain central explanatory role...
Depending on what you mean by 'direct access', I suspect that you've probably misunderstood. But judging by the relatively low karma levels of my recent comments, going into further detail would not be of sufficient value to the LW community to be worth the time.
How do you know that "people think zombies are conceivable"? Perhaps you will respond that we can know our own beliefs through introspection, and the inferential chain must stop somewhere. My view is that the relevant chain is merely like so:
zombies are conceivable => physicalism is false
I claim that we may non-inferentially know some non-psychological facts, when our beliefs in said facts meet the conditions for knowledge (exactly what these are is of course controversial, and not something we can settle in this comment thread).