Is requires ought
post by jessicata (jessica.liu.taylor) · 2019-10-28T02:36:43.196Z · LW · GW
This is a link post for https://unstableontology.com/2019/10/28/is-requires-ought/
Contents
- Epistemic virtue
- Functionalist theory of mind
- Social systems
- Nondualist epistemology
- Conclusion
The thesis of this post is: "Each 'is' claim relies implicitly or explicitly on at least one 'ought' claim."
I will walk through a series of arguments that suggest that this claim is true, and then flesh out the picture towards the end.
(note: I discovered after writing this post that my argument is similar to Cuneo's argument for moral realism; I present it anyway in the hope that it is additionally insightful)
Epistemic virtue
There are epistemic virtues, such as:
- Try to have correct beliefs.
- When you're not sure about something, see if there's a cheap way to test it.
- Learn to distinguish between cases where you (or someone else) are rationalizing and cases where you/they are offering actual reasons for belief.
- Notice logical inconsistencies in your beliefs and reflect on them.
- Try to make your high-level beliefs accurately summarize low-level facts.
These are all phrased as commands, which are a type of ought claim. Yet following such commands helps one come to have more accurate beliefs.
Indeed, it is hard to imagine how someone who does not (explicitly or implicitly) follow rules like these could come to have accurate beliefs. There are many ways to end up in lala land, and guidelines are essential for staying on the path.
So, "is" claims that rely on the speaker of the claim having epistemic virtue to be taken seriously, rely on the "ought" claims of epistemic virtue itself.
Functionalist theory of mind
The functionalist theory of mind is "the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part." For example, according to functionalism, for myself to have a world-representing mind, part of my brain must be performing the function of representing the world.
I will not here argue for the functionalist theory of mind, and instead will assume it to be true.
Consider the following "is" claim: "There is a plate on my desk."
I believe this claim to be true. But why? I see a plate on my desk. But what does that mean?
Phenomenologically, I have the sense that there is a round object on my desk, and that this object is a plate. But it seems that we are now going in a loop.
Here's an attempt at a way out. "My visual system functions to present me with accurate information about the objects around me. I believe it to be functioning well. And I believe my phenomenological sense of there being a plate on my desk to be from my visual system. Therefore, there is a plate on my desk."
Well, this certainly relies on a claim of "function". That's not an "ought" claim about me, but it is similar (and perhaps identical) to an "ought" claim about my visual system: that presenting me with information about objects is what my visual system ought to do.
Things get hairy when examining the second sentence. "I believe it to be functioning well." Why do I believe that?
I can consider evidence like "my visual system, along with my other sensory modalities, presents me with a coherent world that has few anomalies." That's a complex claim, and checking it requires things like consulting my memories of how coherent the world presented by my senses has been, which again relies on parts of my mind performing their functions.
I can't doubt my mind except by using my mind. And using my mind requires, at least tentatively, accepting claims like "my visual system is there for presenting me with accurate information about the objects around me."
Indeed, even making sense of a claim such as "there is a plate on my desk" requires me to use some intuition-reliant faculty I have of mapping words to concepts; without trust in such a faculty, the claim is meaningless.
I, therefore, cannot make meaningful "is" claims without at the same time using at least some parts of my mind as tools, applying "ought" claims to them.
Social systems
Social systems, such as legal systems, academic disciplines, and religions, contain "ought" claims. Witnesses ought to be allowed to say what they saw. Judges ought to weigh the evidence presented. People ought not to murder each other. Mathematical proofs ought to be checked by peers before being published.
Many such oughts are essential for the system's epistemology. If the norms of mathematics do not include "check proofs for accuracy" and so on, then there is little reason to believe the mathematical discipline's "is" claims such as "Fermat's last theorem is true."
Indeed, it is hard for claims such as "Fermat's last theorem is true" to even be meaningful without oughts. For, there are oughts involved in interpreting mathematical notation, and in resolving verbal references to theorems. For example: "the true meaning of '+' is integer addition, which can be computed using the following algorithm."
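To make the example concrete, here is a minimal sketch (mine, not part of the original post) of one algorithm that such an ought could fix as the meaning of '+': the recursive, Peano-style definition of addition on non-negative integers.

```python
# Hypothetical illustration: Peano-style addition, one candidate referent
# for "the true meaning of '+'".
def add(m: int, n: int) -> int:
    """m + 0 = m; m + (n + 1) = (m + n) + 1."""
    if n == 0:
        return m
    return add(m, n - 1) + 1

assert add(2, 3) == 5  # sanity check
```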
Without mathematical "ought"s, "Fermat's last theorem is true" isn't just a doubtful claim, it's a meaningless one, which is not even wrong.
Language itself can be considered as a social system. When people misuse language (such as by lying), their statements cannot be taken seriously, and sometimes can't even be interpreted as having meaning.
(A possible interpretation of Baudrillard's simulacrum theory is that level 1 is when there are sufficient "ought"s both to interpret claims and to ensure that they are true for the most part; level 2 is when there are sufficient "ought"s to meaningfully interpret claims but not to ensure that they are true; level 3 is when "ought"s are neither sufficient to interpret claims nor to ensure that they are true, but are sufficient for claims to superficially look like meaningful ones; and level 4 is where "ought"s are not even sufficient to ensure that claims superficially look meaningful.)
Nondualist epistemology
One might say to the arguments so far:
"Well, certainly, my own 'is' claims require some entities, each of which may be a past iteration of myself, a part of my mind, or another person, to be following oughts, in order for my claims be meaningful and/or correct. But, perhaps such oughts do not apply to me, myself, here and now."
However, such a self/other separation is untenable.
Suppose I am a mathematics professor who is considering committing academic fraud, so that false theorems end up in journals. If I corrupt the mathematical process, then I cannot, in the future, rely on the claims of mathematical journals to be true. Additionally, if others are behaving similarly to me, then my own decision to corrupt the process is evidence that others also decide to corrupt the process. Some of these others are in the past; my own decision to corrupt the process is evidence that my own mathematical knowledge is false, as it is evidence that those before me have decided similarly. So, my own mathematical "is" claims rely on myself following mathematical "ought" claims.
(More precisely, both evidential decision theory and functional decision theory have a notion by which present decisions can have past consequences, including past consequences affecting the accuracy of presently-available information)
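To illustrate the evidential reasoning with a stylized calculation (the numbers and the perfect-correlation assumption are mine, not the post's): suppose my decision about fraud is almost perfectly correlated with the decisions of the $k$ past mathematicians whose published results my own knowledge depends on. Then, evidentially,

$$P(\text{past literature sound} \mid \text{I commit fraud}) \approx 0, \qquad P(\text{past literature sound} \mid \text{I refrain}) \approx 1,$$

so choosing fraud sharply lowers the credence I can coherently place in my own mathematical knowledge, even though the fraud causally affects only the future.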
Indeed, the idea of corrupting the mathematical process would be horrific to most good mathematicians, in a quasi-religious way. These mathematicians' own ability to take their work seriously enough to attain rigor depends on such a quasi-religious respect for the mathematical discipline.
Nondualist epistemology cannot rely on a self/other boundary by which decisions made in the present moment have no effects on the information available in the present moment. Lying to similar agents, thus, undermines both the meaningfulness and the truth of one's own beliefs.
Conclusion
I will summarize the argument as follows:
- Each "is" claim may or may not be justified.
- An "is" claim is only justified if the system producing the claim is functioning well at the epistemology of this claim.
- Specifically, an "is" claim that you make is justified only if some system you are part of is functioning well at the epistemology of that claim. (You are the one making the claim, after all, so the system must include the you who makes the claim)
- That system (that you are part of) can only function well at the epistemology of that claim if you have some function in that system and you perform that function satisfactorily. (Functions of wholes depend on functions of parts; even if all you do is listen for a claim and repeat it, that is a function)
- Therefore, an "is" claim that you make is justified only if you have some specific function and you expect to perform that function satisfactorily.
- If a reasonable agent expects itself to perform some function satisfactorily, then according to that agent, that agent ought to perform that function satisfactorily.
- Therefore, if you are a reasonable agent who accepts the argument so far, you believe that your "is" claims are only justified if you have oughts.
The second-to-last point is somewhat subtle. If I use a fork as a tool, then I am applying an "ought" to the fork; I expect it ought to function as an eating utensil. Similar to using another person as a tool (alternatively "employee" or "service worker"), giving them commands and expecting that they ought to follow them. If my own judgments functionally depend on myself performing some function, then I am using myself as a tool (expecting myself to perform that function). To avoid self-inconsistency between myself-the-tool-user and myself-the-tool, I must accept an ought, which is that I ought to satisfactorily perform the tool-function I am expecting myself to perform; if I do not accept that ought, I must drop any judgment whose justification requires me to perform the function generating this ought.
It is possible to make a similar argument about meaningfulness; the key point is that the meaningfulness of a claim depends on the functioning of an interpretive system that this claim is part of. To fail to follow the oughts implied by the meaningfulness of one's statements is not just to be wrong, but to collapse into incoherence.
Certainly, this argument does not imply that all "ought"s can be derived from "is"es. In particular, an agent may have degrees of freedom in how it performs its functions satisfactorily, or in doing things orthogonal to performing its functions. What the argument suggests instead is that each "is" depends on at least one "ought", which itself may depend on an "is", in a giant web of interdependence.
There are multiple possible interdependent webs (multiple possible mind designs, multiple possible social systems), such that a different web could have instead come into existence, and our own web may evolve into any one of a number of future possibilities. Though, we can only reason about hypothetical webs from our own actual one.
Furthermore, it is difficult to conceive of what it would mean for the oughts being considered to be "objective"; indeed, an implication of the argument is that objectivity itself depends on oughts, at least some of which must be pre-objective or simultaneous with objectivity.
Relatedly, at least some of those oughts that are necessary as part of the constitution of "is" must themselves be pre-"is" or simultaneous with "is", and thus must not themselves depend on already-constituted "is"es. A possible candidate for such an ought is: "organize!" For the world to produce a map without already containing one, it must organize itself into a self-representing structure, from a position of not already being self-representing. (Of course, here I am referring to the denotation of "organize!", which is a kind of directed motion, rather than to the text "organize!"; the text cannot itself have effective power outside the context of a text-interpretation system)
One can, of course, sacrifice epistemology, choosing to lie and to confuse one's self, in ways that undermine both the truth and meaningfulness of one's own "is" claims.
But, due to the anthropic principle, we (to be a coherent "we" that can reason) are instead at an intermediate point of a process that does not habitually make such decisions, or one which tends to correct them. A process that made such decisions without correcting them would result in rubble, not reason. (And whether our own process results in rubble or reason in the future is, in part, up to us, as we are part of this process)
And so, when we are a we that can reason, we accept at least those oughts that our own reason depends on, while acknowledging the existence of non-reasoning processes that do not.
Comments
comment by Wei Dai (Wei_Dai) · 2019-10-30T02:40:51.472Z · LW(p) · GW(p)
Additionally, if others are behaving similarly to me, then my own decision to corrupt the process is evidence that others also decide to corrupt the process. Some of these others are in the past; my own decision to corrupt the process is evidence that my own mathematical knowledge is false, as it is evidence that those before me have decided similarly. So, my own mathematical “is” claims rely on myself following mathematical “ought” claims.
(More precisely, both evidential decision theory and functional decision theory have a notion by which present decisions can have past consequences, including past consequences affecting the accuracy of presently-available information)
Not sure how much you're relying on this for your overall point, but I'm skeptical of this kind of application of decision theory.
- I don't know how to formalize the decision theory math for humans. According to EDT "the best action is the one which, conditional on one's having chosen it, gives one the best expectations for the outcome" but what does it actually mean to condition on "one's having chosen it"? UDT assumes that the agent knows their source code and can condition on "source code X outputs action/policy Y" but this is obviously not possible for humans and I don't know what the analogous thing is for humans.
- My guess is that mathematicians typically refrain from conducting mathematical fraud due to a combination of fearing the consequences of being caught, and having mathematical truth (for themselves and others) as something like a terminal value, and not due to this kind of decision theoretic reasoning. If almost no one used this kind of decision theoretic reasoning to make this kind of decision in the past, my current thought process has few other instances to "logically correlate" with (at least as far as the past and the present are concerned).
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T02:48:48.170Z · LW(p) · GW(p)
but what does it actually mean to condition on “one’s having chosen it”
If your world model represents random variables such as "the action I will take in 1 second" then condition on that random variable being some value. That random variable is accessible from the here and now, in the same way the object right in front of a magnet is accessible to the magnet.
It wouldn't be hard to code up a reinforcement learning agent based on EDT (that's essentially what on-policy learning is), which isn't EDT proper due to not having a world model, but which strongly suggests that EDT is coherent.
If almost no one used this kind of decision theoretic reasoning to make this kind of decision in the past, my current thought process has few other instances to “logically correlate” with (at least as far as the past and the present are concerned).
The argument would still apply to a bunch of similar mathematicians who all do decision theoretic reasoning.
Humans operate on some decision theory (HumanDT) even if it isn't formalized yet, which may have properties in common with EDT/UDT (and people finding EDT/UDT intuitive suggests it does). The relevant question is how "mathematical truth" ends up seeming like a terminal value to so many; it's unlikely to be baked in, it's likely to be some Schelling point reached through a combination of priors and cultural learning.
↑ comment by Wei Dai (Wei_Dai) · 2019-10-30T04:47:52.519Z · LW(p) · GW(p)
If your world model represents random variables such as “the action I will take in 1 second” then condition on that random variable being some value.
I don't think that works, especially for the kind of purpose you have in mind. For example suppose I'm in a situation where I'm pretty sure the normative/correct action is A but due to things like cosmic rays I have some chance of choosing B. Then if I condition on "the action I will take in 1 second is B" I will mostly be conditioning on choosing B due to things like cosmic rays, which would be very different from conditioning on "source code X outputs action B".
It wouldn’t be hard to code up a reinforcement learning agent based on EDT (that’s essentially what on-policy learning is), which isn’t EDT proper due to not having a world model, but which strongly suggests that EDT is coherent.
Can you explain what the connection between on-policy learning and EDT is? (And you're not suggesting that an on-policy learning algorithm would directly produce an agent that would refrain from mathematical fraud for the kind of reason you give, or something analogous to that, right?)
The relevant question is how “mathematical truth” ends up seeming like a terminal value to so many; it’s unlikely to be baked in, it’s likely to be some Schelling point reached through a combination of priors and cultural learning.
It seems like truth and beauty are directly baked in and maybe there's some learning involved for picking out or settling on what kinds of truth and beauty to value as a culture. But I'm not seeing how this supports your position.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T05:01:40.831Z · LW(p) · GW(p)
Then if I condition on “the action I will take in 1 second is B” I will mostly be conditioning on choosing B due to things like cosmic rays
This is an issue of EDT having problems, I wrote about this problem and a possible solution here [LW · GW].
Can you explain what the connection between on-policy learning and EDT is?
The q-values in on-policy learning are computed based on expected values estimated from the policy's own empirical history. Very similar to E[utility | I take action A, my policy is π]; these converge in the limit.
And you’re not suggesting that an on-policy learning algorithm would directly produce an agent that would refrain from mathematical fraud for the kind of reason you give, or something analogous to that, right?
I am. Consider tragedy of the commons which is simpler. If there are many on-policy RL agents that are playing tragedy of the commons and are synchronized with each other (so they always take the same action, including exploration actions) then they can notice that they expect less utility when they defect than when they cooperate.
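Here is a minimal sketch of that setup (mine, not from the comment; the payoff numbers, agent count, and epsilon-greedy exploration are illustrative assumptions): N synchronized agents play a tragedy of the commons, and a single shared on-policy value estimate is learned from the agents' own empirical payoffs.

```python
# Hypothetical illustration: synchronized on-policy agents in a commons game.
import random

N_AGENTS = 10
EPSILON = 0.1
ACTIONS = ["cooperate", "defect"]

def payoff(action, n_defectors):
    # Each defector degrades the commons for everyone; defecting also
    # gives the defector a small private bonus.
    base = 10.0 - 1.0 * n_defectors
    return base + (2.0 if action == "defect" else 0.0)

q = {a: 0.0 for a in ACTIONS}   # empirical mean payoff per action
counts = {a: 0 for a in ACTIONS}

random.seed(0)
for step in range(20000):
    # One synchronized choice: every agent takes the same action,
    # including exploration actions.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[a])
    n_defectors = N_AGENTS if action == "defect" else 0
    reward = payoff(action, n_defectors)
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]  # running average

print(q)  # Q(cooperate) ~ 10, Q(defect) ~ 2: the greedy policy cooperates
```

Because exploration is synchronized, the empirical history never contains "I defect while everyone else cooperates", so the learned values favor cooperation.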
But I’m not seeing how this supports your position.
My position is roughly "people are coordinating towards mathematical epistemology and such coordination involves accepting an 'ought' of not committing mathematical fraud". Such coordination is highly functional, so we should expect good decision theories to manage something at least as good as it. At the very least, learning a good decision theory shouldn't result in failing at such coordination problems, relative to the innocent who don't know good decision theory.
↑ comment by Wei Dai (Wei_Dai) · 2019-10-30T06:12:15.603Z · LW(p) · GW(p)
This is an issue of EDT having problems, I wrote about this problem and a possible solution here.
That post seems to be trying to solve a different problem (it still assumes that the agent knows its own source code, AFAICT). Can you please re-read what I wrote and if that post really is addressing the same problem, explain how?
I am. Consider tragedy of the commons which is simpler. If there are many on-policy RL agents that are playing tragedy of the commons and are synchronized with each other (so they always take the same action, including exploration actions) then they can notice that they expect less utility when they defect than when they cooperate.
I see, but the synchronization seems rather contrived. To the extent that humans are RL agents, our learning algorithms are not synchronized (and defecting in tragedy of the commons happens very often as a result), so why is synchronized RL relevant? I don't see how this is supposed to help convince a skeptic.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T06:32:04.316Z · LW(p) · GW(p)
Can you please re-read what I wrote and if that post really is addressing the same problem, explain how?
You're right, the post doesn't address that issue. I agree that it is unclear how to apply EDT as a human. However, humans can still learn from abstract agents.
I see, but the synchronization seems rather contrived.
Okay, here's an attempt at stating the argument more clearly:
You're a bureaucrat in a large company. You're keeping track of how much money the company has. You believe there were previous bureaucrats there before you, who are following your same decision theory. Both you and the previous bureaucrats could have corrupted the records of the company to change how much money the company believes itself to have. If any past bureaucrat has corrupted the records, the records are wrong. You don't know how long the company has been around or where in the chain you are; all you know is that there will be 100 bureaucrats in total.
You (and other bureaucrats) want somewhat to corrupt the records, but want even more to know how much money the company has. Do you corrupt the records?
UDT says 'no' due to a symmetry argument that if you corrupt the records then so do all past bureaucrats. So does COEDT. Both believe that, if you corrupt the records, you don't have knowledge of how much money the company has.
(Model-free RL doesn't have enough of a world model to get these symmetries without artificial synchronization)
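To make the symmetry argument concrete with made-up utilities (an illustration, not part of the original comment): let $u_c > 0$ be the value a bureaucrat gets from corrupting the records and $u_k > u_c$ the value of knowing the company's true balance, and simplify so that any corruption destroys the knowledge value entirely. Under the symmetry assumption that all bureaucrats output the same decision,

$$EU(\text{corrupt}) = u_c + 0, \qquad EU(\text{refrain}) = 0 + u_k,$$

so refraining wins, whereas a causal analysis that holds the other bureaucrats' decisions fixed would treat corrupting as nearly free.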
↑ comment by Wei Dai (Wei_Dai) · 2019-10-31T04:06:34.293Z · LW(p) · GW(p)
The assumption "You don’t know how long the company has been around or where in the chain you are" seems unrealistic/contrived, much like the assumption of "synchronized RL" in your previous argument. Again this seems like it's not going to be very convincing to a skeptic, at least without, for example, a further argument for why the assumption actually makes sense on some deeper level.
Aside from that, here's a counter-argument: among all fields of research, math is probably one of the hardest to corrupt, because publishing theorems requires proofs which can be checked relatively easily, and if frauds/errors (false theorems) creep into the literature anyway, eventually a contradiction will be derived and the field will know something went wrong and backtrack to find the problem. If fear of acausally corrupting the current state of the field is the main reason for refraining from doing fraud, then math ought to have a higher amount of fraud relative to other fields, but actually the opposite is true (AFAICT).
comment by romeostevensit · 2019-10-28T19:35:12.227Z · LW(p) · GW(p)
epistemics [physics, implementation level]->ontological commitments->ontology [information theory, algorithmic level]->teleological commitments->teleology [?, computational level]
You can also think about the other direction in the abstraction stack. What sort of homeostasis you're trying to maintain and extend somewhat constrains an ontology, which somewhat constrains an epistemology. i.e. Your sensors constrain what sort of thoughts you can think. The Church-Turing thesis is weird precisely because it says maybe not?
See also: Quine's ship of Neurath. (or Neurathian bootstrap)
You'd likely enjoy Nozick's Invariances if you haven't seen it yet, he was also thinking in this direction IMO.
↑ comment by gwern · 2019-10-28T23:17:46.883Z · LW(p) · GW(p)
There's also quantum decision theory. The way I'd put this is, "beliefs are for actions".
comment by cousin_it · 2019-10-28T18:56:38.984Z · LW(p) · GW(p)
It seems to me that a theorem prover based on Peano arithmetic makes "is" claims that don't depend on any "ought" claims. And if the prover is part of a larger agent with desires, the claims it makes are still the same, so they don't suddenly depend on "ought" claims.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-28T19:32:46.206Z · LW(p) · GW(p)
PA theorem provers aren't reflective philosophical agents that can answer questions like "what is the origin of my axioms?"
To psychologize them to the point that they "have beliefs" or "make claims" is to interpret them according to a normative/functional theory of mind, such that it is conceivable that the prover could be broken.
A philosophical mathematician using a PA theorem prover uses a wide variety of oughts in ensuring that the theorem prover's claims are trustworthy:
- The prover's axioms ought to correspond to the official PA axioms from a trustworthy source such as a textbook.
- The prover ought to only prove valid theorems; it must not have bugs.
- The programmer ought to reason about and test the code.
- The user ought to learn and understand the mathematical notation and enter well-formed propositions.
Etc etc. Failure to adhere to these oughts undermines justification for the theorem prover's claims.
↑ comment by TAG · 2019-10-29T13:54:44.583Z · LW(p) · GW(p)
Suppose I write a programme that spits out syntactically correct, but otherwise random statements. Is that a "theorem prover"? No, because it is not following any norms of reasoning. A theorem prover will follow norms programmed into it, or it is not a theorem prover. Of course it is not reflexively aware that it is following norms, any more than it is aware it is making claims. And it is not as if the ability of humans to follow norms is not at least partially "programmed" from the outside.
comment by Gordon Seidoh Worley (gworley) · 2019-10-29T20:11:53.528Z · LW(p) · GW(p)
I agree with your arguments if we consider explicit forms of knowledge, such as episteme and doxa. I'm uncertain if they also apply to what we might call "implicit" knowledge like that of techne and gnosis, i.e. knowledge that isn't separable from the experience of it. There I think we can make a distinction between pure "is" that exists prior to conceptualization and "is from ought" arising only after such experiences are reified (via distinction/discrimination/judgement), which makes it so that we can only talk about knowledge of the "is from ought" form even if it is built over "is" knowledge that we can only point at indirectly.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T00:01:43.438Z · LW(p) · GW(p)
Yeah, I'm thinking specifically of knowledge that has an internal justificatory structure, that can ask (at least once) "why is this knowledge likely to be correct". Gnosis is likely pre-reflective enough that it doesn't have such a structure. (An epistemic claim that gnosis constitutes knowledge, on the other hand, will). Whether techne does or does not depends on how rich/structured its implicit world model is (e.g. model-free reinforcement learning can produce techne with no actual beliefs)
comment by Charlie Steiner · 2019-10-28T19:58:22.234Z · LW(p) · GW(p)
If by "ought" claims you mean things we assign truth values that aren't derivable from is-statements, then I agree that humans require such beliefs to function. Maybe we could describe choice of a universal Turing machine as such a belief for a Solomonoff inductor.
If by "ought" statements you mean the universally compelling truths of moral realism, then no, it seems straightforward to produce counterexample thinkers that would not be be compelled. As far as I can tell, the things you're talking about don't even set a specific course of action for the thing believing them, they have no necessary function beyond the epistemic.
I think there's some dangerous reasoning here around the idea of "why." If I believe that a plate is on the table, I don't need to know anything at all about my visual cortex to believe that. The explanation is not a part of the belief, nor is it inseparably attached, nor is it necessary for having the belief, it's a human thing that we call an explanation in light of fulfilling a human desire for a story about what is being explained.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-28T20:07:42.969Z · LW(p) · GW(p)
I don't mean either of those. I mean things that are compelling to reasoning agents. There are also non-reasoning agents that don't find these compelling. These non-reasoning agents don't make justified "is" claims.
The oughts don't overdetermine the course of action but do place constraints on it.
If you believe the sense of a plate being there comes from your visual cortex and also that your visual cortex isn't functioning in presenting you with accurate information, then you should reconsider your beliefs.
↑ comment by Charlie Steiner · 2019-10-29T23:06:58.979Z · LW(p) · GW(p)
Have you set up your definitions in such a way that a system can use language to coordinate with allies even in highly abstract situations, but you would rule it out as "actually making claims" depending on whether you felt it was persuadable by the right arguments? In this case, you are right by definition.
Re:visual cortex, the most important point is that knowledge of my visual cortex, "ought"-type or not, is not necessary. People believed things just fine 200 years ago. Second, I don't like the language that my visual cortex "passes information to me." It is a part of me. There is no little homunculus in my head getting telegraph signals from the cortices, it's just a bunch of brain in there.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-29T23:29:41.050Z · LW(p) · GW(p)
People believed things just fine 200 years ago.
Yes, but as far as I can tell you believe your percepts are generated by your visual cortex, so the argument applies to you.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-29T23:26:13.944Z · LW(p) · GW(p)
Have you set up your definitions in such a way that a system can use language to coordinate with allies even in highly abstract situations, but you would rule it out as “actually making claims” depending on whether you felt it was persuadable by the right arguments?
Any sufficiently smart agent that makes mathematical claims about integers must be persuadable that 1+1=2, otherwise it isn't really making mathematical claims / smart / etc. (It can lie about believing 1+1=2, of course)
That is the sense in which I mean any agent with a sufficiently rich internal justificatory structure of 'is' claims, which makes 'is' claims, accepts at least some 'ought's. (This is the conclusion of the argument in this post, which you haven't directly responded to)
It's possible to use language to coordinate in abstract situations with only rudimentary logical reasoning, so that isn't a sufficient condition.
↑ comment by Charlie Steiner · 2019-10-31T08:00:50.957Z · LW(p) · GW(p)
I guess I'm just still not sure what you expect the oughts to be doing.
Is the sort of behavior you're thinking of like "I ought not to be inconsistent" being one of your "oughts," and leading to various epistemological actions to avoid inconsistency? This seems to me plausible, but it also seems to be almost entirely packed into how we usually define "rational" or "rich internal justificatory structure" or "sufficiently smart."
One could easily construct a competent system that did not represent its own consistency, or represented it but took certain actions that systematically failed to avoid inconsistency. To which you would say "well, that's not sufficiently reflective." What we'd want, for this to be a good move, is for "reflective" (or "smart," "rich structure," "rational," etc) to be a simple thing that predicts the "oughts" neatly. But the "oughts" you describe seem to be running on a model of world-modeling / optimization that is more complicated than strictly necessary for an optimizer, and adding slightly more complication with each ought (though not as much as is required to specify each one separately).
I think one of the reasons people are poking holes or bringing up non-"ought"-compliant agents is that we expect humans to sometimes be non-compliant too. This goes back to my question of whether every agent has some oughts, or whether every (sufficiently smart/rational/etc) agent would be impacted by every ought. If you give me a big list of oughts, I'll give you a big list of ways humans violate them.
I thought at first that your post was about there being some beliefs with unusual properties, labeled "oughts," that everyone has to have some of. But now I think you're claiming that there is some big bundle of oughts that everyone (who is sufficiently X/Y/Z) has all of, and my response is that I'm totally unconvinced that X/Y/Z is in fact a neutral way of ranking systems we want to talk about with the language of epistemology.
↑ comment by TAG · 2019-10-31T11:31:04.943Z · LW(p) · GW(p)
I guess I’m just still not sure what you expect the oughts to be doing.
I was assuming that the point was that "oughts" and "ises" aren't completely disjoint, as a crude understanding of the "is-ought divide" might suggest.
I think one of the reasons people are poking holes or bringing up non-”ought”-compliant agents is that we expect humans to sometimes be non-compliant too. This goes back to my question of whether every agent has some oughts, or whether every (sufficiently smart/rational/etc) agent would be impacted by every ought. If you give me a big list of oughts, I’ll give you a big list of ways humans violate them.
If you assume something like moral realism, so that there is some list of "oughts" that are categorical, so that they don't relate to specific kinds of agents or specific situations, then it is likely that humans are violating most of them.
But moral realism is hard to justify.
On the other hand, given the premises that
- moral norms are just one kind of norm
- norms are always ways of performing a function or achieving an end
then you can come up with a constructivist metaethics that avoids the pitfalls of nihilism, relativism and realism. (I think. No idea if that is what jessicata is saying).
↑ comment by TAG · 2019-10-29T13:20:50.498Z · LW(p) · GW(p)
If by “ought” statements you mean the universally compelling truths of moral realism,
"universally compelling" is setting the bar extremely high. To set it a bit more reasonably: there are moral facts if there is evidence or argument a rational agent would agree with.
↑ comment by Charlie Steiner · 2019-10-29T23:29:07.611Z · LW(p) · GW(p)
Fair enough. But that "compelling" wasn't so much about compelled agreement, and more about compelled action ("intrinsically motivating", as they say). It's impressive if all rational agents agree that murder is bad, but it doesn't have the same oomph if this has no effect on their actions re: murder.
↑ comment by TAG · 2019-10-31T11:24:09.198Z · LW(p) · GW(p)
If by “ought” claims you mean things we assign truth values that aren’t derivable from is-statements, then I agree that humans require such beliefs to function. Maybe we could describe choice of a universal Turing machine as such a belief for a Solomonoff inductor.
If by “ought” statements you mean the universally compelling truths of moral realism, then no, it seems straightforward to produce counterexample thinkers that would not be compelled. As far as I can tell, the things you’re talking about don’t even set a specific course of action for the thing believing them, they have no necessary function beyond the epistemic.
There's a third way of thinking where norms are just rules for achieving a certain kind of result optimally or at least reliably.
I think there’s some dangerous reasoning here around the idea of “why.” If I believe that a plate is on the table, I don’t need to know anything at all about my visual cortex to believe that. The explanation is not a part of the belief, nor is it inseparably attached, nor is it necessary for having the belief, it’s a human thing that we call an explanation in light of fulfilling a human desire for a story about what is being explained.
Nonetheless, your visual cortex must do certain things reliably for you to be able to perceive.
comment by Bunthut · 2019-10-30T11:42:46.044Z · LW(p) · GW(p)
Is this a fair summary of your argument:
We already agree that conditional oughts of the form "If you want X, you should do Y" exist.
There are true claims of the form "If you want accurate beliefs, you should do Y".
Therefore, all possible minds that want accurate beliefs should do Y.
Or maybe:
We already agree that conditional oughts of the form "If you want X, you should do Y" exist.
There are true claims of the form "If you want accurate beliefs, you should do Y".
For some Y, these apply very strongly, such that it's very unlikely to have accurate beliefs if you don't do them.
For some of these Y, it's unlikely you do them if you shouldn't.
- Therefore for these Y, if you have accurate beliefs you should probably do them.
The first one seems to be correct, if maybe a bit of a platitude. If we take Cuneo's analogy to moral realism seriously, it would be
We already agree that conditional oughts of the form "If you want X, you should do Y" exist.
There are true claims of the form "If you want to be good, you should do Y".
Therefore, all possible minds that want to be good should do Y.
But to make that argument, you have to define "good" first. Of course we already knew that a purely physical property could describe the good [? · GW].
As for the second one, it's correct as well, but it's still not clear what you would do with it. It's only probably true, so it's not clear why it's more philosophically interesting than "If you have accurate beliefs you probably have glasses".
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T17:00:03.598Z · LW(p) · GW(p)
It's much more like the second. I believe this to be very clearly true e.g. in the case of checking mathematical proofs.
I am using an interpretation of "should" under which an agent believes "I should X" iff they have a quasi-Fristonian set point of making "I do X" true. Should corresponds with "trying to make a thing happen". It's an internal rather than external motivation.
It is clear that you can't justifiably believe that you have checked a mathematical proof without trying to make at least some things happen / trying to satisfy at least some constraints, e.g. trying to interpret mathematical notation correctly.
comment by johnswentworth · 2019-10-30T00:27:02.806Z · LW(p) · GW(p)
Consider cats.
Indeed, it is hard to imagine how someone who does not (explicitly or implicitly) follow rules like these could come to have accurate beliefs. There are many ways to end up in lala land, and guidelines are essential for staying on the path.
I doubt that cats have much notion of "ought" corresponding to the human notion. Their brains seem to produce useful maps of the world without ever worrying about what they "ought" to do. You could maybe make up some interpretation in which the cat has completely implicit "oughts", but at that point the theory doesn't seem to pay any rent - we could just as easily assign some contrived implicit "oughts" to a rock.
And using my mind requires, at least tentatively, accepting claims like "my visual system is there for presenting me with accurate information about the objects around me."
Cats do not seem to have this problem. They seem able to use their eyes just fine without thinking that their visual system is there for something. Again, we could say that they're implicitly assuming their eyes are there for presenting accurate information, but that interpretation doesn't seem to pay any rent, and could just as easily apply to a rock.
If I use a fork as a tool, then I am applying an "ought" to the fork; I expect it ought to function as an eating utensil.
Again, this sounds like a very contrived "ought" interpretation - so contrived that it could just as easily apply to a rock.
Overall, I'm not seeing what all this "ought" business buys us. Couldn't we just completely ignore the entire subject of this post and generally expect to see the same things in the world?
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T00:48:05.907Z · LW(p) · GW(p)
Their brains seem to produce useful maps of the world without ever worrying about what they “ought” to do.
How do you know they don't have beliefs about what they ought to do (in the sense of: following of norms, principles, etc)? Of course their 'ought's won't be the same as humans', but neither are their 'is'es.
(Anyway, they probably aren't reflective philosophical agents, so the arguments given probably don't apply to them, although they do apply to philosophical humans reasoning about the knowledge of cats)
Again, we could say that they’re implicitly assuming their eyes are there for presenting accurate information, but that interpretation doesn’t seem to pay any rent, and could just as easily apply to a rock.
We can apply mentalistic interpretations to cats or not. According to the best mentalistic interpretation I know of, they would not act on the basis of their vision (e.g. in navigating around obstacles) if they didn't believe their vision to be providing them with information about the world. If we don't apply a mentalistic interpretation, there is nothing to say about their 'is'es or 'ought's, or indeed their world-models.
Applying mentalistic interpretations to rocks is not illuminating.
Again, this sounds like a very contrived “ought” interpretation—so contrived that it could just as easily apply to a rock.
Yes, if I'm treating the rock as a tool; that's the point.
Couldn’t we just completely ignore the entire subject of this post and generally expect to see the same things in the world?
"We should only discuss those things that constrain expectations" is an ought claim.
Anyway, "you can't justifiably believe you have checked a math proof without following oughts" constrains expectations.
↑ comment by johnswentworth · 2019-10-30T00:58:06.800Z · LW(p) · GW(p)
Ok, I think I'm starting to see the point of confusion here. You're treating a "mentalistic interpretation" as a package deal which includes both is's and ought's. But it's completely possible for a map to correspond to a territory separate from any objectives, goals or oughts. It's even possible for a system to reliably produce a map which matches a territory without any oughts - see e.g. embedded naive Bayes [LW · GW] for a very rough example.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T01:00:05.818Z · LW(p) · GW(p)
It’s even possible for a system to reliably produce a map which matches a territory without any oughts—see e.g. embedded naive Bayes for a very rough example.
See what I wrote about PA theorem provers [LW(p) · GW(p)], it's the same idea.
↑ comment by johnswentworth · 2019-10-30T02:15:27.729Z · LW(p) · GW(p)
I don't think that's the same idea. Assigning "beliefs" to PA requires assigning an interpretation to them; the embedded naive Bayes post argues that certain systems cannot be assigned certain interpretations.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T02:30:11.599Z · LW(p) · GW(p)
That's another way of saying that some claims of "X implements Y" are definitely false, no?
"This computer implements PA" is false if it outputs something that is not a theorem of PA, e.g. because of a hardware or software bug.
↑ comment by johnswentworth · 2019-10-30T03:40:18.295Z · LW(p) · GW(p)
No, it's saying that there is no possible interpretation of the system's behavior in which it behaves like PA - not just that a particular interpretation fails to match.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T04:10:02.894Z · LW(p) · GW(p)
Doesn't a correct PA theorem prover behave like a bounded approximation of PA?
↑ comment by johnswentworth · 2019-10-30T04:45:30.057Z · LW(p) · GW(p)
I'm not saying that there don't exist things which behave like PA.
I'm saying that there exist things which cannot be interpreted as behaving like PA, under any interpretation (where "interpretation" = homomorphism). On the other hand, there are also things which do behave like PA. So, there is a rigorous sense in which some systems do embed PA, and others do not.
The same concept yields a general notion of "is", entirely independent of any notion of "ought": we have some system which takes in a "territory", and produces a (supposed) "map" of the territory. For some such systems, there is not any interpretation whatsoever under which the "map" produced will actually match the territory. For other systems, there is an interpretation under which the map matches the territory. So, there is a rigorous sense in which some systems produce accurate maps of territory, and others do not, entirely independent of any "ought" claims.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T04:51:05.680Z · LW(p) · GW(p)
I agree that once you have a fixed abstract algorithm A and abstract algorithm B, it may or may not be the case that there exists a homomorphism from A to B justifying the claim that A implements B. Sorry for misunderstanding.
But the main point in my PA comment still stands: to have justified belief that some theorem prover implements PA, a philosophical mathematician must follow oughts.
(When you're talking about naive Bayes or a theorem prover as if it has "a map" you're applying a teleological interpretation (that that object is supposed to correspond with some territory / be coherent / etc), which is not simply a function of the algorithm itself)
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T01:12:42.699Z · LW(p) · GW(p)
To summarize my argument:
- Sufficiently-reflective reasonable agents that make internally-justified "is" claims also accept at least some Fristonian set-points (what Friston calls "predictions"), such as "my beliefs must be logically coherent". (I don't accept the whole of Friston's theory; I'm trying to gesture at the idea of "acting in order to control some value into satisfying some property")
- If a reasonable agent has a Fristonian set point for some X the agent has control over, then that agent believes "X ought to happen".
I don't know if you disagree with either of these points.
↑ comment by johnswentworth · 2019-10-30T02:08:07.367Z · LW(p) · GW(p)
First, I think the "sufficiently-reflective" part dramatically weakens the general claim that "is requires ought"; reflectivity is a very strong requirement which even humans often don't satisfy (i.e. how often do most humans reflect on their beliefs?)
Second, while I basically agree with the Fristonian set-point argument, I think there's a lot of unjustified conclusions trying to sneak in by calling that an "ought". For instance, if we rewrite:
Indeed, it is hard for claims such as "Fermat's last theorem is true" to even be meaningful without oughts.
as
Indeed, it is hard for claims such as "Fermat's last theorem is true" to even be meaningful without Fristonian set-points.
... then that sounds like a very interesting and quite possibly true claim, but I don't think the post comes anywhere near justifying such a claim. I could imagine a theorem proving such a claim, and that would be a really cool result.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T02:21:58.847Z · LW(p) · GW(p)
First, I think the “sufficiently-reflective” part dramatically weakens the general claim
Incoherent agents can have all manner of beliefs such as "1+1=3" and "fish are necessarily green" and "eels are not eels". It's hard to make any kind of general claim about them.
The reflectivity constraint is essentially "for each 'is' claim you believe, you must believe that the claim was produced by something that systematically produces true claims", i.e. you must have some justification for its truth according to some internal representation.
… then that sounds like a very interesting and quite possibly true claim, but I don’t think the post comes anywhere near justifying such a claim. I could imagine a theorem proving such a claim, and that would be a really cool result.
Interpreting mathematical notation requires set-points. There's a correct interpretation of +, and if you don't adhere to it, you'll interpret the text of the theorem wrong.
In interpreting the notation into a mental representation of the theorem, you need set points like "represent the theorem as a grammatical structure following these rules" and "interpret for-all claims as applying to each individual".
Even after you've already interpreted the theorem, keeping the denotation around in your mind requires a set point of "preserve memories", and set points for faithfully accessing past memories.
↑ comment by johnswentworth · 2019-10-30T03:38:41.990Z · LW(p) · GW(p)
Incoherent agents can have all manner of beliefs such as "1+1=3" and "fish are necessarily green" and "eels are not eels".
I am not talking about incoherent agents, I am talking about agents which are coherent but not reflective. To the extent that we expect coherence to be instrumentally useful and reflection to be difficult, that's exactly the sort of agent we should expect evolution to produce most often.
Most humans seem to have mostly-accurate beliefs, without thinking at all about whether those beliefs were systematically produced by something which produces accurate beliefs.
In interpreting the notation into a mental representation of the theorem, you need set points like "represent the theorem as a grammatical structure following these rules" and "interpret for-all claims as applying to each individual".
It's not at all obvious that representations and interpretations need to be implemented as set-points, or are equivalent to set points, or anything like that. That's the claim which would be interesting to prove.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T04:08:14.768Z · LW(p) · GW(p)
But believing one's own beliefs to come from a source that systematically produces correct beliefs is a coherence condition. If you believe your beliefs come from source X that does not systematically produce correct beliefs, then your beliefs don't cohere.
This can be seen in terms of Bayesianism. Let R[X] stand for "My system reports X is true". There is no distribution P (joint over X,R[X]) such that P(X|R[X])=1 and P(X) = 0.5 and P(R[X] | X) = 1 and P(R[X] | not X) = 1.
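Spelling out the arithmetic behind this (my gloss on the comment's claim):

$$P(R[X]) = P(R[X] \mid X)\,P(X) + P(R[X] \mid \neg X)\,P(\neg X) = 1 \cdot 0.5 + 1 \cdot 0.5 = 1,$$

$$P(X \mid R[X]) = \frac{P(R[X] \mid X)\,P(X)}{P(R[X])} = \frac{1 \cdot 0.5}{1} = 0.5 \neq 1,$$

so a reporting mechanism believed to fire regardless of the truth cannot coherently be treated as conclusive evidence.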
That’s the claim which would be interesting to prove.
Here's my attempt at a proof:
Let A stand for some reflective reasonable agent.
- Axiom 1: A believes X, and A believes that A believes X.
- Axiom 2: A believes that if A believes X, then there exists some epistemic system Y such that: Y contains A as an essential component, Y causes A to believe X, and Y functions well. [argument: A has internal justifications for beliefs being systematically correct. A is essential to the system because A's beliefs are a result of the system; if not for A's work, such beliefs would not be systematically correct]
- Axiom 3: A believes that, for all epistemic systems Y that contain A as an essential component and function well, A functions well as part of Y. [argument: A is essential to Y's functioning]
- Axiom 4: For all epistemic systems Y, if A believes that Y is an epistemic system that contains A as an essential component, and also that A functions well as part of Y, then A believes that A is trying to function well as part of Y. [argument: good functioning doesn't happen accidentally, it's a narrow target to hit. Anyway, accidental functioning wouldn't justify the belief; the argument has to be that the belief is systematically, not accidentally, correct.]
- Axiom 5: A believes that, for all epistemic systems Y, if A is trying to function well as part of Y, then A has a set-point of functioning well as part of Y. [argument: set-point is the same as trying]
- Axiom 6: For all epistemic systems Y, if A believes A has a set-point of functioning well as part of Y, then A has a set-point of functioning well as part of Y. [argument: otherwise A is incoherent; it believes itself to have a set-point it doesn't have]
- Theorem 1: A believes that there exists some epistemic system Y such that: Y contains A as an essential component, Y causes A to believe X, and Y functions well. (Follows from Axiom 1, Axiom 2)
- Theorem 2: A believes that A functions well as part of Y. (Follows from Axiom 3, Theorem 1)
- Theorem 3: A believes that A is trying to function well as part of Y. (Follows from Axiom 4, Theorem 2)
- Theorem 4: A believes A has a set-point of functioning well as part of Y. (Follows from Axiom 5, Theorem 3)
- Theorem 5: A has a set-point of functioning well as part of Y. (Follows from Axiom 6, Theorem 4)
- Theorem 6: A has some set-point. (Follows from Theorem 5)
(Note, consider X = "Fermat's last theorem universally quantifies over all triples of natural numbers"; "Fermat's last theorem" is not meaningful to A if A lacks knowledge of X)
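For what it's worth, here is a minimal sketch in Lean 4 (my formalization, not the comment author's) of the skeleton of this argument. It compresses each "A believes ..." statement into an opaque proposition and drops the quantification over epistemic systems Y and claims X, so it only checks that the implication chain from the axioms to Theorem 6 goes through, not the content of the axioms themselves.

```lean
-- Hypothetical propositional compression of the argument above; each
-- proposition stands for one "A believes ..." / "A has ..." statement.
theorem is_requires_ought_sketch
    (BelX BelBelX BelSystemY BelFunctionsWell BelTrying BelSetPoint HasSetPoint : Prop)
    (ax1 : BelX ∧ BelBelX)                 -- Axiom 1
    (ax2 : BelBelX → BelSystemY)           -- Axiom 2 (with belief closure assumed)
    (ax3 : BelSystemY → BelFunctionsWell)  -- Axiom 3
    (ax4 : BelFunctionsWell → BelTrying)   -- Axiom 4
    (ax5 : BelTrying → BelSetPoint)        -- Axiom 5
    (ax6 : BelSetPoint → HasSetPoint) :    -- Axiom 6
    HasSetPoint :=                         -- Theorems 1-6, chained
  ax6 (ax5 (ax4 (ax3 (ax2 ax1.2))))
```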
↑ comment by johnswentworth · 2019-10-30T06:02:25.908Z · LW(p) · GW(p)
But believing one's own beliefs to come from a source that systematically produces correct beliefs is a coherence condition.
This is only if you have some kind of completeness or logical omniscience kind of condition, requiring us to have beliefs about reflective statements at all. It's entirely possible to only have beliefs over a limited class of statements - most animals don't even have a concept of reflection, yet they have beliefs which match reality. One need not have any beliefs at all about the sources of one's beliefs.
As for the proof, seems like the interesting part would be providing deeper foundations for axioms 4 and 5. Those are the parts which seem like they could fail.
↑ comment by TAG · 2019-10-30T10:56:20.975Z · LW(p) · GW(p)
I doubt that cats have much notion of “ought” corresponding to the human notion.
A system can be interpreted as following norms from a "stance" perspective. For instance, a kettle ought to switch itself off when the water reaches boiling point. Following norms is not the same as having reflexive awareness of norms.
I doubt that cats have much notion of “ought” corresponding to the human notion.
Ditto.
comment by Mateusz Bagiński (mateusz-baginski) · 2024-04-02T09:43:10.264Z · LW(p) · GW(p)
If a reasonable agent expects itself to perform some function satisfactorily, then according to that agent, that agent ought to perform that function satisfactorily.
[this] is somewhat subtle. If I use a fork as a tool, then I am applying an "ought" to the fork; I expect it ought to function as an eating utensil. Similar to using another person as a tool (alternatively "employee" or "service worker"), giving them commands and expecting that they ought to follow them.
Can you taboo ought? I think I could rephrase these as:
- I am trying to use a fork as an eating utensil because I expect that if I do, it will function like I expect eating utensils to function.
- I am giving a person commands because I expect that if I do, they will follow my commands. (Which is what I want.)
More generally, there's probably a difference between oughts like "I ought to do X" and oughts that could be rephrased in terms of conditionals, e.g.
"I believe there's a plate in front of me because my visual system is a reliable producer of visual knowledge about the world."
to
"Conditional on my visual system being a reliable producer of visual knowledge about the world, I believe there's a plate in front of me and because I believe a very high credence in the latter, I have a similarly high credence in the former."
↑ comment by jessicata (jessica.liu.taylor) · 2024-04-02T18:49:26.782Z · LW(p) · GW(p)
It's an expectation that has to do with a function of the thing, an expectation that the thing will function for some purpose. I suppose you could decompose that kind of claim to a more complex claim that doesn't involve "function", but in practice this is difficult.
I guess my main point is that sometimes fulfilling one's functions is necessary for knowledge, e.g. you need to check proofs correctly to have the knowledge that the proofs you have checked are correct, the expectation that you check proofs correctly is connected with the behavior of checking them correctly.
comment by TAG · 2019-10-30T11:58:30.693Z · LW(p) · GW(p)
There's a parallel with realism versus instrumentalism, in that both are downstream of value judgements. If you value metaphysical truth, instrumentalism is wrong-for-you, because it can't deliver it. And if you don't value metaphysical truth, realism is wrong-for-you because it is an unnecessary complication.
comment by TAG · 2019-10-30T11:07:37.099Z · LW(p) · GW(p)
The sections "Social Systems" and "Nondualist Epistemology" seem to be trying to establish that the norms of rationality are ethical norms, and I don't see any need for that. Simple arguments show that there are non-ethical norms, such as the norms relating to playing games, so an epistemological norm can just be another kind of non-ethical norm.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T17:02:07.200Z · LW(p) · GW(p)
I agree but it is philosophically interesting that at least some of those norms required for epistemology are ethical norms, and this serves to justify the 'ought' language in light of criticisms that the 'ought's of the post have nothing to do with ethics.
comment by countingtoten · 2019-10-29T21:50:27.368Z · LW(p) · GW(p)
Why define goals as ethics (knowing that definitions are tools that we can use and replace depending on our goal of the moment)? You seem to be saying that 'ought' has a structure which can also be used to annihilate humanity or bring about unheard-of suffering. That does not seem to me like a useful perspective.
Seriously, just go and watch "Sorry to Bother You."
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-29T23:16:46.102Z · LW(p) · GW(p)
The claim is "any reasonable agent that makes internally-justified 'is' claims also accepts 'ought' claims"
Not "any 'ought' claim that must be accepted by any reasonable agent to make some internally-justified 'is' claim is a true 'ought'"
Or "all true 'ought's are derivable from 'is'es"
Which means I am not saying that the true 'ought' has a structure which can be used to annihilate humanity.
I've seen "Sorry to Bother You" and quite liked it, although I believe it to be overly optimistic about how much science can happen under a regime of pervasive deception.
↑ comment by countingtoten · 2019-10-30T05:33:21.581Z · LW(p) · GW(p)
Do you have a thesis that you argue for in the OP? If so, what is that thesis?
Are you prepared to go down the other leg of the dilemma and say that the "true oughts" do not include any goal which would require you to, eg, try to have correct beliefs? Also: the Manhattan Project.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-30T05:57:57.069Z · LW(p) · GW(p)
It's very clear that you didn't read the post. The thesis is in the first line, and is even labeled for your convenience.
↑ comment by countingtoten · 2019-10-30T06:33:07.172Z · LW(p) · GW(p)
I almost specified, 'what would it be without the confusing term "ought" or your gerrymandered definition thereof,' but since that was my first comment in this thread I thought it went without saying.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-29T13:50:40.107Z · LW(p) · GW(p)
Interesting ideas and arguments, thanks! Does LW have a tagging system? It would be cool to see all the posts tagged "philosophy" and see how much of it is new philosophy vs. reinventing something in the literature. Ideally we'd have a mix of both kinds.
I don't agree with your thesis, btw. I haven't thought that carefully about it yet, but hot take: I feel like we should distinguish between the technical content of a claim and the conversational implications it almost always carries; perhaps is-claims almost always carry normative implications but nevertheless their technical content is strictly non-normative.
Also, if you are right, then I think the conclusion shouldn't be that the distinction is useless but rather that there are two importantly different kinds of ought-claims, because from past experience the is-ought distinction has proved useful and I don't think what you've said here shows that I was making a mistake when I used it in the past.
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-29T16:54:36.313Z · LW(p) · GW(p)
perhaps is-claims almost always carry normative implications but nevertheless their technical content is strictly non-normative.
I agree and don't think I implied otherwise? I said "is requires ought" not "is is ought".
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-29T19:13:11.726Z · LW(p) · GW(p)
Ah, oops, yeah I should have read more closely!