"You cannot hide behind a comforting shield of correct-by-definition. Both extensional definitions and intensional definitions can be wrong, can fail to carve reality at the joints. "Categorizing is a guessing endeavor, in which you can make mistakes; so it's wise to be able to admit, from a theoretical standpoint, that your definition-guesses can be "mistaken"."
I agree heartily with most of this post, but it seems to go off the rails a bit at the end in the section I quote above. Eliezer says intensional definitions (that is, categorizations based on the arbitrary highlighting of certain dimensions as salient) can be "wrong" (i.e. untrue) because they fail to carve reality at the joints. But reality, in its full buzzing and blooming confusion, contains an infinite number of 'joints' along which it could be carved. It is not at all clear how we could say that focusing on some of those joints is "true" while focusing on other joints is "false," since all such choices are based on similarly arbitrary conventions.
Now, it is certainly true that certain modes of categorization (i.e. the selection of certain joints) have allowed us to make empirical generalizations that would not otherwise have been possible, whereas other modes of categorization have not yielded any substantial predictive power. But why does that mean that such a categorization is "wrong" or "untrue"? It would seem better to say that it is "unproductive" in a particular empirical domain.
Let me make my claim more clear (and thus probably easier to attack): categories do not have truth values. They can be neither true nor false. I would challenge Eliezer to give an example of a categorization which is false in and of itself (rather than simply a categorization which someone then used improperly to make a silly empirical inference).
Eliezer says:
"We aren't enchanted by Bayesian methods merely because they're beautiful. The beauty is a side effect. Bayesian theorems are elegant, coherent, optimal, and provably unique because they are laws."
This seems deeply mistaken. Why should we believe that Bayesian formulations are any more inherently "lawlike" than frequentist formulations? Both derive their theorems from within strict formal systems which begin with unchanging first principles. The fundamental difference between Bayesians and Frequentists seems to stem from different ontological assumptions about the nature of a probability distribution (Frequentists imagine a distribution as a set of possible outcomes which would have occurred under different realizations of our world, whereas Bayesians imagine a distribution as a description of a single subjective mental state regarding a single objective world about which we are uncertain).
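To make that ontological contrast concrete, here is a minimal sketch of my own (not anything from Eliezer's post) showing the two readings applied to the same coin-flip data; the numbers are arbitrary and only the Python standard library is assumed.

```python
from math import comb

heads, flips = 7, 10

# Frequentist reading: the bias theta is a fixed unknown constant; the
# "distribution" lives over hypothetical repetitions of the experiment.
# Report the point estimate and the sampling probability of the observed
# count under that estimate.
theta_hat = heads / flips
sampling_prob = comb(flips, heads) * theta_hat**heads * (1 - theta_hat)**(flips - heads)

# Bayesian reading: the distribution describes our uncertainty about theta
# itself. With a uniform Beta(1, 1) prior, the posterior is
# Beta(1 + heads, 1 + tails), whose mean is (heads + 1) / (flips + 2)
# (Laplace's rule of succession).
posterior_mean = (heads + 1) / (flips + 2)

print(f"frequentist point estimate of theta: {theta_hat:.3f}")
print(f"probability of the observed data at that estimate: {sampling_prob:.3f}")
print(f"Bayesian posterior mean of theta (uniform prior): {posterior_mean:.3f}")
```

Both calculations use exactly the same binomial machinery; they differ only in what the resulting distribution is taken to be a distribution of.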
Moreover, doesn't Cox's Theorem imply that at a sufficient level of abstraction, any Bayesian derivation could (at least in principle) be creatively re-framed as a Frequentist derivation, since both must map (at some level) onto the basic rules of probability? It seems to me that, as far as the pure "math" is concerned, both frameworks have equal claim to "lawlike" status.
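For reference, the "basic rules" Cox's Theorem singles out are, as I understand it, just the product and sum rules that any consistent plausibility calculus must obey (up to rescaling):

\[
p(A \wedge B \mid C) = p(A \mid B \wedge C)\, p(B \mid C),
\qquad
p(A \mid C) + p(\neg A \mid C) = 1.
\]

Both camps accept these identities; the disagreement is over what the propositions A, B, and C are allowed to range over, which is why the math itself seems to confer no special lawlike status on either side.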
It therefore seems that what drives Eliezer (and many others, myself included) towards Bayesian formulations is a type of (dare I say it?) bias towards a certain kind of beauty which he has cleverly re-labeled as "law."