Comments

Comment by Ichoran on Pluralistic Moral Reductionism · 2013-11-19T23:17:57.053Z · LW · GW

Although I think this series of posts is interesting and mostly very well reasoned, I find the discussion of objectivity to be strangely crafted. At the risk of arguing about definitions: the hierarchy of objectivity you lay out is only remotely related to what I mean by "objective", and my sense is that it doesn't cohere very well with common usage.

First, there seems to be no better reason to split off objective1 than to split off an objectiveA defined as "software-independent facts". Okay, so I can't say anything objective about my web browser, just because we've said I can't. Why is this helpful? The only reason to split this out is if you are some sort of dualist; otherwise the mind is a computational phenomenon just like DNA replication or anything else.

Second, as Emile already pointed out, nowhere in the hierarchy is uniqueness addressed, yet this is the clearest conventional distinction between subjectivity and objectivity. 5 + 7 = 12 for everyone. "Mint chocolate chip ice cream is better than rocky road ice cream" is not the case for everyone (in the conventional sense, anyway). So these things are all colloquially objective:

  • Rocky road has more chocolate than mint chocolate chip
  • The author of this post enjoys mint chocolate chip more than rocky road
  • My IPv4 address has a higher numeric value than lesswrong.org's
  • The Bible describes God endorsing the consumption of only certain animals

Referring to God doesn't make things non-objective in the standard sense, presuming God exists. Of course, without a way to measure God's preferences you may lose your theoretical objectivity, but any other single source or self-consistent group (e.g. the Pope) can fill in as a source of objective answers to what would otherwise be subjective questions.

The issue isn't whether that is subjective or objective; it's whether that method of gaining objectivity is practical and useful.

And since humans are the only sentient beings, I really fail to see what the practical distinction between 2 and 3 is, once you split off God (or any other singularly identifiable entity).

So I strongly suggest that this section ought to be rethought. Objectivity seems central to this sort of moral reductionism, and so it is worth using definitions that are not too misleading. Either the definitions should change, or there should be much more motivation for why we care about the distinctions between the definitions you've offered.

Comment by Ichoran on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-10T19:17:45.290Z · LW · GW

This is an awful lot of words to expend to notice that

(1) Social interactions need to be modeled in a game-theoretic setting, not as a straightforward expected-payoff calculation.

(2) Distributions of expected values matter. (Hint: p(N) = 1/N is a really bad model, since it doesn't even converge; see the sketch after this list.)

(3) Utility functions are neither linear nor symmetric. (Hint: extinction is not symmetric with doubling the population.)

(4) We don't actually have an agreed-upon utility function anyway; big numbers multiplied by a not-well-agreed-on fuzzy notion are a great way to produce counterintuitive results. The details don't really matter; multiply anything fuzzy by numbers approaching infinity and you get nonintuitive conclusions.
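
To make point (2) concrete, here is a minimal sketch (my own illustration, not anything from the original discussion) of why p(N) = 1/N fails as a prior: the weights 1/N cannot be normalized into a probability distribution, because their partial sums grow without bound.

    # Toy illustration (hypothetical, my own gloss): weights proportional to 1/N
    # cannot form a probability distribution, because the harmonic series
    # diverges -- there is no finite normalizing constant.

    def harmonic_partial_sum(n_terms):
        """Sum of 1/N for N = 1 .. n_terms."""
        return sum(1.0 / n for n in range(1, n_terms + 1))

    for n in (10, 1000, 1000000):
        print(n, harmonic_partial_sum(n))
    # The sums keep growing (roughly like ln(n)) instead of converging, so
    # p(N) = c/N has no valid constant c, and "expected payoffs" computed
    # against such a prior over ever-larger N are not well-defined.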

It's much more valuable to address some of these imperfections in the setup of the problem than to continue wading through the logic with bad assumptions in hand.

Comment by Ichoran on Right for the Wrong Reasons · 2013-02-13T21:03:02.508Z · LW · GW

The appropriate thing to do is to apply (an estimate of) Bayes' rule. You don't need to try to specify every possible outcome in advance; that is hopeless and a waste of effort. Rather, you use the information you did get about what happened to form an improved estimate of what would have happened, and assign credit accordingly.

First, let's look at what we're trying to do. If you're trying to make good predictions, you want p(X | "X") to be as close to 1 as possible, where X is what happens, and "X" is what you say will happen.

If an unbiased observer would initially have predicted, say, p(you win at fencing) = 0.5, then the initial estimate of your accuracy for that statement is 0.5. After you win 14 touches in a row, it would probably be somewhere around 0.999, which is nearly as good as the prediction having come true (unless your accuracy is already in the 99.9%+ range, at which point this doesn't help refine the estimate of your accuracy).
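
As a minimal sketch of that credit assignment (my own toy model, not something from the post or the original comment): treat each touch as an independent 50/50 event in a first-to-15 bout, and score the aborted prediction by the posterior probability that it would have come true given the observed 14-0 lead. The exact number depends on the model you assume, which is why "somewhere around 0.999" is hedged.

    # Toy model (hypothetical assumptions: first-to-15 bout, each touch won
    # independently with probability p): score an aborted prediction by the
    # probability it would have come true, given the score when play stopped.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def win_probability(my_score, opp_score, p=0.5, target=15):
        """P(I reach `target` touches before my opponent), from this score."""
        if my_score >= target:
            return 1.0
        if opp_score >= target:
            return 0.0
        return (p * win_probability(my_score + 1, opp_score, p, target)
                + (1 - p) * win_probability(my_score, opp_score + 1, p, target))

    print(win_probability(0, 0))   # 0.5      -- credit for the prediction a priori
    print(win_probability(14, 0))  # ~0.99997 -- credit after leading 14-0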

So, you don't need to ask more precise questions. You do need to honestly evaluate in aborted trials whether there were dramatic shifts in the apparent probability of the outcome. When doing these things in real life, actually going through Bayesian mathematics is probably not worthwhile, but keeping the gist of it certainly is.