Naming the Highest Virtue of Epistemic Rationality
post by Ronny Fernandez (ronny-fernandez) · 2011-10-24T23:00:37.924Z · LW · GW · Legacy · 28 comments
Edit: Looking back at this a few years later. It is pretty embarrassing, but I'm going to leave it up.
Why don't we start treating the log_2 of the probability you assign to the great conjunction, conditional on every piece of information available to you, as the best measure of your epistemic success? Call log_2(P(the great conjunction | your available information)) your "Bayesian competence". It is a deductive fact that no other proper scoring rule can satisfy Score(P(A|B)) + Score(P(B)) = Score(P(A&B)), and obviously you should get the same score for assigning P(A|B) to A after observing B, and P(B) to B a priori, as you would get for assigning P(A&B) to A&B a priori. The great conjunction is the conjunction of all true statements expressible in your idiolect. Your available information may be treated as the ordered set of your retained stimuli.
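To make the proposal concrete, here is a minimal sketch in Python (not part of the original argument; the probabilities are invented toy numbers) of the log-2 score and the additivity property appealed to above:

```python
import math

def score(p):
    """Log-2 score of a probability assignment: 0 is perfect, more negative is worse."""
    return math.log2(p)

# Toy numbers: P(B), P(A|B), and therefore P(A&B) = P(B) * P(A|B).
p_B = 0.8
p_A_given_B = 0.25
p_A_and_B = p_B * p_A_given_B

# The additivity constraint: Score(P(A|B)) + Score(P(B)) = Score(P(A&B)).
assert math.isclose(score(p_A_given_B) + score(p_B), score(p_A_and_B))

# "Bayesian competence" is then the log-2 probability assigned to the great
# conjunction given your available information; e.g. a toy value of 2**-40:
print(score(2 ** -40))  # -40.0
```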
If this doesn't make sense, or you aren't familiar with these ideas, check out Technical Explanation after checking out Intuitive Explanation.
It is standard LW doctrine that we should not name the highest virtue of rationality, and this doctrine is often defended quite brilliantly:
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
and of course also:
How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
These quotes are from the end of the Twelve Virtues of Rationality.
Should we really be wondering whether there is a virtue higher than Bayesian competence? Is there really a probability worth worrying about that the description of Bayesian competence above is misunderstood? Is the description not simple enough to be mathematical? What mistake might I discover in my understanding of Bayesian competence by comparing it to that which I did not name, after I have already given a proof that Bayesian competence is proper, and that the restrictions that score(P(B)*P(A|B)) = score(P(B)) + score(P(A|B)) and that the score be a proper scoring rule uniquely specify log_b?
I really want answers to these questions. I am still undecided about them, and I change my mind about them far too often.
Of course, your Bayesian competence is ridiculously difficult to compute. But I am not proposing the measure for practical reasons. I am proposing it to demonstrate that degree of rationality is an objective quantity that you could compute given the source code of the universe, even though there are likely no variables in that source which ever take on this value. This may be of little to no value to the most obsessively pragmatic practitioners of rationality, but it would be a very interesting result for philosophers of science and rationality.
Updated to better express the author's view and to take feedback into account. Apologies to any commenter whose comment may have been nullified.
The comment below:
The general reason Eliezer advocates not naming the highest virtue (as I understand it) is that there may be some type of problem for which bayesian updating (and the scoring rule referred to) yields the wrong answer. This idea sounds rather improbable to me, but there is a non-negligible probability that bayes will yield a wrong answer on some question. Not naming the virtue is supposed to be a reminder that if bayes ever gives the wrong answer, we go with the right answer, not bayes.
has changed my mind about the openness of the questions I asked.
28 comments
Comments sorted by top scores.
comment by MinibearRex · 2011-10-25T04:17:08.114Z · LW(p) · GW(p)
The general reason Eliezer advocates not naming the highest virtue (as I understand it) is that there may be some type of problem for which bayesian updating (and the scoring rule referred to) yields the wrong answer. This idea sounds rather improbable to me, but there is a non-negligible probability that bayes will yield a wrong answer on some question. Not naming the virtue is supposed to be a reminder that if bayes ever gives the wrong answer, we go with the right answer, not bayes.
Replies from: pengvado, ronny-fernandez
↑ comment by pengvado · 2011-10-25T12:11:08.809Z · LW(p) · GW(p)
I think we've already found a type of problem where bayesian updating breaks. Namely, all the anthropic problems that UDT solves. (UDT doesn't say that bayes gives the wrong answer in those cases, but it does say that asking for a probability is the wrong question.)
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-25T19:04:23.959Z · LW(p) · GW(p)
This makes sense to me. I retract my claims.
comment by VincentYu · 2011-10-25T01:22:48.035Z · LW(p) · GW(p)
There is a major issue with the proposed scoring - it is underspecified. In particular, in the definition:
for B in your beliefs that's true
How do we determine if B is in an agent's set of beliefs? We cannot only consider the beliefs that are currently running through the agent's mind, because we'd end up with at most a few. We need a definition of what "B is in your beliefs" means. However, it is very difficult to specify all of an agent's beliefs - humans don't walk around carrying a well-defined sack of beliefs with probabilities attached.
Less importantly, the linearity in the sum can be exploited. For example, I can easily get myself to believe the following sequence of statements in Peano arithmetic:
1=1
2=2
3=3
...
This will give me a favorable score with minimal effort. At least in this case, the proposed scoring is orthogonal to measuring epistemic rationality.
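A minimal sketch of the exploit (my toy numbers, using the sum-over-true-beliefs formulation this comment is responding to): the trivial agent collects the maximum per-belief score of 0 without doing any epistemic work, while an agent making substantive calibrated claims necessarily scores below 0.

```python
import math

def sum_log2_score(beliefs):
    """Sum of log-2 scores over (probability, is_true) pairs, counting only true beliefs."""
    return sum(math.log2(p) for p, is_true in beliefs if is_true)

# Agent A: a thousand trivially true identities (1=1, 2=2, ...), each held at probability 1.
trivial_agent = [(1.0, True) for _ in range(1000)]

# Agent B: a few substantive, uncertain claims that happen to be true.
substantive_agent = [(0.7, True), (0.9, True), (0.6, True)]

print(sum_log2_score(trivial_agent))      # 0.0 -- the maximum, with minimal effort
print(sum_log2_score(substantive_agent))  # about -1.4
```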
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-25T02:11:01.515Z · LW(p) · GW(p)
If, when asked "how much do you believe B?", your neural net gives an answer by remembering instead of sciencing, then B is in your beliefs. This seems like it will work, but I just thought of it, so I'm not sure.
comment by JoshuaZ · 2011-10-24T23:31:05.460Z · LW(p) · GW(p)
It is a deductive fact that no other scoring rule could possibly give: Score(P(A|B)) + Score(P(B)) = Score(P(A&B))
What assumptions are you using here? In the simplest form this is obviously false. Simply let the Score function always be zero. Moreover, any Score function that satisfies this identity can be scaled by any number and still satisfy it. So not only does your log work but a log to any other base will work. Also if I believe in the axiom of choice (I think I need choice here to get a transcendence basis. Can someone more foundationally oriented comment if this is correct?), then there is a function f on the positive reals such that f(ab)=f(a)+f(b) and f(x) is not equal to a constant times log x. So one could just as well use that f as your score function.
I think your statement is true if you want your score function to be continuous and normalized so that Score(1/2) = -1.
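A quick numeric sanity check of these counterexamples (random made-up probabilities): the always-zero rule and any rescaled log both satisfy the additivity identity, so that identity alone does not single out log base 2.

```python
import math
import random

def additivity_holds(score, trials=1000):
    """Check Score(P(A|B)) + Score(P(B)) == Score(P(A&B)) on random probabilities."""
    rng = random.Random(0)
    for _ in range(trials):
        p_b = rng.uniform(0.01, 1.0)
        p_a_given_b = rng.uniform(0.01, 1.0)
        lhs = score(p_a_given_b) + score(p_b)
        rhs = score(p_a_given_b * p_b)
        if not math.isclose(lhs, rhs, abs_tol=1e-12):
            return False
    return True

print(additivity_holds(lambda p: 0.0))                # True: the always-zero rule
print(additivity_holds(lambda p: 17 * math.log2(p)))  # True: a rescaled log
print(additivity_holds(lambda p: math.log2(p)))       # True: log base 2 itself
```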
On a completely different note, the repeated references to the Sequences come across as a bit off. You assume a high degree of detailed familiarity with the various essays that is unjustified. For example, the quote from the Twelve Virtues sounded familiar, but I almost certainly would not have been able to place it. Moreover, the way these quotes are used almost feels like you are quoting religious proof texts or writing a high school English essay rather than actually using them in a useful way.
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-24T23:39:43.981Z · LW(p) · GW(p)
For example, the quote from the Twelve Virtues sounded familiar, but I almost certainly would not have been able to place it.
Twelve Virtues is really popular; that's why I wrote that. I'll take it out if it's distracting. (done and done)
Moreover, the way these quotes are used almost feels like you are quoting religious proof texts or writing a high school English essay rather than actually using them in a useful way.
I was actually disagreeing with them; you understand this, right? If not, let me know, because that's important for my readers to get off the bat. Those are the two quotes I remember most clearly saying that we shouldn't name the highest virtue. I put them there as evidence that not naming the highest virtue is standard LW doctrine. So if it came off as quoting doctrine, perhaps I made my point.
On the log stuff: I know other bases work fine too, but normalization is nice. Actually, I think I write log_b somewhere.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-10-24T23:50:37.933Z · LW(p) · GW(p)
Not just other bases. I can construct another function as follows: Fix a basis for R over Q. I can do this if I believe in the axiom of choice. Call the elements of that basis x(i). Consider then the function that takes log x, writes it with respect to the basis, and then zeros the coordinate attached to a fixed basis element x(0). This function will have your desired property and is not a constant times log.
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-24T23:58:42.516Z · LW(p) · GW(p)
Interesting. Is it continuous as well?
I may be wrong, but I think EY says in the Technical Explanation that no other function satisfies that condition and is also proper.
Is this f a proper scoring rule?
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-10-25T00:10:40.342Z · LW(p) · GW(p)
No. This is wildly non-continuous. It also isn't proper. This is why specifying what your hypotheses are for your theorems is important.
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-25T00:12:51.405Z · LW(p) · GW(p)
Good point. But I think I said it had to be proper. I've made that more explicit.
comment by Psychosmurf · 2011-10-24T23:30:04.154Z · LW(p) · GW(p)
Why don't we start treating the sum of log_2 of the probability — conditional on every available piece of information — you assign to every true sentence, as the best measure of your epistemic success?
Wait. Perhaps I'm misunderstanding something here, but how are we going to decide what a true sentence is independently of all of our available information?
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-24T23:52:48.857Z · LW(p) · GW(p)
We won't be able to decide. But reality will.
comment by EvelynM · 2011-10-25T03:48:14.941Z · LW(p) · GW(p)
You misspelled Rationality in the subject.
Replies from: ronny-fernandez, prase
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-25T19:03:21.423Z · LW(p) · GW(p)
Whoopsy, in both cases.
comment by VincentYu · 2011-10-25T00:01:36.124Z · LW(p) · GW(p)
Can't you get the maximum score of 0 by simply setting P(b) = 1 for all beliefs b, regardless of the truth value of b? False beliefs have to be penalized if you want
a score of 0 [to] represent each of your beliefs being completely and perfectly accurate.
Replies from: ronny-fernandez, ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-25T00:10:15.360Z · LW(p) · GW(p)
No wait, of course not, sorry. If P(A) = 1, then P(~A) = 0, and if A is false, then your score goes down to negative infinity (kind of, I think).
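A tiny numeric illustration (toy probabilities of my own): the log-2 score heads toward negative infinity as the probability assigned to a true statement shrinks toward 0.

```python
import math

# The closer to 0 the probability you assign to a statement that turns out true,
# the further the log-2 score falls toward negative infinity.
for p in (0.1, 0.01, 1e-6, 1e-12):
    print(p, math.log2(p))
# math.log2(0) itself raises a ValueError; the score is -infinity only as a limit.
```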
Replies from: VincentYu
↑ comment by VincentYu · 2011-10-25T00:18:27.539Z · LW(p) · GW(p)
If P(A) = 1, then P(~A) = 0
That only works if the agent's beliefs have that kind of consistency. If it is taken for granted that this scoring only applies for agents with completely consistent beliefs (including complete satisfaction of Bayes' theorem), then I don't think this scoring can be applied to any human.
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-25T00:22:45.850Z · LW(p) · GW(p)
Hmm, I don't know about that. P(a) + P(~a) = 1 seems like something humans do all right with. But of course humans don't really use numbers in the first place; that does not matter, though. Bayes has been formalized with simple degrees of confidence like: lots, all, not very much, none.
But if you're right, then I'll give up the point and simply penalize false claims.
But take note that if humans don't have the consistency to satisfy P(a) + P(~a) = 1, they most certainly don't have the consistency to satisfy P(a) = 1 either. So no, you could not get a perfect score by setting all your beliefs to 1, because you can't set all your beliefs to 1.
Replies from: VincentYu
↑ comment by VincentYu · 2011-10-25T01:32:32.709Z · LW(p) · GW(p)
But take note that if humans don't have the consistency to satisfy P(a) + P(~a) = 1, they most certainly don't have the consistency to satisfy P(a) = 1 either. So no, you could not get a perfect score by setting all your beliefs to 1, because you can't set all your beliefs to 1.
I don't follow the argument. Perhaps we mean different things by 'consistency'? By consistent beliefs, I meant a set of beliefs that cannot be used to derive a contradiction with the usual probability axioms. I was not making a claim about how humans come to believe things.
ETA: About this:
P(a) + P(~a) = 1 seems like something humans do alright with.
I think you place too much trust in the consistency of human beliefs. In fact, I wouldn't trust myself with that. Suppose you ask me to assign subjective probabilities to 50 statements. Immediately afterwards, you give me a list of the negations of these 50 statements. I'm pretty sure I'll violate P(a) + P(~a) = 1 at least once.
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-25T02:06:51.633Z · LW(p) · GW(p)
But you'll probably violate it within some reasonable error range. I doubt you would ever assign a combined 150% to a and ~a if you actually performed this test. And still, 1/50 isn't bad.
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-25T00:05:52.680Z · LW(p) · GW(p)
Updated.
comment by Shmi (shminux) · 2011-10-24T23:22:03.715Z · LW(p) · GW(p)
A point of math: log(a) < 0 when 0 < a < 1, so your proposed measure is negative and gets smaller and smaller the more "epistemically successful" you are. Is this really your intention? To clarify a bit, the sum of logs equals the log of the product, and a product of probabilities tends to zero as you pile on more of them (cf. the Conjunction fallacy).
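A short numeric sketch of this point (invented probabilities): the sum of the logs always equals the log of the product, and both keep falling as more claims are piled on.

```python
import math

probs = [0.9, 0.8, 0.95, 0.7, 0.85]  # made-up probabilities assigned to five true claims

running_product = 1.0
running_sum = 0.0
for p in probs:
    running_product *= p
    running_sum += math.log2(p)
    # Sum of logs equals log of the product at every step.
    assert math.isclose(running_sum, math.log2(running_product))

print(running_sum)  # about -1.3, and it only gets more negative with more claims
```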
Replies from: ronny-fernandez, ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-24T23:28:55.700Z · LW(p) · GW(p)
I'll change it around a bit to include that. Thanks
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-10-24T23:27:28.779Z · LW(p) · GW(p)
Yes, of course it is. The more you claim, the more opportunity for failure. Obviously the person with the least negative score is the person with the highest bayesian competence. But perhaps that should be weighted by the number of beliefs you assign anything to. However, assuming all English speakers have access to about the same number of sentences, and that they assign probabilities to all of them, I stand by the original formulation.
comment by Jayson_Virissimo · 2011-10-25T06:49:34.734Z · LW(p) · GW(p)
Something about this seems wrong, because it doesn't take into account the significance of the propositions the agent believes. What use is having a high proportion of true to false beliefs if they are about things that don't matter?