Assumption of positive rationality

post by AlexMennen · 2011-04-28T16:41:50.161Z · LW · GW · Legacy · 4 comments

Let's pretend for the sake of simplicity that all belief-holding entities are either rational or irrational. Rational entities have beliefs that correlate well with reality, and update their beliefs properly on evidence. Irrational entities have beliefs that do not correlate with reality at all, and update their beliefs randomly.

Now suppose Bob wants to know the probability that he is rational. He estimates that someone whose thought process feels from the inside the way his does is 70% likely to be rational and 30% likely to be irrational. Unfortunately, this does not help much. If Bob is irrational, then his estimate is useless. If Bob is rational, then, after updating on the observation that one randomly selected Bob-like entity is rational, we can estimate that the probability of another randomly selected Bob-like entity being rational is higher than 70% (the exact value depending on the uncertainty regarding what percentage of Bob-like entities are rational).

But Bob doesn't care whether a randomly selected Bob-like entity is rational; he wants to know whether he is rational. And conditional on Bob's attempts to figure it out being effective, the probability of that is 1 by definition. Conditional on Bob being irrational, he cannot give meaningful estimates of the probability of much of anything. Thus, even if we ignore the difficulty of coming up with a prior, if Bob tries to evaluate evidence regarding whether or not he is rational, he ends up with:
P(evidence given Bob is rational) = x (he can figure it out)
P(evidence given Bob is irrational) = ?
I am not aware of any good ways to do Bayesian reasoning with question marks. It seems that Bob cannot meaningfully estimate the probability that he is rational. However, in a decision-theoretic sense, this is not really a problem for him: Bob cannot be an effective decision agent if his beliefs about how to achieve his objectives are uncorrelated with reality, so he has no expected utility invested in the possibility that he is irrational. All he needs are probabilities conditional on him being rational, and those are exactly what he has.
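
Spelled out, the update Bob would like to perform is just Bayes' theorem, with the question mark sitting where $P(E|I)$ should be (writing $R$ for "Bob is rational", $I$ for "Bob is irrational", and $E$ for the evidence):

$$P(R \mid E) = \frac{P(E \mid R)\,P(R)}{P(E \mid R)\,P(R) + P(E \mid I)\,P(I)} = \frac{x\,P(R)}{x\,P(R) + \;?\;\cdot P(I)}$$

No choice of prior rescues this: the unknown term sits in the denominator, so the posterior is undefined.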

This does not seem to extend well to further increases in rationality. If you act on the assumption that you are immune to some common cognitive bias, you will just fail at life. However, I can think of one real-life application of this principle: the possibility that you are a Boltzmann brain. A Boltzmann brain would have no particular reason to have correct beliefs or good algorithms for evaluating evidence. When people talk about the probability that they are a Boltzmann brain, they often mention things like the fact that our sensory input is way more well-organized than it should be for almost all Boltzmann brains; but if you are a Boltzmann brain, then how are you supposed to know how well-organized your visual field should be? Is there any meaningful way someone can talk about the probability of em being a Boltzmann brain, or does ey just express all other probabilities as conditional on em not being a Boltzmann brain?

4 comments

comment by CuSithBell · 2011-04-28T18:32:54.415Z · LW(p) · GW(p)

I dunno. Bob seems to be in a pickle. My advice for him would probably be to just try to find the Wumpus or whatever he's up to and hope for the best.

However! I think the simplifying assumption you made is what makes his position so helpless. In reality, irrationality is not total, and our states of mind are entangled with the physical world in robust ways that allow us to check things.

comment by Manfred · 2011-04-28T17:19:30.935Z · LW(p) · GW(p)

The trouble with P(evidence given Bob is irrational) isn't that it's inherently impossible to find; it's that you haven't given quite enough information to have a satisfying answer. There are multiple ways that irrational people could work. They could have a truly random set of beliefs, in which case you can just check for consistency. Or they could have a 50/50 chance of answering yes or no to any self-posed question, in which case they might think they were consistent. If we restrict the ways of being irrational to just those two possibilities (there are others, but these seem most likely from your description), then even if we don't know what the real mixture is, we can just take the average to get P(E|I) = 0.25.

That 0.25 leaves off a literally negligible exponential term, but I should note that the exponential term becomes all there is when talking about Boltzmann brains, and so becomes important.
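
For concreteness, here is a minimal sketch of the arithmetic this implies, assuming the evidence E is "Bob's beliefs pass his own consistency check", that a rational Bob passes the check with certainty, and the post's illustrative 70% prior (none of these numbers are more than toys):

```python
# Sketch of the mixture argument above; all numbers are illustrative.
# E = "Bob's beliefs pass his own consistency check".

p_rational = 0.7          # the post's 70/30 estimate for Bob-like entities
p_e_given_rational = 1.0  # assume a rational Bob always passes the check

# Two hypothesized irrational types, mixed 50/50:
#   1. truly random beliefs: passing a consistency check would take an
#      exponentially unlikely coincidence, approximated here as 0
#   2. a 50/50 yes/no answer to any self-posed question, including
#      "am I consistent?": passes with probability 0.5
p_e_given_irrational = 0.5 * 0.0 + 0.5 * 0.5  # = 0.25

posterior = (p_e_given_rational * p_rational) / (
    p_e_given_rational * p_rational
    + p_e_given_irrational * (1 - p_rational)
)
print(posterior)  # ~0.903: passing the check is modest evidence of rationality
```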

comment by shokwave · 2011-04-30T15:07:54.680Z · LW(p) · GW(p)

Well, yeah: if you simplify "rational" down to "probability estimates", then attempting to give a probability estimate of how rational you are is going to give some weird results.

(Analogously, if you had a computer program that did Bayesian calculations, and you asked it to take in these pieces of evidence and calculate the posterior probability that it performed a Bayesian calculation to get this posterior probability, you'd get similar weird results.)
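
A toy version of that self-referential query might look like the following sketch; the bayes_posterior helper and all of the numbers are invented for illustration:

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """Ordinary Bayes update: returns P(H | E)."""
    return (p_e_given_h * prior) / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )

# The self-referential query: let H be "this very call arrived at its
# output via a correct Bayesian calculation".  The function dutifully
# returns a number, but that number is only trustworthy conditional on
# H already holding: the answer presupposes the hypothesis it is
# supposed to assess.
p = bayes_posterior(prior=0.7, p_e_given_h=0.9, p_e_given_not_h=0.3)
print(p)  # ~0.875, meaningful only if the calculation really was Bayesian
```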

It's a case of presupposition. If you want more on Boltzmann brains, I think you need to delve into anthropics.

I much prefer to measure rationality as "probability that a randomly chosen belief matches reality". That way, your rationality is a probability, you update it whenever you get something wrong or right, and it's a good prior for quick judgments.
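
One concrete way to keep such a running estimate (my choice of bookkeeping, not something the comment specifies) is a Beta-Bernoulli model: start with a Beta prior over your hit rate and bump a counter each time a belief is checked against reality. A minimal sketch:

```python
class RationalityTracker:
    """Track P(a randomly chosen belief matches reality) as a Beta posterior.

    The Beta-Bernoulli model and the uniform Beta(1, 1) prior are
    assumptions made for this sketch, not part of the comment.
    """

    def __init__(self, right=1, wrong=1):
        # Beta(right, wrong) prior; (1, 1) is uniform over [0, 1]
        self.right = right
        self.wrong = wrong

    def update(self, belief_was_correct):
        """Bump the counts whenever a belief is checked against reality."""
        if belief_was_correct:
            self.right += 1
        else:
            self.wrong += 1

    @property
    def rationality(self):
        """Posterior mean: the current hit-rate estimate, usable as a prior."""
        return self.right / (self.right + self.wrong)


tracker = RationalityTracker()
for outcome in [True, True, False, True]:
    tracker.update(outcome)
print(tracker.rationality)  # 4/6 ~= 0.67 after three hits and one miss
```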

comment by Caerbannog · 2011-04-30T03:45:32.047Z · LW(p) · GW(p)

As you describe them, an irrational Bob's beliefs are random, while a rational Bob bases his beliefs on evidence and updates them accordingly. If he is trying to use a systematic method to determine his degree of rationality, or even trying to devise one, doesn't that automatically make him rational (even if only by his own definition of 'rational')?

Regarding Boltzmann brains:

If I'm not a Boltzmann brain, then my current understanding of the subject of Boltzmann brains correctly tells me that the overwhelmingly vast majority of Boltzmann brains would not have memories and thoughts with the degree of coherence that I do have.

On the other hand, if I am a Boltzmann brain, then I'm either one of the vanishingly rare coherent ones and can trust my thoughts, or I'm so disconnected from reality that I cannot trust my conclusions about the degree of coherence of my thoughts and experiences (like irrational Bob, I suppose).

But if I am an incoherent Boltzmann brain, I am one that can still think of the concept of a Boltzmann brain (and ponder the degree of coherence of my thoughts). I would say that argues against the likelihood of me being an incoherent Boltzmann brain. This leaves the options that I'm either not a Boltzmann brain, or that I'm a relatively coherent one.