Confusions Concerning Pre-Rationality
post by abramdemski · 2018-05-23T00:01:39.519Z · LW · GW · 29 comments
Robin Hanson's Uncommon Priors Require Origin Disputes is a short paper with, according to me, a surprisingly high ratio of interesting-moves-made per character. It is not clearly right, but it merits some careful consideration. If it is right, it offers strong reason in support of the common prior assumption, which is a major crux of certain modest-epistemology-flavored arguments.
Wei Dai wrote two [LW · GW] posts [LW · GW] reviewing the concepts in the paper and discussing problems/implications. I recommend reviewing those before reading the present post, and possibly the paper itself as well.
Robin Hanson's notion of pre-rationality is: an agent's counterfactual beliefs should treat the details of its creation process like an update. If the agent is a Bayesian robot with an explicitly programmed prior, then the agent's distribution after conditioning on any event "programmer implements prior p" should be exactly p.
These beliefs are "counterfactual" in that agents are typically assumed to already know their priors, so the conditional probability above is not well-defined for any choice of p other than the agent's true prior. This fact leads to a major complication in the paper: the pre-rationality condition is instead stated in terms of hypothetical "pre-agents" which have "pre-priors" encoding the agent's counterfactual beliefs about what the world would have been like if the agent had been given a different prior. (I'm curious what happens if we drop the assumption that agents know their own priors, so that pre-rationality can be represented within the agent's own prior.)
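To fix notation (my own shorthand, which glosses over some of the paper's care with self-reference): write $q$ for the agent's pre-prior, $P$ for the prior the creation process ends up installing (a random variable from the pre-prior's point of view), and $A$ for an ordinary event. The pre-rationality condition is then, roughly,

$q(A \mid P = p) = p(A)$ for every event $A$ and every prior $p$ the process might have installed.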
Wei Dai offers an example in which a programmer flips a coin to determine whether a robot believes coin-flips to have probability 2/3 or 1/3. Pre-rationality seems like an implausible constraint to put on this robot, because the programmer's coin-flip is not a good reason to form such expectations about other coins.
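Concretely (the labels here are mine): let $C$ be the programmer's coin and $X$ a later coin the robot will reason about, and suppose the creation rule is "if $C$ lands heads, install a prior with $p(X{=}\text{heads}) = 2/3$; if tails, install one with $p(X{=}\text{heads}) = 1/3$". Pre-rationality then demands that the robot's counterfactual beliefs satisfy $q(X{=}\text{heads} \mid C{=}\text{heads}) = 2/3$ and $q(X{=}\text{heads} \mid C{=}\text{tails}) = 1/3$; that is, the programmer's coin must be treated as genuine evidence about unrelated coins, which is exactly the implausible-looking part.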
Wei Dai seems to be arguing against a position which Robin Hanson isn't quite advocating. Wei Dai's accusation is that pre-rationality implies a belief that the process which created you was itself a rational process, which is not always plausible. Indeed, it's easy to see this interpretation in the math. However, Robin Hanson's response [LW(p) · GW(p)] indicates that he doesn't see it:
I just don't see pre-rationality being much tied to whether you in fact had a rational creator. The point is, as you say, to consider the info in the way you were created.
Unfortunately, the discussion in the comments doesn't go any further on this point. However, we can make some inferences about Robin Hanson's position from the paper itself.
Robin Hanson does not discuss the robot/programmer example; instead, he discusses the possibility that people have differing priors due to genetic factors. Far from claiming people are obligated by rationality principles to treat inherited priors as rational, Robin Hanson says that because we know some randomness is involved in Mendelian inheritance, we can't both recognize the arbitrariness of our prior's origin and stick with that prior. Quoting the paper on this point:
Mendel’s rules of genetic inheritance, however, are symmetric and random between siblings. If optimism were coded in genes, you would not acquire an optimism gene in situations where optimism was more appropriate, nor would your sister’s attitude gene track truth any worse than your attitude gene does.
Thus it seems to be a violation of pre-rationality to, conditional on accepting Mendel’s rules, allow one’s prior to depend on individual variations in genetically-encoded attitudes. Having your prior depend on species-average genetic attitudes may not violate pre-rationality, but this would not justify differing priors within a species.
Robin Hanson suggests that pre-rationality is only plausible conditional on some knowledge we have gained throughout our lifetime about our own origins. He posits a sentence B which contains this knowledge, and suggests that the pre-rationality condition can be relativized to B. In the above-quoted case, B would consist of Mendelian inheritance and the genetics of optimism. Robin Hanson is not saying that genetic inheritance of optimism or pessimism is a rational process, but rather, he is saying that once we know about these genetic factors, we should adjust our pessimism or optimism toward the species average. After performing this adjustment, we are pre-rational: we consider any remaining influences on our probability distribution to have been rational.
Wei Dai's argument might be charitably interpreted as objecting to this position by offering a concrete case in which a rational agent does not update to pre-rationality in this way: the robot has no motivation to adjust for the random noise in its prior, despite its recognition of the irrationality of the process by which it inherited this prior. However, I agree with Robin Hanson that this is intuitively quite problematic, even if no laws of probability are violated. There is something wrong with the robot's position, even if the robot lacks cognitive tools to escape this epistemic state.
However, Wei Dai does offer a significant response to this: he complains that Robin Hanson says too little about what the robot should do to become pre-rational from its flawed state. The pre-rationality condition provides no guidance for the robot. As such, what guidance can pre-rationality offer to humans? Robin Hanson's paper admits that we have to condition on B to become pre-rational, but offers no account whatsoever about the structure of this update. What normative structure should we require of priors so that an agent becomes pre-rational when conditioned on the appropriate B?
Here is the text of Wei Dai's sage complaint:
Assuming that we do want to be pre-rational, how do we move from our current non-pre-rational state to a pre-rational one? This is somewhat similar to the question of how do we move from our current non-rational (according to ordinary rationality) state to a rational one. Expected utility theory says that we should act as if we are maximizing expected utility, but it doesn't say what we should do if we find ourselves lacking a prior and a utility function (i.e., if our actual preferences cannot be represented as maximizing expected utility).
The fact that we don't have good answers for these questions perhaps shouldn't be considered fatal to pre-rationality and rationality, but it's troubling that little attention has been paid to them, relative to defining pre-rationality and rationality. (Why are rationality researchers more interested in knowing what rationality is, and less interested in knowing how to be rational? Also, BTW, why are there so few rationality researchers? Why aren't there hordes of people interested in these issues?)
I find myself in the somewhat awkward position of agreeing strongly with Robin Hanson's intuitions here, but also having no idea how it should work. For example, suppose that we have a robot whose probabilistic beliefs are occasionally modified by cosmic rays. These modification events can be thought of as the environment writing a new "prior" into the agent. We cannot perfectly safeguard the agent against this, but we can write the agent's probability distribution such that so long as it is not too damaged, it can self-repair when it sees evidence that its beliefs have been modified by the environment. This seems like an updating-to-pre-rationality move, with "a cosmic ray hit you in this memory cell" playing the role of B.
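Here is a toy sketch of what that self-repair move might look like. Everything in it (the names, the numbers, and the repair rule) is my own invention for illustration, not anything from the paper:

```python
# Belief cells store log-odds; a log of legitimate updates lets the robot
# rebuild a cell from its intended prior if the environment rewrites it.
INTENDED_PRIOR = 0.0  # log-odds the designers meant to install

cells = {"coin_is_fair": INTENDED_PRIOR}  # hypothetical belief cell
update_log = []                           # (cell, log-likelihood-ratio) pairs

def update(cell, llr):
    """Ordinary Bayesian update, in log-odds form."""
    cells[cell] += llr
    update_log.append((cell, llr))

def repair(cell):
    """On evidence B = 'a cosmic ray hit this cell', discard the arbitrary
    influence: rebuild the cell from the intended prior plus logged evidence."""
    cells[cell] = INTENDED_PRIOR + sum(llr for c, llr in update_log if c == cell)

update("coin_is_fair", 1.2)      # sees some ordinary evidence
cells["coin_is_fair"] += 7.0     # cosmic ray silently rewrites the "prior"
repair("coin_is_fair")           # B is observed; undo the modification
print(cells["coin_is_fair"])     # 1.2 again
```

The analogy is loose, but the move has the same shape: conditioning on B ("a cosmic ray hit you in this memory cell") licenses treating part of the current belief state as noise to be discarded rather than as evidence to be respected.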
Similarly, it seems reasonable to do something like average beliefs with someone if you discover that your differing beliefs are due only to genetic chance. Yet, it does not seem similarly reasonable to average values, despite the distinction between beliefs and preferences being somewhat fuzzy [LW · GW].
This is made even more awkward by the fact that Robin Hanson has to create the whole pre-prior framework in order to state his new rationality constraint.
The idea seems to be that a pre-prior is not a belief structure which an actual agent has, but rather, is a kind of plausible extrapolation of an agent's belief structure which we layer on top of the true belief structure in order to reason about the new rationality constraint. If so, how could this kind of rationality constraint be compelling to an agent? The agent itself doesn't have any pre-prior. Yet, if we have an intuition that Robin Hanson's argument implies something about humans, then we ourselves are agents who find arguments involving pre-priors to be relevant.
Alternatively, pre-priors could be capturing information about counterfactual beliefs which the agent itself has. This seems less objectionable, but it brings in tricky issues of counterfactual reasoning. I don't think this is likely to be the right path to properly formalizing what is going on, either.
I see two clusters of approaches:
- What rationality conditions might we impose on a Bayesian agent such that it updates to pre-rationality given "appropriate" B? Can we formalize this purely within the agent's own prior, without the use of pre-priors?
- What can we say about agents becoming rational from irrational positions? What should agents do when they notice Dutch Books against their beliefs, or money-pumps against their preferences? (Logical Induction is a somewhat helpful story about the former, but not the latter.) Can we characterize the sort of agent who is the intended receiver of decision-theoretic arguments such as the VNM theorem, and who would find such arguments compelling? If we can produce anything in this direction, can it say anything about Robin Hanson's arguments concerning pre-rationality? Does it give a model, or can it be modified to give a model, of updating to pre-rationality?
It seems to me that there is something interesting going on here, and I wish that there were more work on Hansonian pre-rationality and Wei Dai's objection.
29 comments
comment by RobinHanson · 2018-06-05T08:55:10.909Z · LW(p) · GW(p)
The problem of how to be rational is hard enough that one shouldn’t expect to get good proposals for complete algorithms for how to be rational in all situations. Instead we must chip away at the problem. And one way to do that is to slowly collect rationality constraints. I saw myself as contributing by possibly adding a new one. I’m not very moved by the complaint “but what is the algorithm to become fully rational from any starting point?” as that is just too hard a problem to solve all at once.
Replies from: abramdemski
↑ comment by abramdemski · 2018-06-06T23:02:48.589Z · LW(p) · GW(p)
I don't think "what is the algorithm to become fully rational from any starting point?" is a very good characterization. It is not possible to say anything of interest for any starting point whatsoever. I read Wei Dai as instead asking about the example he provided, where a robot is fully rational in the standard Bayesian sense (by which I mean, violates no laws of probability theory or of expected utility theory), but not pre-rational. It is then interesting to ask whether we can motivate such an agent to be pre-rational, or, failing that, give some advice about how to modify a non-pre-rational belief to a pre-rational one (setting the question of motivation aside).
Speaking for myself, I see this as analogous to the question of how one reaches equilibrium in game theory. Nash equilibrium comes from certain rationality assumptions, but the assumptions do not uniquely pin down one equilibrium. This creates the question of how agents could possibly get to be in a state of equilibrium satisfying those rationality assumptions. To this question, there are many interesting answers. However, the consensus of the field seems to be that in fact it is quite difficult to reach a Nash equilibrium in general, and much more realistic to reach correlated equilibria. This suggests that the alternate rationality assumptions underlying correlated equilibria are more realistic, and Nash equilibria are based on rationality assumptions which are overly demanding.
That being said, your paper is analogous to Nash's initial proposal of the Nash equilibrium concept. It would be impractical to ask Nash to have articulated an entire theory of equilibrium selection when he first proposed the equilibrium concept. So, while I do think the question of how one becomes pre-rational is relevant, and the inability to construct such an account would ultimately be evidence against pre-rationality as a rationality constraint, it is not something to demand up-front of proposed rationality constraints.
comment by zulupineapple · 2018-05-23T10:14:03.910Z · LW(p) · GW(p)
The proposition that we should be able to reason about priors of other agents is surely not contentious. The proposition that if I learn something new about my creation, I should update on that information, is also surely not contentious, although there might be some disagreement what that update looks like.
In the case of genetics, if I learned that I'm genetically predisposed to being optimistic, then I would update my beliefs the same way I would update them if I had performed a calibration and found my estimates consistently too high. That is unless I've performed calibrations in the past and know myself to be well calibrated. In that case the genetic predisposition isn't giving me any new information - I've already corrected for it. This, again, surely isn't contentious.
Although I have no idea what this has to do with "species average". Yes, I have no reason to believe that my priors are better than everybody else's, but I also have no reason to believe that the "species average" is better than my current prior (there is also the problem that "species" is an arbitrarily chosen category).
But aside from that, I struggle to understand what, in simple terms, is being disagreed about here.
Replies from: abramdemski, Dacyn
↑ comment by abramdemski · 2018-05-24T09:12:37.235Z · LW(p) · GW(p)
The proposition that we should be able to reason about priors of other agents is surely not contentious. The proposition that if I learn something new about my creation, I should update on that information, is also surely not contentious, although there might be some disagreement what that update looks like.
The form is the interesting thing here. By arguing for the common prior assumption, RH is giving an argument in favor of a form of modest epistemology, which Eliezer has recently written so much against.
In the case of genetics, if I learned that I'm genetically predisposed to being optimistic, then I would update my beliefs the same way I would update them if I had performed a calibration and found my estimates consistently too high. That is unless I've performed calibrations in the past and know myself to be well calibrated. In that case the genetic predisposition isn't giving me any new information - I've already corrected for it. This, again, surely isn't contentious.
In Eliezer's view, because there are no universally compelling arguments [LW · GW] and recursive justification has to hit bottom [LW · GW], you don't give up your prior just because you see that there was bias in the process which created it -- nothing can be totally justified by an infinite chain of unbiased steps anyway. This means, concretely, that you shouldn't automatically take the "outside view" on the beliefs you have which others are most likely to consider crazy; their disbelief is little evidence, if you have a strong inside view reason why you can know better than them.
In RH's view, honest truth-seeking agents with common priors should not knowingly disagree (citation: are disagreements honest?). Since a failure of the common prior assumption entails disagreement about the origin of priors (thanks to the pre-rationality condition), and RH thinks disagreement about the origin of priors should rarely be relevant for disagreements about humans, RH thinks honest truth-seeking humans should not knowingly disagree.
I take it RH thinks some averaging should happen somewhere as a result of that, though I am not entirely sure. This would contradict Eliezer's view.
Although I have no idea what this has to do with "species average". Yes, I have no reason to believe that my priors are better than everybody else's, but I also have no reason to believe that the "species average" is better than my current prior (there is also the problem that "species" is an arbitrarily chosen category).
The wording in the paper makes me think RH was intending "species" as a line beyond which the argument might fail, not one he's necessarily going to draw (IE, he might concede that his argument can't support a common prior assumption with aliens, but he might not concede that).
I think he does take this to be reason to believe the species average is better than your current prior, to the extent they differ.
But aside from that, I struggle to understand what, in simple terms, is being disagreed about here.
I see several large disagreements.
- Is the pre-rationality condition a true constraint on rationality? RH finds it plausible; WD does not. I am conflicted.
- If the pre-rationality argument makes sense for common probabilities, does it then make sense for utilities? WD thinks so; RH thinks not.
- Does pre-rationality imply a rational creator? WD thinks so; RH thinks not.
- Does the pre-prior formalism make sense at all? Can rationality conditions stated with use of pre-priors have any force for ordinary agents who do not reason using pre-priors? I think not, though I think there is perhaps something to be salvaged out of it.
- Does the common prior assumption make sense in practice?
- Should honest truth-seeking humans knowingly disagree?
↑ comment by zulupineapple · 2018-05-24T18:55:33.932Z · LW(p) · GW(p)
Does the common prior assumption make sense in practice?
I don't know what "make sense" means. When I said "in simple terms", I meant that I want to avoid that sort of vagueness. The disagreement should be empirical. It seems that we need to simulate an environment with a group of bayesians with different priors, then somehow construct another group of bayesians that satisfy the pre-rationality condition, and then the claim should be that the second group outperforms the first group in accuracy. But I don't think I saw such claims in the paper explicitly. So I continue to be confused, what exactly the disagreement is about.
Should honest truth-seeking humans knowingly disagree?
Big question, I'm not going to make big claims here, though my intuition tends to say "yes". Also, "should" is a bad word, I'm assuming that you're referring to accuracy (as in my previous paragraph), but I'd like to see these things stated explicitly.
you don't give up your prior just because you see that there was bias in the process which created it
Of course not. But you do modify it. What is RH suggesting?
Is the pre-rationality condition a true constraint on rationality? RH finds it plausible; WD does not. I am conflicted.
"True" is a bad word, I have no idea what it means.
If the pre-rationality argument makes sense for common probabilities, does it then make sense for utilities? WD thinks so; RH thinks not.
RH gives a reasonable argument here [LW(p) · GW(p)], and I don't see much of a reason why we would do this to utilities in the first place.
Does pre-rationality imply a rational creator? WD thinks so; RH thinks not.
I see literally nothing in the paper to suggest anything about this. I don't know what WD is talking about.
Replies from: abramdemski
↑ comment by abramdemski · 2018-05-26T01:41:11.765Z · LW(p) · GW(p)
I don't know what "make sense" means. When I said "in simple terms", I meant that I want to avoid that sort of vagueness. The disagreement should be empirical. It seems that we need to simulate an environment with a group of bayesians with different priors, then somehow construct another group of bayesians that satisfy the pre-rationality condition, and then the claim should be that the second group outperforms the first group in accuracy. But I don't think I saw such claims in the paper explicitly. So I continue to be confused, what exactly the disagreement is about.
Ah, well, I think the "should honest truth-seeking humans knowingly disagree?" question is the practical form of this for RH.
(Or, even more practically, practices around such disagreements. Should a persistent disagreement be a sign of dishonesty? (RH says yes.) Should we drop beliefs which others persistently disagree with? Et cetera.)
Big question, I'm not going to make big claims here, though my intuition tends to say "yes".
Then (according to RH), you have to deal with RH's arguments to the contrary. Specifically, his paper is claiming that you have to have some origin disputes about other people's priors.
Although that's not why I'm grappling with his argument. I'm not sure whether rational truth-seekers should persistently disagree, but I'm very curious about some things going on in RH's argument.
Also, "should" is a bad word, I'm assuming that you're referring to accuracy (as in my previous paragraph), but I'd like to see these things stated explicitly.
I think "should" is a good word to sometimes taboo (IE taboo quickly if there seems to be any problem), but I don't see that it needs to be an always-taboo word.
"True" is a bad word, I have no idea what it means.
I'm also pretty unclear on what it could possibly mean here, but nonetheless think it is worth debating. Not only is there the usual problem of spelling out what it means for something to be a constraint on rationality, but now there's an extra weird thing going on with setting up pre-priors, which aren't probabilities that you ever use to make decisions.
I see literally nothing in the paper to suggest anything about this. I don't know what WD is talking about.
I think I know what WD is talking about, but I agree it isn't what RH is really trying to say.
Replies from: zulupineapple
↑ comment by zulupineapple · 2018-05-26T09:20:43.266Z · LW(p) · GW(p)
the usual problem of spelling out what it means for something to be a constraint on rationality
Is that a problem? What's wrong with "believing true things", or, more precisely, "winning bets"? (obviously, these need to be prefixed with "usually" and "across many possible universes"). If I'm being naive and these don't work, then I'd love to hear about it.
But if they do work, then I really want to see how the idea of pre-rationality is supposed to help me believe more true things and win more bets. I legitimately don't understand how it would.
Should honest truth-seeking humans knowingly disagree?
My intuition says "yes" in large part due to the word "humans". I'm not certain whether two perfect bayesians should disagree, for some unrealistic sense of "perfect", but even if they shouldn't, it is uncertain that this would also apply to more limited agents.
Replies from: abramdemski
↑ comment by abramdemski · 2018-05-28T03:36:21.442Z · LW(p) · GW(p)
Is that a problem? What's wrong with "believing true things", or, more precisely, "winning bets"?
- Winning bets is not literally the same thing as believing true things, nor is it the same thing as having accurate beliefs, or being rational. I think Dutch-book arguments are ... not exactly mistaken, but misleading, for this reason. It is not true that the only reason to have probabilistically coherent beliefs is to avoid reliably losing bets. If that were the case, we could throw rationality out the window whenever bets aren't involved. I think betting is both a helpful illustrative thought experiment (dutch books illustrate irrationality) and a helpful tool for practicing rationality, but not synonymous with rationality.
- "Believing true things" is problematic for several reasons. First, that is apparently entirely focused on epistemic rationality, excluding instrumental rationality. Second, although there are many practical cases where it isn't a problem, there is a question of what "true" means, especially for high-level beliefs about things like tables and chairs which are more like conceptual clusters rather than objective realities. Third, even setting those aside, it is hard to see how we can get from "believing true things" to Bayes' Law and other rules of probabilistic reasoning. I would argue that a solid connection between "believe true things" and rationality constraints of classical logic can be made, but probabilistic reasoning requires an additional insight about what kind of thing can be a rationality constraint: you don't just have beliefs, you have degrees of belief. We can say things about why degrees of belief might be better or worse, but to do so requires a notion of quality of belief which goes beyond truth alone; you are not maximizing the expected amount of truth or anything like that.
- Another possible answer, which you didn't name but which might have been named, would be "rationality is about winning". Something important is meant by this, but the idea is still vague -- it helps point toward things that do look like potential rationality constraints and away from things which can't serve as rationality constraints, but it is not the end of the story of what we might possibly mean by calling something a constraint of rationality.
My intuition says "yes" in large part due to the word "humans". I'm not certain whether two perfect bayesians should disagree, for some unrealistic sense of "perfect", but even if they shouldn't, it is uncertain that this would also apply to more limited agents.
Most of my probability mass is on you being right here, but I find RH's arguments to the contrary intriguing. It's not so much that I'm engaging with them in the expectation that I'll change my mind about whether honest truth seeking humans can knowingly disagree. (Actually I think I should have said "can" all along rather than "should" now that I think about it more!) I do, however, expect something about the structure of those disagreements can be understood more thoroughly. If ideal Bayesians always agree, that could mean understanding the ways that Bayesian assumptions break down for humans. If ideal Bayesians need not agree, it might mean understanding that better.
I really want to see how the idea of pre-rationality is supposed to help me believe more true things and win more bets. I legitimately don't understand how it would.
I think I can understand this one, to some extent. Supposing that some version of pre-rationality does work out, and if I, hypothetically, understood pre-rationality extremely well (better than RH's paper explains it)... I would expect more insights into at least one of the following:
- agents reasoning about their own prior (what is the structure of the reasoning? to what extent can an agent approve of, or not approve of, its own prior? are there things which can make an agent decide its own prior is bad? what must an agent believe about the process which created its prior? What should an agent do if it discovers that the process which created its prior was biased, or systematically not truth-seeking, or otherwise 'problematic'?)
- common knowledge of beliefs (is it realistic for beliefs to be common knowledge? when? are there more things to say about the structure of common knowledge, which help reconcile the usual assumption that an agent knows its own prior with the paradoxes of self-reference which prevent agents from knowing themselves so well?)
- what it means for an agent to have a prior (how do we designate a special belief-state to call the prior, for realistic agents? can we do so at all in the face of logical uncertainty? is it better to just think in terms of a sequence of belief states, with some being relatively prior to others? can we make good models of agents who are becoming rational as they are learning, such that they lack an initial perfectly rational prior?)
- reaching agreement with other agents (by an Aumann-like process or otherwise; by bringing in origin disputes or otherwise)
- reasoning about one's own origins (especially in the sense of justification structures; endorsing or not endorsing the way one's beliefs were constructed or the way those beliefs became what they are more generally).
↑ comment by zulupineapple · 2018-05-28T09:54:44.744Z · LW(p) · GW(p)
Winning bets is not literally the same thing as believing true things, nor is it the same thing as having accurate beliefs, or being rational.
They are not the same, but that's ok. You asked about constraints on, not definitions of rationality. This may not be an exhaustive list, but if someone has an idea about rationality that translates neither into winning some hypothetical bets, nor into having even slightly more accurate beliefs about anything, then I can confidently say that I'm not interested.
(Of course this is not to say that an idea that has no such applications has literally zero value)
Supposing that some version of pre-rationality does work out, and if I, hypothetically, understood pre-rationality extremely well (better than RH's paper explains it)... I would expect more insights into at least one of the following: <...>
I completely agree that if RH was right, and if you understood him well, then you would receive multiple benefits, most of which could translate into winning hypothetical bets, and into having more accurate beliefs about many things. But that's just the usual effect of learning, and not because you would satisfy the pre-rationality condition.
I continue to not understand in what precise way the agent that satisfies the pre-rationality condition is (claimed to be) superior to the agent that doesn't. To be fair, this could be a hard question, and even if we don't immediately see the benefit, that doesn't mean that there is no benefit. But still, I'm quite suspicious. In my view this is the single most important question, and it's weird to me that I don't see it explicitly addressed.
↑ comment by Dacyn · 2018-05-23T11:12:10.635Z · LW(p) · GW(p)
What is being disagreed about is whether you should update to the species average. If the optimism is about a topic that can't be easily tested, then all the calibration shows is that your estimates are higher than the species average, not that they are too high in an absolute sense.
Replies from: zulupineapple
↑ comment by zulupineapple · 2018-05-23T12:15:33.853Z · LW(p) · GW(p)
Then the question is entirely about whether we expect the species average to be a good predictor. If there is an evolutionary pressure for the species to have correct beliefs about a topic, then we probably should update to the species average (this may depend on some assumptions about how evolution works). But if a topic isn't easily tested, then there probably isn't a strong pressure for it.
Another example: let's replace "species average" with "prediction market price". Then we should agree that updating our prior makes sense, because we expect prediction markets to be efficient in many cases. But if we're talking about "species average", it seems very dubious that it's a reliable predictor. At least, the claim that we should update to the species average depends on many assumptions.
Of course, in the usual Bayesian framework, we don't update to species average. We only observe species average as evidence and then update towards it, by some amount. It sounds like Hanson wants to leave no trace of the original prior, though, which is a bit weird.
Replies from: abramdemski, Dacyn
↑ comment by abramdemski · 2018-05-26T00:44:37.139Z · LW(p) · GW(p)
Of course, in the usual Bayesian framework, we don't update to species average. We only observe species average as evidence and then update towards it, by some amount. It sounds like Hanson wants to leave no trace of the original prior, though, which is a bit weird.
Actually, as Wei Dai explained here [LW · GW], the usual Bayesian picture is even weaker than that. You observe the species average and then you update somehow, whether it be toward, away, or even updating to the same number. Even if the whole species is composed of perfect Bayesians, Aumann Agreement does not mean you just update toward each other until you agree; what the proof actually implies is that you dance around in a potentially quite convoluted way until you agree. So, there's no special reason to suppose that Bayesians should update toward each other in a single round of updating on each other's views.
↑ comment by Dacyn · 2018-05-23T12:32:36.095Z · LW(p) · GW(p)
The idea (as far as I understand it) is supposed to be something like: if you don't think there is an evolutionary pressure for the species you are in to have correct beliefs about a topic, then why do you trust your own beliefs? To the degree that your own beliefs are trustworthy, it is because there is such an evolutionary pressure. Thus, switching to the species average just reduces the noise while not compromising the source of trustworthiness.
Replies from: zulupineapple
↑ comment by zulupineapple · 2018-05-23T12:57:45.875Z · LW(p) · GW(p)
If there is no pressure on the species, then I don't particularly trust either the species average or my own prior. They are both very questionable. So, why should I switch from one questionable prior to another? It is a wasted motion.
Consider an example. Let there be N interesting propositions we want to have accurate beliefs about. Suppose that every person, at birth, rolls a six sided die N times and then for every proposition prop_i they set the prior P(prop_i) = dice_roll_i/10. And now you seem to be saying that for me to set P(prop_i) = 0.35 (which is the species average), is in some way better? More accurate, presumably? Because that's the only case where switching would make sense.
Replies from: abramdemski, Dacyn
↑ comment by abramdemski · 2018-05-26T01:05:00.777Z · LW(p) · GW(p)
And now you seem to be saying that for me to set P(prop_i) = 0.35 (which is the species average), is in some way better? More accurate, presumably?
If you have no other information, it does reduce variance, while keeping bias the same. This reduces expected squared error, due to the bias-variance tradeoff [LW · GW].
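Spelling that out (the notation and the model are mine, just to illustrate the decomposition): suppose the quantity being estimated is $t$, and each person's genetically-influenced prior is $X_i = t + b + \epsilon_i$, where $b$ is a bias shared across the species and the $\epsilon_i$ are independent noise terms with mean zero and variance $\sigma^2$. Then an individual's expected squared error is $\mathbb{E}[(X_i - t)^2] = b^2 + \sigma^2$, while the average $\bar{X}$ of $n$ such priors has $\mathbb{E}[(\bar{X} - t)^2] = b^2 + \sigma^2/n$: the shared bias is untouched, but the idiosyncratic noise shrinks.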
Eliezer explicitly argues that this is not a good argument for averaging your opinions with a crowd [LW · GW]. However, I don't like his argument there very much. He argues that squared error is not necessarily the right notion of error, and provides an alternative error function as an example where you escape the conclusion.
However, he relies on giving a nonconvex error function. It seems to me that most of the time, the error function will be convex in practice, as shown in A Pragmatist's Guide to Epistemic Utility by Ben Levinstein.
I think what this means is that given only the two options, averaging your beliefs with those of other people is better than doing nothing at all. However, both are worse than a Bayesian update.
↑ comment by Dacyn · 2018-05-23T15:04:34.981Z · LW(p) · GW(p)
I am having trouble cashing out your example in concrete terms; what kind of propositions could behave like that? More importantly, why would they behave like that?
I realize I said something wrong in my previous comment: evolutionary pressure is not the only kind of reason that someone might think their / their species' beliefs may be trustworthy. For example, you might think that evolutionary pressure causes beliefs to become more accurate when they are about topics relevant to survival/reproduction, and that the uniformity of logic means that the kind of mind that is good at having accurate beliefs on such topics is also somewhat good at having accurate beliefs on other topics. But if you really think that there is NO reason at all that you might have accurate beliefs on a given topic, it seems to me that you do not have beliefs about that topic at all.
Replies from: abramdemski, zulupineapple
↑ comment by abramdemski · 2018-05-26T01:16:17.249Z · LW(p) · GW(p)
But if you really think that there is NO reason at all that you might have accurate beliefs on a given topic, it seems to me that you do not have beliefs about that topic at all.
This doesn't seem true to me.
First, you need to assign probabilities in order to coherently make decisions under uncertainty, even if the probabilities are totally made up. It's not because the probabilities are informative, it's because if your decisions can't be justified by any probability distribution, then you're leaving money on the table somewhere with respect to your own preferences.
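(The textbook illustration, with numbers of my own choosing: suppose you treat 60 cents as a fair price for a ticket that pays $1 if A, and also 60 cents as a fair price for a ticket that pays $1 if not-A. Then you will happily pay $1.20 for the pair, which is guaranteed to pay out exactly $1, a sure loss of 20 cents; and any such sure loss traces back to prices that no single probability distribution would sanction.)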
Second, recursive justification must hit bottom somewhere [LW · GW]. At some point you have to assume something if you're going to prove anything. So, there has to be a base of beliefs which you can't provide justification for without relying on those beliefs themselves.
Perhaps you didn't mean to exclude circular justification, so the recursive-justification-hits-bottom thing doesn't contradict what you were saying. However, I think the first point stands; you sometimes want beliefs (any beliefs at all!) as opposed to no beliefs, even when there is no reason to expect their accuracy.
Replies from: Dacyn
↑ comment by Dacyn · 2018-05-26T09:22:05.226Z · LW(p) · GW(p)
I certainly didn't mean to exclude circular justification: we know that evolution is true because of the empirical and theoretical evidence, which relies on us being able to trust our senses and reasoning, and the reason we can mostly trust our senses and reasoning is because evolution puts some pressure on organisms to have good senses and reasoning.
Maybe what you are saying is useful for an AI but for humans I think the concept of "I don't have a belief about that" is more useful than making up a number with absolutely no justification just so that you won't get Dutch booked. I think evolution deals with Dutch books in other ways (like making us reluctant to gamble) and so it's not necessary to deal with that issue explicitly most of the time.
Replies from: abramdemski
↑ comment by abramdemski · 2018-05-28T03:41:34.520Z · LW(p) · GW(p)
I agree. The concept of "belief" comes apart into different notions in such cases; like, we might explicitly say "I don't have a belief about that" and we might internally be unable to summon any arguments one way or another, but we might find ourselves making decisions nonetheless.
I do think this is somewhat relevant for humans rather than only AI, though. If we find ourselves paralyzed and unable to act because we are unable to form a belief, we will end up doing nothing, which in many cases will be worse than things we would have done had we assigned any probability at all. Needing to make decisions is a more powerful justification for needing probabilities than Dutch books are.
↑ comment by zulupineapple · 2018-05-23T16:18:18.467Z · LW(p) · GW(p)
I am having trouble cashing out your example in concrete terms; what kind of propositions could behave like that? More importantly, why would they behave like that?
The propositions aren't doing anything. The dice rolls represent genetic variation (the algorithm could be less convoluted, but it felt appropriate). The propositions can be anything from "earth is flat", to "I will win a lottery". Your beliefs about these propositions depend on your initial priors, and the premise is that these can depend on your genes.
For example, you might think that evolutionary pressure causes beliefs to become more accurate when they are about topics relevant to survival/reproduction, and that the uniformity of logic means that the kind of mind that is good at having accurate beliefs on such topics is also somewhat good at having accurate beliefs on other topics.
Sure, there are reasons why we might expect the "species average" predictions not to be too bad. But there are better groups. E.g. we would surely improve the quality of our predictions if, while taking the average, we ignored the toddlers, the senile, and the insane. We would improve even more if we only averaged the well educated. And if I myself am an educated and sane adult, then I can reasonably expect that I'm outperforming the "species average", even under your consideration.
But if you really think that there is NO reason at all that you might have accurate beliefs on a given topic, it seems to me that you do not have beliefs about that topic at all.
If I know nothing about a topic, then I have my priors. That's what priors are. To "not have beliefs" is not a valid option in this context. If I ask you for a prediction, you should be able to say something (e.g. "0.5").
Replies from: Dacyn
↑ comment by Dacyn · 2018-05-23T22:20:10.203Z · LW(p) · GW(p)
I think the species average belief for both "earth is flat" and "I will win a lottery" is much less than 0.35. That is why I am confused about your example.
I think Hanson would agree that you have to take a weighted average, and that toddlers should be weighted less highly. But toddlers should agree that they should be weighted less highly, since they know that they do not know much about the world.
If the topic is "Is xzxq kskw?" then it seems reasonable to say that you have no beliefs at all. I would rather say that than say that the probability is 0.5. If the topic is something that is meaningful to you, then the way that the proposition gets its meaning should presumably also let you estimate its likelihood, in a way that bears some relation to accuracy.
Replies from: zulupineapple
↑ comment by zulupineapple · 2018-05-24T07:49:41.007Z · LW(p) · GW(p)
I think the species average belief for both "earth is flat" and "I will win a lottery" is much less than 0.35. That is why I am confused about your example.
Feel free to take more contentious propositions, like "there is no god" or "I should switch in Monty Hall". But, also, you seem to be talking about current beliefs, and Hanson is talking about genetic predispositions, which can be modeled as beliefs at birth. If my initial prior, before I saw any evidence, was P(earth is flat)=0.6, that doesn't mean I still believe that earth is flat. It only means that my posterior is slightly higher than someone's who saw the same evidence but started with a lower prior.
Anyway, my entire point is that if you take many garbage predictions and average them out, you're not getting anything better than what you started with. Averaging only makes sense with additional assumptions. Those assumptions may sometimes be true in practice, but I don't see them stated in Hanson's paper.
I think Hanson would agree that you have to take a weighted average
No, I don't think weighting makes sense in Hanson's framework of pre-agents.
But toddlers should agree that they should be weighted less highly, since they know that they do not know much about the world.
No, idiots don't always know that they're idiots. An idiot who doesn't know it is called a "crackpot". There are plenty of those. Toddlers are also surely often overconfident, though I don't think there is a word for that.
If the topic is "Is xzxq kskw?" then it seems reasonable to say that you have no beliefs at all.
When modeling humans as Bayesians, "having no beliefs" doesn't type check. A prior is a function from propositions to probabilities and "I don't know" is not a probability. You could perhaps say that "Is xzxq kskw?" is not a valid proposition. But I'm not sure why bother. I don't see how this is relevant to Hanson's paper.
Replies from: Dacyn
↑ comment by Dacyn · 2018-05-24T09:15:33.504Z · LW(p) · GW(p)
P(earth is flat)=0.6 isn't a garbage prediction, since it lets people update to something reasonable after seeing the appropriate evidence. It doesn't incorporate all the evidence, but that's a prior for you.
I think God and Monty Hall are both interesting examples. In particular, Monty Hall is interesting because so many professional mathematicians got the wrong answer for it, and God is interesting because people disagree as to who the relevant experts are, as well as what epistemological framework is appropriate for evaluating such a proposition. I don't think I can give you a good answer to either of them (and just to be clear, I never said that I agreed with Hanson's point of view).
Maybe you're right that xzxq is not relevant to Hanson's paper.
Regarding weighting, Hanson's paper doesn't talk about averaging at all so it doesn't make sense to ask whether the averaging that it talks about is weighted. But the idea that all agents would update to a (weighted) species-average belief is an obvious candidate for an explanation for why their posteriors would agree. I realize my previous comments may have obscured this distinction, sorry about that.
Replies from: zulupineapple
↑ comment by zulupineapple · 2018-05-24T18:48:38.960Z · LW(p) · GW(p)
P(earth is flat)=0.6 isn't a garbage prediction, since it lets people update to something reasonable after seeing the appropriate evidence.
What is a garbage prediction, then? P=0 and P=1? When I said "garbage", I meant that it has no relation to the real world; it's about as good as rolling a die to choose a probability.
Replies from: Dacyn
↑ comment by Dacyn · 2018-05-25T10:03:24.921Z · LW(p) · GW(p)
P(I will win the lottery) = 0.6 is a garbage prediction.
Replies from: zulupineapple
↑ comment by zulupineapple · 2018-05-25T10:08:51.984Z · LW(p) · GW(p)
Why? Are there no conceivable lotteries with that probability of winning? (There are, e.g. if I bought multiple tickets). Is there no evidence that we could see in order to update this prediction? (There is, e.g. the number of tickets sold, the outcomes of past lotteries, etc). I continue to not understand what standard of "garbage" you're using.
Replies from: Dacyn
↑ comment by Dacyn · 2018-05-25T11:31:51.287Z · LW(p) · GW(p)
So, I guess it depends on exactly how far back you want to go when erasing your background knowledge to try to form the concept of a prior. I was assuming you still knew something about the structure of the problem, i.e. that there would be a bunch of tickets sold, that you have only bought one, etc. But you're right that you could recategorize those as evidence in which case the proper prior wouldn't depend on them.
If you take this to the extreme you could say that the prior for every sentence should be the same, because the minimum amount of knowledge you could have about a sentence is just "There is a sentence". You could then treat all facts about the number of words in the sentence, the instances in which you have observed people using those words, etc. as observations to be updated on.
It is tempting to say that the prior for every sentence should be 0.5 in this case (in which case a "garbage prediction" would just be one that is sufficiently far away from 0.5 on a log-odds scale), but it is not so clear that a "randomly chosen" sentence (whatever that means) has a 0.5 probability of being true. If by a "randomly chosen" sentence we mean the kinds of sentences that people are likely to say, then estimating the probability of such a sentence requires all of the background knowledge that we have, and we are left with the same problem.
Maybe all of this is an irrelevant digression. After rereading your previous comments, it occurs to me that maybe I should put it this way: After updating, you have a bunch of people who all have a small probability for "the earth is flat", but they may have slightly different probabilities due to different genetic predispositions. Are you saying that you don't think averaging makes sense here? There is no issue with the predictions being garbage, we both agree that they are not garbage. The question is just whether to average them.
Replies from: zulupineapple
↑ comment by zulupineapple · 2018-05-25T12:10:14.318Z · LW(p) · GW(p)
I was assuming you still knew something about the structure of the problem, i.e. that there would be a bunch of tickets sold, that you have only bought one, etc.
If you've already observed all the possible evidence, then your prediction is not a "prior" any more, in any sense of the word. Also, both total tickets sold and the number of tickets someone bought are variables. If I know that there is a lottery in the real world, I don't usually know how many tickets they really sold (or will sell), and I'm usually allowed to buy more than one (although it's hard for me to not know how many I have).
After updating, you have a bunch of people who all have a small probability for "the earth is flat", but they may have slightly different probabilities due to different genetic predispositions. Are you saying that you don't think averaging makes sense here?
I think that Hanson wants to average before updating. Although if everyone is a perfect bayesian and saw the same evidence, then maybe there isn't a huge difference between averaging before or after the update.
Either way, my position is that averaging is not justified without additional assumptions. Though I'm not saying that averaging is necessarily harmful either.
Replies from: Dacyn
↑ comment by Dacyn · 2018-05-25T13:08:05.613Z · LW(p) · GW(p)
If you are doing a log-odds average then it doesn't matter whether you do it before or after updating.
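(Sketch of why, assuming everyone updates on the same evidence $E$ with the same likelihoods: in log-odds form Bayes' rule is additive, $\ell_i^{\text{post}} = \ell_i^{\text{prior}} + \lambda$, where $\lambda = \log\frac{P(E \mid H)}{P(E \mid \neg H)}$ is the shared log-likelihood ratio. Averaging the priors and then updating gives $\frac{1}{n}\sum_i \ell_i^{\text{prior}} + \lambda$, while updating and then averaging gives $\frac{1}{n}\sum_i \left(\ell_i^{\text{prior}} + \lambda\right)$, and these are equal.)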
Like I pointed out in my previous comment, the question "how much evidence have I observed / taken into account?" is a continuous one with no obvious "minimum" answer. The answer "I know that a bunch of tickets will be sold, and that I will only buy a few" seems to me to not be a "maximum" answer either, so beliefs based on it seem reasonable to call a "prior", even if under some framings they are a posterior. Though really it is pointless to talk about what is a prior if we don't have some specific set of observations in mind that we want our prior to be prior to.