Epistemic Tenure
post by Scott Garrabrant · 2019-02-18T22:56:03.158Z · LW · GW · 27 comments
In this post, I will try to justify the following claim (which I am not sure how much I believe myself):
Let Bob be an individual that I have a lot of intellectual respect for. For example, maybe Bob has a history of believing true things long before anyone else, or Bob has discovered or invented some ideas that I have found very useful. Now, let's say that Bob expresses a new belief that feels to me to be obviously wrong. Bob has tried to explain his reasons for the belief, and they seem to also be obviously wrong. I think I can see what mistake Bob is making, and why he is making it. I claim that I should continue to take Bob very seriously, try to engage with Bob's new belief, and give Bob a decent portion of my attention. I further claim that many people should do this, and do it publicly.
There is an obvious reason why it is good to take Bob's belief seriously. Bob has proven to me that he is smart. The fact that Bob believes a thing is strong evidence that that thing is true. Further, before Bob said this new thing, I trusted his epistemics not much less than I trust my own. I don't have a strong reason to believe that I am not the one who is obviously wrong. The situation is symmetric. Outside view says that Bob might be right.
This is not the reason I want to argue for. I think this is partially right, but there is another reason, one that people are more likely to miss, and that I think pushes the claim a lot further.
Before Bob had his new bad idea, Bob was in a position of having intellectual respect. An effect of this was that he could say things, and people would listen. Bob probably values this fact. He might value it because he terminally values the status. But he also might value it because the fact that people will listen to his ideas is instrumentally useful. For example, if people are willing to listen to him and he has opinions on what sorts of things people should be working on, he could use his epistemic status to steer the field towards directions that he thinks will be useful.
When Bob has a new bad idea, he might not want to share it if he thinks it would cause him to lose his epistemic status. He may prefer to save his epistemic status up to spend later. This itself would not be very bad. What I am worried about is if Bob ends up not having the new bad idea in the first place. It is hard to have one set of beliefs, and simultaneously speak from another one. The external pressures that I place on Bob to continue to say new interesting things that I agree with may back propagate all the way into Bob's ability to generate new beliefs.
This is my true concern. I want Bob to be able to think free of the external pressures coming from the fact that others are judging his beliefs. I still want to be able to partially judge his beliefs, and move forward even when Bob is wrong. I think there is a real tradeoff here. The group epistemics are made better by directing attention away from bad beliefs, but the individual epistemics are made better by optimizing for truth, rather than what everyone else thinks. Because of this, I can't give out (my own personal) epistemic tenure too freely. Attention is a conserved resource, and attention that I give to Bob is being taken away from attention that could be directed toward GOOD ideas. Because of this tradeoff, I am really not sure how much I believe my original claim, but I think it is partially true.
I am really trying to emphasize the situation where even my outside view says that Bob is wrong. I think this points out that it is not about how Bob's idea might be good. It is about how Bob's idea might HAVE BEEN good, and the fact that he would not lose too much epistemic status is what enabled him to make the more high-variance cognitive moves that might lead to good ideas. This is why it is important to make this public. It is about whether Bob, and other people like Bob, can trust that they will not be epistemically ostracized.
Note that a community could have other norms that are not equivalent to epistemic tenure, but partially replace the need for it, and make it not worth it because of the tradeoffs. One such mechanism (with its own tradeoffs) is not assigning that much epistemic status at all, and trying to ignore who is making the arguments. If I were convinced that epistemic tenure was a bad idea for LW or AI safety, it would probably be because I believed that existing mechanisms are already doing enough of it.
Also, maybe it is a good idea to do this implicitly, but a bad idea to do it explicitly. I don't really know what I believe about any of this. I am mostly just trying to point out that a tradeoff exists, that the costs of having to take approval of the group epistemics into account when forming your own beliefs might be both invisible and large, and that there could be some structural ways to fight against those costs.
27 comments, sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2019-02-19T21:12:46.945Z · LW(p) · GW(p)
I think there's another tradeoff here. From High Status and Stupidity: Why? [LW · GW]:
Michael Vassar once suggested: "Status makes people effectively stupid, as it makes it harder for them to update their public positions without feeling that they are losing face."
Giving someone epistemic tenure could make someone more willing to generate and speak "risky" ideas, but could also make them feel that they have even higher status and therefore more face to lose and therefore more "stupid" in various ways. (Aside from Michael's explanation, here's mine [LW(p) · GW(p)]: "once you achieve high status, a part of your mind makes you lose interest in the thing that you achieved high status with in the first place. You might feel obligated to maintain an appearance of interest, and defend your position from time to time, but you no longer feel a burning need to know the truth.")
comment by Unnamed · 2019-02-19T21:57:47.387Z · LW(p) · GW(p)
I think that there's a spectrum between treating someone as a good source of conclusions and treating them as a good source of hypotheses.
I can have thoughts like "Carol looked closely into the topic and came away convinced that Y is true, so for now I'm going to act as if Y is probably true" if I take Carol to be a good source of conclusions.
Whereas if I took Alice to be a good source of hypotheses but not a good source of conclusions, then I would instead have thoughts like "Alice insists that Z is true, so Z seems like something that's worth thinking about more."
Giving someone epistemic tenure as a source of conclusions seems much more costly than giving them epistemic tenure as a source of hypotheses.
comment by ryan_b · 2019-02-19T16:26:41.149Z · LW(p) · GW(p)
This may be anchoring the concept too firmly in the community, but I think there is another benefit to giving obviously-wrong ideas from epistemically-sound people attention: it shows how a given level of epistemic mastery is incomplete.
I feel like, given an epistemology vocabulary, it would be very easy to say that since Bob has said something wrong and his reasons are wrong, we should lower our opinion of Bob's epistemic mastery overall. I also feel like that would be both wrong and useless, because it is not as though rationality consisted of some singular epistemesis score that can be raised or lowered. Instead there is an (incomplete!) battery of skills that go into rationality, and we'd be forsaking an opportunity to advance the art by spending time looking at the how and why of the wrongness.
I think the benefit of epistemic tenure is in high-value "oops!" generation.
↑ comment by a gently pricked vein (strangepoop) · 2019-03-07T11:29:57.774Z · LW(p) · GW(p)
it is not as though rationality consisted of some singular epistemesis score that can be raised or lowered
I feel like this is fighting the hypothesis. As Garrabrant says:
Attention is a conserved resource, and attention that I give to Bob is being taken away from attention that could be directed toward GOOD ideas.
It doesn't matter whether or not you think it is possible to track rationality through some singular epistemesis score. The question is: you have limited attentional resources and the problem OP outlined; "rationality" is probably complicated; what do you do anyway?
How you divvy them is the score. Or, to replace the symbol with the substance: if you're in charge of divvying those resources, then your particular algorithm will decide what your underlings consider status/currency, and can backpropagate into their minds.
↑ comment by ryan_b · 2019-03-07T15:51:03.661Z · LW(p) · GW(p)
The thing I am trying to point at here is that attention to Bob's bad ideas is also necessarily attention to the good ideas Bob uses in idea generation. Therefore I think the total cost in wasted attention is much lower, which speaks to why we should be less concerned about evaluating them and to why Bob should not sweat his status.
I would go further and say it is strange to me that an idea being obviously wrong from a reliable source should be more likely to be dismissed than one that is subtly wrong. Bob is smart and usually correct - further attention to a mostly-correct idea of his is unlikely to improve it. By contrast I think an obviously wrong idea is a big red flag that something has obviously gone wrong.
I may be missing something obvious, but I'm having a hard time imagining how to distinguish in practice between a policy against providing attention to bad ideas, and a policy against providing attention to idea-generating ideas. This seems self-defeating.
comment by Qiaochu_Yuan · 2019-02-27T01:32:57.477Z · LW(p) · GW(p)
This seems like a bad idea to me; I think people who are trying to have good ideas should develop courage instead. If you don't have courage your ideas are being pushed around by fear in general, and asking for a particular source of that fear to be ameliorated will not solve the general problem.
comment by romeostevensit · 2019-02-18T22:59:01.048Z · LW(p) · GW(p)
Another frame might be Kuhnian momentum.
comment by sirjackholland · 2019-02-20T20:43:53.518Z · LW(p) · GW(p)
Something I didn't notice in the comments is how to handle the common situation that Bob is a one-hit wonder. Being a one-hit wonder is pretty difficult; most people are zero-hit wonders. Being a two-hit wonder is even more difficult, and very few people ever create many independent brilliant ideas / works / projects / etc.
Keeping that in mind, it seems like a bad idea to make a precedent of handing out epistemic tenure. Most people are not an ever-flowing font of brilliance and so the case that their one hit is indicative of many more is much less likely than the case that you've already witnessed the best thing they'll do.
Just anecdotally, I can think of many public intellectuals who had one great idea, or bundle of ideas, and now spend most of their time spouting unrelated nonsense. And, troublingly, the only reason people take their nonsense seriously is that there is, at least implicitly, some notion of epistemic tenure attached to them. These people are a tremendous source of "intellectual noise", so to speak, and I think discourse would improve if the Bobs out there had to demonstrate the validity of their ideas from as close to scratch as possible rather than getting an endless free pass.
My biggest hesitation with never handing out intellectual tenure is that it might make it harder for super geniuses to work as efficiently. Would von Neumann have accomplished what he did if he had to compete as if he were just another scientist over and over? But I think a lack of intellectual tenure would have to really reduce super-genius efficiency for it to make up for all the noise it produces. There are just so many more one-hit wonders than (N>1)-hit wonders.
comment by Scott Garrabrant · 2019-02-18T22:58:09.971Z · LW(p) · GW(p)
I apologize for using the phrase "epistemic status" in a way that disagrees with the accepted technical term.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2019-02-19T09:08:04.702Z · LW(p) · GW(p)
Coincidentally, the accepted technical term is also very relevant to the discussion. One way for Bob to mitigate the problem of others judging him for his bad idea is to write "epistemic status: exploratory" or similar at the top of his post.
comment by Bucky · 2019-02-19T19:44:25.737Z · LW(p) · GW(p)
Putting myself in Bob's shoes, I'm pretty sure I would just want people to be straight with me and give my idea the attention that they feel it deserves. I'm fairly confident this wouldn't have a knock-on effect on my ability to generate ideas. I'm guessing from the post that Scott isn't sure this would be true of him (or maybe you're more concerned for others than you would be for yourself?).
I’d be interested to hear other people’s introspections on this.
↑ comment by Scott Garrabrant · 2019-02-19T20:12:21.109Z · LW(p) · GW(p)
So, I feel like I am concerned for everyone, including myself, but also including people who do not think that it would affect them. A large part of what concerns me is that the effects could be invisible.
For example, I think that I am not very affected by this, but I recently noticed a connection between how difficult it is to get to work on writing a blog post that I think it is good to write, and how much my system one expects some people to receive the post negatively. (This happened when writing the recent MtG post.) This is only anecdotal, but I think that posts that seem like bad PR cause akrasia, even when controlling for how good I think the post is on net. The scary part is that there was a long time before I noticed this. If I believed that there was a credible way to detect when there are thoughts you can't have in the first place, I would be less worried.
I didn't have many data points, and the above connection might have been a coincidence, but the point I am trying to make is that I don't feel like I have good enough introspective access to rule out a large, invisible, effect. Maybe others do have enough introspective access, but I do not think that just not seeing the outer incentives pulling on you is enough to conclude that they are not there.
↑ comment by Wei Dai (Wei_Dai) · 2019-02-19T20:57:11.615Z · LW(p) · GW(p)
For example, I think that I am not very affected by this, but I recently noticed a connection between how difficult it is to get to work on writing a blog post that I think it is good to write, and how much my system one expects some people to receive the post negatively. (This happened when writing the recent MtG post.)
How accurate is your system one on this? Was it right about the MtG post? (I'm trying to figure out if the problem is your system one being too cautious, or if you think there's an issue even if it's not.)
This is only anecdotal, but I think that posts that seem like bad PR cause akrasia, even when controlling for how good I think the post is on net.
Maybe it's actually rational to wait in such cases, and write some other posts to better prepare the readers to be receptive to the new idea? It seems like if you force yourself to write posts when you expect bad PR, that's more likely to cause propagation back to idea generation. Plus, writing about an idea too early can create unnecessary opposition to it, which can be harder to remove than to not create in the first place.
(You could certainly go overboard with PR concerns and end up not writing about anything that could be remotely controversial, especially if you work at a place where bad PR could cause you to lose your livelihood, but that feels like a separate problem from what you're talking about.)
↑ comment by Scott Garrabrant · 2019-02-19T21:27:56.142Z · LW(p) · GW(p)
I think it was wrong about the MtG post. I mostly think the negative effects of posting ideas (related to technical topics) that people think are bad is small enough to ignore, except in so far as it messes with my internal state. My system 2 thinks my system 1 is wrong about the external effects, but intends to cooperate with it anyway, because not cooperating with it could be internally bad.
As another example, months ago, you asked me to talk about how embedded agency fits in with the rest of AI safety, and I said something to the effect that I didn't want to force myself to make any public arguments for or against the usefulness of agent foundations. This is because I think research prioritization is especially prone to rationalization, so it is important to me that any thoughts about research prioritization are not pressured by downstream effects on what I am allowed to work on. (It still can change what I decide to work on, but only through channels that are entirely internal.)
↑ comment by Pattern · 2019-02-19T22:48:05.426Z · LW(p) · GW(p)
I enjoyed the MtG post by the way. It was brief, and well illustrated. I haven't seen other posts that talked about that many AI things on that level before. (On organizing approaches, as opposed to just focusing on one thing and all its details.)
↑ comment by Bucky · 2019-02-19T21:23:21.893Z · LW(p) · GW(p)
Thanks, that makes sense.
I completely empathise with worries about social pressures when I’m putting something out there for people to see. I don’t think this would apply to me in the generation phase but you’re right that my introspection may be completely off the mark.
My own experience at work is that I get ideas for improvements even when such ideas aren’t encouraged but maybe I’d get more if they were. My gut says that the level of encouragement mainly determines how likely I am to share the ideas but there could be more going on that I’m unaware of.
comment by Thrasymachus · 2019-02-19T10:39:18.465Z · LW(p) · GW(p)
As you say, Bob's good epistemic reputation should count when he says something that appears wild, especially if he has a track record that endorses him in these cases ("We've thought he was crazy before, but he proved us wrong"). Maybe one should think of Bob as an epistemic 'venture capitalist', making (seemingly) wild epistemic bets which are right more often than chance (and often illuminating even if wrong), even if they aren't right more often than not, and this might be enough to warrant further attention ("well, he's probably wrong about this, but maybe he's onto something").
I'm not sure your suggestion pushes in the right direction in the case where - pricing all of that in - we still think Bob's belief is unreasonable and he is unreasonable for holding it. The right responses in this case by my lights are two-fold.
First, you should dismiss (rather than engage with) Bob's wild belief - as (ex hypothesi) all things considered it should be dismissed.
Second, it should (usually) count against Bob's overall epistemic reputation. After all, whatever it was that meant despite Bob's merits you think he's saying something stupid is likely an indicator of epistemic vice.
This doesn't mean it should be a global black mark against taking Bob seriously ever again. Even the best can err badly, so one should weigh up the whole record. Furthermore, epistemic virtue has a few dimensions, and Bob's weaknesses along one need not mean his strengths in others are insufficient to earn attention and esteem going forward: An archetype I have in mind with 'epistemic venture capitalist' is someone clever, creative, yet cocky and epistemically immodest - has lots of novel ideas, some true, more interesting, but many 'duds' arising from not doing their homework, being hedgehogs with their preferred 'big idea', etc.
I accept, notwithstanding those caveats, this still disincentivizes epistemic venture capitalists like Bob to some degree. Although I only have anecdata, this leans in favour of some sort of trade-off: brilliant thinkers often appear poorly calibrated and indulge in all sorts of foolish beliefs; interviews with superforecasters (e.g.) tend to emphasise things like "don't trust your intuition, be very self sceptical, canvass lots of views, do lots of careful research on a topic before staking out a view". Yet good epistemic progress relies on both - and if they lie on a convex frontier, one wants to have a division of labour.
Although the right balance to strike re. second-order norms depends on tricky questions about which sort of work is currently under-supplied, which has higher value on the margin, and the current norms of communal practice (all of which may differ by community), my hunch is that 'epistemic tenure' (going beyond what I sketch above) tends to be disadvantageous, for two reasons.
One is that there are plausible costs in both directions. 'Tenure'-esque practice could spur on crackpots, have too lax a filter for noise-esque ideas, discourage broadly praiseworthy epistemic norms (cf. the virtue of scholarship), and maybe not give Bob-like figures enough guidance, so they range too far and unproductively (e.g. I recall one Nobel Laureate mentioning the idea of, "Once you win your Nobel Prize, you should go and try and figure out the hard problem of consciousness" - which seems a terrible idea).
The other is that even if there is a trade-off, one still wants to reach one's frontier on 'calibration/accuracy/whatever'. Scott Sumner seems to be able to combine researching on the inside view with judging on the outside view (see). This seems better for Sumner, and the wider intellectual community, than a Sumner* who could not do the latter.
comment by Dagon · 2019-02-18T23:24:51.445Z · LW(p) · GW(p)
I can't tell if you're saying to support Bob's bad idea and to (falsely) encourage him that it's actually a good idea. I don't agree, if so. If you're just saying "continue supporting his previous good ideas, and evaluate his future ideas fairly, knowing that he's previously had both good and bad ones", then I agree. But I don't think it's particularly controversial or novel.
I'm not sure if more or different examples would help. I suspect my model of idea generation and support just doesn't fit here - I don't much care whether an idea is generated/supported by any given Bob. I care a bit about the overall level of support for the idea from many Bob-like people. And I care about the ideas themselves.
Also, almost every Bob has topics they're great at and topics they're ... questionable. I strongly advise identifying the areas in which you shouldn't waste time on some otherwise-great thinkers.
↑ comment by Scott Garrabrant · 2019-02-19T00:33:21.721Z · LW(p) · GW(p)
I am not saying to falsely encourage him, I think I am mostly saying to continue giving him some attention/platform to get his ideas out in a way that would be heard. The real thing that I want is whatever will cause Bob to not end up back propagating from the group epistemics into his individual idea generation.
↑ comment by gjm · 2019-02-19T14:43:47.905Z · LW(p) · GW(p)
I think I'm largely (albeit tentatively) with Dagon here: it's not clear that we don't _want_ our responses to his wrongness to back-propagate into his idea generation. Isn't that part of how a person's idea generation gets better?
One possible counterargument: a person's idea-generation process actually consists of (at least) two parts, generation and filtering, and most of us would do better to have more fluent _generation_. But even if so, we want the _filtering_ to work well, and I don't know how you enable evaluations to propagate back as far as the filtering stage but to stop before affecting the generation stage.
I'm not saying that the suggestion here is definitely wrong. It could well be that if we follow the path of least resistance, the result will be _too much_ idea-suppression. But you can't just say "if there's a substantial cost to saying very wrong things then that's bad because it may make people less willing or even less able to come up with contrarian ideas in future" without acknowledging that there's an upside too, in making people less inclined to come up with _bad_ ideas in future.
↑ comment by Vaniver · 2019-02-19T20:04:57.749Z · LW(p) · GW(p)
I think I'm largely (albeit tentatively) with Dagon here: it's not clear that we don't _want_ our responses to his wrongness to back-propagate into his idea generation. Isn't that part of how a person's idea generation gets better?
It is important that Bob was surprisingly right about something in the past; this means something was going on in his epistemology that wasn't going on in the group epistemology, and the group's attempt to update Bob may fail because it misses that important structure. Epistemic tenure is, in some sense, the group saying to Bob "we don't really get what's going on with you, and we like it, so keep it up, and we'll be tolerant of wackiness that is the inevitable byproduct of keeping it up."
That is, a typical person should care a lot about not believing bad things, and the typical 'intellectual venture capitalist' who backs a lot of crackpot horses should likely end up losing their claim on the group's attention. But when the intellectual venture capitalist is right, it's important to keep their strategy around, even if you think it's luck or that you've incorporated all of the technique that went into their first prediction, because maybe you haven't, and their value comes from their continued ability to be a maverick without losing all of their claim on group attention.
↑ comment by gjm · 2019-02-19T23:46:04.130Z · LW(p) · GW(p)
If Bob's history is that over and over again he's said things that seem obviously wrong but he's always turned out to be right, I don't think we need a notion of "epistemic tenure" to justify taking him seriously the next time he says something that seems obviously wrong: we've already established that when he says apparently-obviously-wrong things he's usually right, so plain old induction will get us where we need to go. I think the OP is making a stronger claim. (And a different one: note that OP says explicitly that he isn't saying we should take Bob seriously because he might be right, but that we should take Bob seriously so as not to discourage him from thinking original thoughts in future.)
And the OP doesn't (at least as I read it) seem like it stipulates that Bob is strikingly better epistemically than his peers in that sort of way. It says:
Let Bob be an individual that I have a lot of intellectual respect for. For example, maybe Bob has a history of believing true things long before anyone else, or Bob has discovered or invented some ideas that I have found very useful.
which isn't quite the same. One of the specific ways in which Bob might have earned that "lot of intellectual respect" is by believing true things long before everyone else, but that's just one example. And someone can merit a lot of intellectual respect without being so much better than everyone else.
For an "intellectual venture capitalist" who generates a lot of wild ideas, mostly wrong but right more often than you'd expect, I do agree that we want to avoid stifling them. But we do also want to avoid letting them get entirely untethered from reality, and it's not obvious to me what degree of epistemic tenure best makes that balance.
(Analogy: successful writers get more freedom to ignore the advice of their editors. Sometimes that's a good thing, but not always.)
↑ comment by Dagon · 2019-02-19T00:54:31.486Z · LW(p) · GW(p)
I think you're more focused on Bob than I am, and have more confidence in your model of Bob's idea generation/propagation mechanisms.
I WANT Bob to update toward more correct ideas in the future, and that includes feedback when he's wrong. And I want to correctly adjust my prior estimate of Bob's future correctness. Both of these involve recognizing that errors occurred, and reducing (not to zero, but not at the level of the previous always-correct Bob) the expectation of future goodness.
↑ comment by Bucky · 2019-02-19T18:56:49.811Z · LW(p) · GW(p)
Just want to check that whoever downvoted Dagon’s comment sees the irony? :)
(Context: At time of writing the parent comment was at -1 karma)
↑ comment by Dagon · 2019-02-19T21:44:09.674Z · LW(p) · GW(p)
Currently 1, with 4 votes. My other comment on the post is at 2 with 5 votes. This is below target for me, but not enough that I'm likely to change much. Note that I don't care much about Karma totals, more about replies and further discussion. I have announced in the past that I intend my comments to be true beliefs, but also to provoke further reaction/correction. One measure of this is to seek to comment in ways that attract some number of downvotes.
Also, there's no irony if the downvoters do not believe I've earned any epistemic respect from previous comments, so they do not want to encourage my further commenting.
↑ comment by Bucky · 2019-02-19T22:27:28.565Z · LW(p) · GW(p)
Also, there's no irony if the downvoters do not believe I've earned any epistemic respect from previous comments, so they do not want to encourage my further commenting.
You’re right of course, I just found it amusing that someone would disagree that it’s a good idea to provide negative feedback and then provide negative feedback.
comment by a gently pricked vein (strangepoop) · 2019-03-04T11:00:05.776Z · LW(p) · GW(p)
I think your "attentional resources" are just being Counterfactually Mugged here, so if you're okay with that, you ought to be okay with some attention being diverted away from "real" ideas, if you're reasonably confident in your construction of the counterfactual "Bob’s idea might HAVE BEEN good".
This way of looking at it also says that tenure is a bad metaphor: your confidence in the counterfactual being true can change over time.
(If you then insist that this confidence in your counterfactual is also something that affects Bob, which it kinda does, then I'm afraid we're encountering an instance of unfair problem class in the wild and I don't know what to do)
As an aside, this makes me think: What happens when all consumers in the market are willing to get counterfactually mugged? Where I'm not able to return my defected phone because prediction markets said it would have worked? I suppose this is not very different from the concept of force majeure, only systematized.