If we can't lie to others, we will lie to ourselves
post by paulfchristiano · 2016-11-26T22:29:54.990Z · LW · GW · Legacy · 24 comments
This is a link post for https://sideways-view.com/2016/11/26/if-you-cant-lie-to-others-you-must-lie-to-yourself/
Comments sorted by top scores.
comment by RobinHanson · 2016-11-27T14:00:25.659Z · LW(p) · GW(p)
"the most powerful tool is adopting epistemic norms which are appropriately conservative; to rely more on the scientific method, on well-formed arguments, on evidence that can be clearly articulated and reproduced, and so on."
A simple summary: Believe Less. Hold higher standards for what is sufficient reason to believe. Of course this is in fact what most people actually do. They don't bother to hold beliefs on the kind of abstract topics on which Paul wants to hold beliefs.
"1. What my decisions are optimized for. .. 2. What I consciously believe I want."
No. 2 might be better thought of as "What my talk is optimized for." Both systems are highly optimized. This way of seeing it emphasizes that if you want to make the two results more consistent, you want to move your talk closer to action. As with bets, or other more concrete actions.
Replies from: WhySpace_duplicate0.9261692129075527, paulfchristiano
↑ comment by WhySpace_duplicate0.9261692129075527 · 2016-11-28T00:40:47.514Z · LW(p) · GW(p)
Believe Less.
As in, believe fewer things and believe them less strongly? By assigning lower odds to beliefs, in order to fight overconfidence? Just making sure I'm interpreting correctly.
don't bother to hold beliefs on the kind of abstract topics
I've read this sentiment from you a couple times, and don't understand the motive. Have you written about it more in depth somewhere?
I would have argued the opposite. It seems like societal acceptance is almost irrelevant as evidence of whether that world is desirable.
Replies from: RobinHanson
↑ comment by RobinHanson · 2016-11-28T00:50:54.596Z · LW(p) · GW(p)
Yes, believe fewer things and believe them less strongly. On abstract beliefs, I'm not following you. The usual motive for most people is that they don't need most abstract beliefs to live their lives.
Replies from: WhySpace_duplicate0.9261692129075527
↑ comment by WhySpace_duplicate0.9261692129075527 · 2016-11-28T02:21:10.101Z · LW(p) · GW(p)
I'd agree with you that most abstract beliefs aren't needed for us to simply live our lives. However, it looks like you were making a normative claim that to minimize bias, we shouldn't deal too much with abstract beliefs when we can avoid it.
Similarly, IIRC, this is also your answer to things like discussion over whether EMs will really "be us", and other such abstract philosophical arguments. Perhaps such discussion isn't tractable, but to me it does still seem important for determining whether such a world is a utopia or a dystopia.
So, I would have argued the opposite: try to develop a good, solid, comprehensive set of abstract principles, and then apply them uniformly to object-level decisions. This should help us optimize for the sorts of things our talk and thoughts are optimized for, and minimize the influence of our other biases. I am my conscious mind, so why should I care much what my subconscious wants?
Here's a bit more detail, if you are (or anyone else is) curious. If you've heard these sorts of arguments a hundred times before, feel free to skip and link to a counterargument.
Predicting how an unreflective society will actually react may be easier than this sort of philosophy, but social acceptance seems necessary but not sufficient here. Under my view, Oedipus Rex's life accomplishments might still have negative utility to him, even if he lived a happy life and never learned who his mother was. Similarly, the Star Trek universe quickly turns from a utopia to a dystopia if teleportation technically counts as death, or the moral equivalent, according to human minds who know all the facts and have heard all the arguments. (Klingons may come to different conclusions, based on different values.) I'm not a vegan, but there appears to be a small chance that animals do have significant moral weight, and we're living in a Soylent Green style dystopia.
I would argue that ignoring the tough philosophical issues now dooms us to status quo bias in the future. To me, it seems that we're potentially less biased about abstract principles that aren't pressing or politically relevant at the moment. If I've thought about trolley problems and whatnot before, and have formed abstract beliefs about how to act in certain situations, then I should be much less biased when an issue comes up at work, or a tough decision needs to be made at home, or there's a new political event and I'm forming an opinion. More importantly, the same should be true of reasoning about the far future, or anything else.
↑ comment by paulfchristiano · 2016-11-27T16:54:41.028Z · LW(p) · GW(p)
No. 2 might be better thought of as "What my talk is optimized for."
I care much more about the fact that "my conscious thoughts are optimized for X" than "my talk is optimized for X," though I agree that it might be easier to figure out what our talk is optimized for.
if you want to make the two results more consistent, you want to move your talk closer to action
I'm not very interested in consistency per se. If we just changed my conscious thoughts to be in line with my type 1 preferences, that seems like it would be a terrible deal for my type 2 preferences.
As with bets, or other more concrete actions.
Sometimes bets can work and I make many more bets than most people, but quantitatively speaking I am skeptical of how much they can do (how large they have to be, what range of topics they are realistic for, what the other attendant costs are). Using conservative epistemic norms seems like it can accomplish much more.
If we want to tie social benefit to accuracy, it seems like it would be much more promising to use "the eventual output of conservative epistemic norms" as our gold standard rather than "what eventually happens," i.e. reality, because it is available (a) much sooner, (b) with lower variance, and (c) on a much larger range of topics.
(An obvious problem with that is that it gives people larger motives to manipulate the output of the epistemic process. If you think people already have such incentives then it's not clear this is so bad.)
Replies from: RobinHanson
↑ comment by RobinHanson · 2016-11-27T18:23:11.042Z · LW(p) · GW(p)
I meant to claim that in fact your conscious thoughts are largely optimized for good impact on the things you say.
You can of course bet on eventual outcome of conservative epistemic norms, just as you can bet on what actually happens. Not sure what else you can do to create incentives now to believe what conservative norms will eventually say.
comment by sarahconstantin · 2016-11-27T10:42:46.046Z · LW(p) · GW(p)
So, on ways of smoothing the incentive gradient for high-quality reasoning:
This is a reason to have a "rationalist community." Humans are satisficers. We won't really care about the opinion of literally all 7 billion people on Earth if we have the approval of our own tribe. If our tribe has some norms about how conversation and thinking work, then we'll be pretty able to follow those norms, so long as we expect that our needs are meetable within the tribe -- that is, that it's a good place to find friends, mates, careers, etc.
It's also a reason to think about how UX affects discourse. I'm by no means an expert in this, but for instance: What does karma reward? What types of expression get attention? How can we offer rewards for behaviors we like?
Replies from: RobinHanson
↑ comment by RobinHanson · 2016-11-27T14:02:07.025Z · LW(p) · GW(p)
That only helps if your "rationalist community" in fact pushes you to more accurate reasoning. Merely giving your community that name is far from sufficient, however, and in my experience the "rationalist community" is mostly that in name only.
Replies from: paulfchristiano, sarahconstantin
↑ comment by paulfchristiano · 2016-11-27T17:01:13.001Z · LW(p) · GW(p)
This seems too uncharitable (I mean, "mostly" is kind of ambiguous in this context so it might be true, but...). I have plenty of complaints, and certainly things could be much better, but I think the rationalists in fact reward accuracy / high-quality reasoning much more than the surrounding community of Bay Area engineers, which itself rewards accuracy much more than US elite culture, which itself rewards accuracy much more than US culture more broadly.
For example, we do in fact put an unusual amount of stock in correct logical argument, sound probabilistic reasoning, and scientific inquiry, which do in fact tend to produce more accurate conclusions.
Replies from: RobinHanson, Vladimir_Nesov
↑ comment by RobinHanson · 2016-11-27T19:57:10.309Z · LW(p) · GW(p)
"charitable" seems an odd name for the tendency to assume that you and your friends are better than other people, because well it just sure seems that way to you and your friends. You don't have an accuracy track record of this group to refer to, right?
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2016-11-28T00:58:38.280Z · LW(p) · GW(p)
What kind of track record do you expect, and what other people are you comparing to? For example, are there academic communities for which you would grant the existence of such a track record, outside of the experimental sciences? For those communities, how would you respond to a comment like yours?
For example, I think that economists also have a set of norms for arriving at truer conclusions about society, but they also don't have an easy-to-point-to track record of success as a community.
If you think economists count, then the Bay Area rationalists will count simply by virtue of arriving at a set of views that mirror mainstream economic views much more closely than does the average US elite consensus. But realistically, I don't think you can make that kind of case for economists, and if you can, then it will involve weakening the standards in a way that lets us make the same case for rationalists.
If you can't name any communities that have such a track record, then this seems like a weak test of whether a community's efforts to promote accurate conclusions are in name only. (Not necessarily a worthless one, but at least one that should be regarded with skepticism.)
I do think that e.g. Bay Area rationalists have substantially more accurate views about the topics they talk about than the world at large (on the future, AI, economics, politics, aid, cognitive science, etc.). This is largely driven by observing the rationalist views, using what I consider the best epistemic norms available, and finding the rationalist views to better accord with the output of that process. Make of that what you will.
Bay Area rationalists appear to make better investments than average (dominated by very profitable bets on bitcoin, but also bets on AI/tech and a reliance on indices / skepticism of market returns), to work in higher-paying jobs, to have views that more closely track traditionally recognized experts (which I expect to be more accurate than the median elite view), to make much more extensive quantitative predictions, and, in cases where comparisons are possible, to have better predictive track records than pundits (though this is probably just due to being numerate, an issue that makes it basically impossible to compare quantitative track records to conventional elites).
In most cases, the rationalists' high intelligence and prevalence of mental dysfunction are going to have a larger effect on their thinking than the community's norms, so I don't think that pointing to a strong track record is even going to be persuasive to you: you will just (correctly) dismiss it by saying "but we need to compare the rationalists to other people who are similarly smart..." unless we manage to find a control group with similar levels of intelligence. And if we do find people with similar levels of intelligence, then they will quite plausibly be doing better than rationalists on lots of conventional measures, and I will (correctly) dismiss this by saying "but we need to compare the rationalists to other people who have similar levels of other abilities..."
In general, I feel you should engage more with quantitative detail about the difficulty of establishing the kind of track record that would be persuasive. I have a similar complaint regarding fire-the-CEO markets or other scaled-up field experiments. It looks to me like it is going to take forever to make a compelling case if you are relying on track record rather than the theory (unless people are willing to trust short-term market movements, which (a) they mostly aren't, and (b) in that case it's nearly a tautology that fire-the-CEO markets work, and the empirical data is just showing you that nothing surprising goes wrong). Yes, you can take the line that someone else should publish a criticism along these lines, but if you actually want the idea to get adopted, it falls to you to do at least a basic power analysis.
Similarly, you can take the line that the rationalists should be in the business of figuring out exactly what kinds of track record would be persuasive to someone with your perspective. But if you actually want to affect the rationalists' behavior, you would probably need to make some argument that the rationalists could stand to benefit by attempting to establish the kind of track record you are interested in, or that they should infer much from the non-existence of such a record, or something like that.
Replies from: RobinHanson, Lumifer
↑ comment by RobinHanson · 2016-11-28T13:53:01.508Z · LW(p) · GW(p)
I said I haven't seen this community as exceptionally accurate; you said that you have, and called my view "uncharitable". I then mentioned a track record as a way to remind us that we lack the sort of particularly clear evidence that we agree would be persuasive. I didn't mean that as a criticism that you or others have not worked hard enough to create such a track record. Surely you can understand why outsiders might find it suspect that your standard for calling your community more accurate is that its views more often agree with your own.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2016-11-28T17:53:42.241Z · LW(p) · GW(p)
You said:
That only helps if your "rationalist community" in fact pushes you to more accurate reasoning... in my experience the "rationalist community" is mostly that in name only.
I find this claim unsettling, since the rationalist community aggressively promotes an unusual set of epistemic norms (e.g. lots of reliance on logic and numeracy, on careful scrutiny of sources and claims, a trade in debunking explanations) which appear to me to be unusually good at producing true beliefs. You presumably have experience with these norms (e.g. you read stuff Eliezer writes, you sometimes talk to at least me and presumably other rationalists, you are sometimes at rationalist parties), and seem to be rejecting the claim that these norms are actually truth-promoting.
I certainly agree that we don't have the kind of evidence that could decisively settle the question to an outsider, and I think skepticism is reasonable. The main reason someone would be optimistic about the rationalists is by actually looking at and reasoning about rationalist discourse. You seem to have done this though, so I read your comment as a strong suggestion that this reasoning is not very weighty given the absence of a track record that might provide more decisive evidence.
Replies from: RobinHanson, Lumifer
↑ comment by RobinHanson · 2016-11-28T19:13:33.547Z · LW(p) · GW(p)
Even if you use truth-promoting norms, their effect can be weak enough that other effects overwhelm this effect. The "rationalist community" is different in a great many ways from other communities of thought.
↑ comment by Lumifer · 2016-11-28T21:55:51.192Z · LW(p) · GW(p)
the rationalist community aggressively promotes an unusual set of epistemic norms
Unusual..? How unusual do you think these epistemic norms would be to someone from the hard sciences? Or even to, say, a civil engineer?
You keep on setting a low bar. It's really not that hard to be better than average.
appear to me to be unusually good at producing true beliefs
True beliefs are at best an intermediate, instrumental goal. What you need to do is be good at producing desirable outcomes in reality, not inside your own head.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2016-11-28T22:30:50.998Z · LW(p) · GW(p)
One problem with threads of this form is that I feel inclined to respond even when I don't expect it to be useful. It would be nice to cultivate norms that allow us to wind these things down somewhat more quickly and gracefully; I think this would improve my willingness to comment here and on the EA forum.
I would like to make a response like "I have objections to this comment, but I don't think that continuing this conversation in this medium is likely to be the best use of our time" and for you to have the option of responding "I probably have objections to your objections" and for us to leave it at that, letting readers infer what they will and continue the discussion if they want to.
I think the problem with saying nothing is that it feels (probably irrationally) like accepting the last word, which is somewhat unpleasant if you have objections you'd like to express.
I think the problem with just making a dismissive comment like this is that it reads more aggressively than I would like it to read; it also reads like an implicit claim that I have the social position or credibility to justify such dismissiveness. But it's just trying to be a judgment about what disagreements are useful.
For now I might try making the somewhat dismissive response with a link to this discussion:
I am interested in whether people think this is a good policy, or something else would work better.
Replies from: Lumifer
↑ comment by Lumifer · 2016-11-29T15:37:33.800Z · LW(p) · GW(p)
In such situations I usually offer to agree to disagree. That's not a put-down, but a clear signal that I don't think the conversation is going anywhere. It also offers the other side an opportunity for parting words.
And if the other party doesn't take the hint, you can just shrug, tap out, and bail.
↑ comment by Lumifer · 2016-11-28T16:06:29.145Z · LW(p) · GW(p)
That's a rather long reply to an observation that you don't have any data to back up your claims.
If you are saying you're better, you should explain what you mean by "better", compared to whom, and which data supports this conclusion. If you don't have data, why should anyone take this claim seriously?
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2016-11-28T17:58:57.480Z · LW(p) · GW(p)
Robin's comment irked me and I indulged the impulse to write a response, which resulted in rambling and was probably a mistake.
Also, neither Sarah's nor my comment was mostly asserting "we are better," and the interesting content of Robin's comment was not "we don't have data to back up that claim." (See my response to Robin.)
↑ comment by Vladimir_Nesov · 2016-11-28T01:39:41.347Z · LW(p) · GW(p)
When you appeal to theory that is not conventionally robust, I think the key distinction is between asking intuition and looking for priors. This seems to be the same disagreement as the one about the potential for philosophical progress: intuition may claim something, but does it have the expertise, does it connect with the territory? If in an a priori framing (as in outside view, or antiprediction) something seems unlikely, and intuition shouldn't be expected to know much better, why trust it? Intuition is not the a priori; it should only coincide with the a priori when the mind has no useful data.
(The question where intuition is hard to trust is about real world rationalists, not ideal rationalists. In principle rationality training is useful, but the difficult question is whether it's significant compared to selecting people for the same style of thinking.)
↑ comment by sarahconstantin · 2016-11-27T14:03:26.147Z · LW(p) · GW(p)
I agree. The hypothetical reason-promoting community need not be the one that already exists and is called "the rationalist community."
comment by Unnamed · 2016-11-27T21:44:55.320Z · LW(p) · GW(p)
I think that there are other sources of social incentives for accuracy.
I imagined the "running late for a noon meeting" scenario, with myself in the role of the person who is waiting for you to arrive. At least in my mental simulation, I have a better impression of you if you say "ETA 12:10" and then show up at 12:10 than if you say "ETA 12:05" and then show up at 12:10. I see you as more reliable when your ETA is accurate; the hit in perceived reliability that you take by being later than the originally scheduled time is significantly smaller than the hit you take by being later than your ETA. So my judgments of people seem to be encouraging accurate estimates (at least in my simulation of this scenario). This effect becomes even stronger if I imagine similar interactions with you happening more than once.
Thinking about why this happens, while aiming for reasons that might generalize to other sorts of time estimation and broader sets of biases, I've come up with 4 factors:
Repeated interactions. When someone has a track record, patterns become evident. Oh really, traffic was surprisingly bad yet again? Or, in statistical terms, as the number of data points increases, the average noise shrinks and systematic errors stand out (see the simulation sketch after this list).
The process that you're using to make your estimate is partially transparent. Just as you give off signs when you try to lie, you also give off signs when you try to think things through to give an accurate estimate, or when you try to set things up so that you won't be blamed for them going badly. For example, if I come to you with a task and you say that it will take you 3 hours to do it, the questions that you asked before making that estimate will give me some clue about whether you're using an accuracy-seeking process.
Your stance towards me is partially transparent. When you're running late, is your aim "I value his time and don't want him to have to sit there waiting" or "I want him to think that this mostly wasn't my fault"? If you come to me with a project that needs my approval to move forward, are you thinking "How can I convince him to approve this?" or "Let's think this through together to figure out if it's worth doing"? The cooperative approach, where you see me as an ally and fellow agent in trying to create good outcomes, is well-served by accurate estimates which are communicated clearly. And people give off various signs of whether they have that approach.
Accurate estimation is a valued ability. If someone can consistently say things like "I'll be there in 8 minutes" or "that will take about 3 hours" and be very close to correct, that is an impressive ability which will lead me to rely more on their judgment. If they make systematic errors, or if their estimates are very noisy, then I will rely on them less. Trying to hide some bias in the noise is not such a great strategy, since noise also reflects a lack of skill at prediction. Interval estimates (e.g., "I'll be there in 20-25 minutes") help make the ability to give unbiased low-noise estimates more apparent.
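To make the statistical point in the first factor concrete, here is a minimal simulation sketch (the 5-minute bias, the 10-minute noise level, and names like observed_errors are invented for illustration): with only a few meetings the bias hides in the noise, but over many repeated interactions it stands out clearly.

```python
import random
import statistics

def observed_errors(bias_minutes, noise_sd, n_interactions, seed=0):
    """Simulate (actual arrival minus stated ETA), in minutes, over many meetings."""
    rng = random.Random(seed)
    return [bias_minutes + rng.gauss(0, noise_sd) for _ in range(n_interactions)]

for n in (3, 30, 300):
    errors = observed_errors(bias_minutes=5, noise_sd=10, n_interactions=n)
    mean_error = statistics.mean(errors)
    # The standard error of the mean shrinks roughly as noise_sd / sqrt(n),
    # so a constant 5-minute bias stands out once n is reasonably large.
    sem = statistics.stdev(errors) / (n ** 0.5)
    print(f"n={n:4d}  average lateness vs. ETA ~ {mean_error:5.1f} min (+/- {sem:.1f})")
```

The standard error of the average lateness shrinks roughly as the noise divided by the square root of the number of meetings, which is why a consistent few-minute bias becomes hard to hide in a long track record.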
These 4 factors don't solve the problem. Just as there are incentives to become a skilled liar, there are incentives to convincingly pretend that you value someone's input or to make your track record look better than your judgment warrants. But these 4 factors do seem to help, especially in smallish communities where the same people have repeated interactions (and can gather more data by sharing impressions with others). To the extent that the rationality community (sometimes) succeeds at encouraging accuracy-seeking, I suspect that much of it comes from creating a social context where these factors are more strongly at play.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2016-11-28T01:11:30.817Z · LW(p) · GW(p)
The relevant comparison is if you know you are going to arrive at either 12:15 or 12:05, equiprobably: do you say "12:10" or "12:07"? Or, if you are giving a distribution, do you say that the two are equiprobable, or claim a 2/3 chance of 12:05?
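As a contrast case, here is a minimal sketch of what a purely accuracy-based incentive would reward in the distributional version of this question (the log scoring rule and the code names are assumptions for illustration, not something proposed in these comments): under such a rule, the honest 50/50 report scores better in expectation than a flattering claim of a 2/3 chance of 12:05.

```python
import math

# Thought experiment: the true arrival time is 12:05 or 12:15 with equal probability.
# Compare the expected log score of an honest 50/50 report with a flattering report
# that claims a 2/3 chance of the earlier time.

TRUE_P_EARLY = 0.5  # assumed ground truth for the thought experiment

def expected_log_score(reported_p_early):
    """Expected log score of a reported probability that the arrival is 12:05."""
    return (TRUE_P_EARLY * math.log(reported_p_early)
            + (1 - TRUE_P_EARLY) * math.log(1 - reported_p_early))

print("honest 1/2 report:    ", round(expected_log_score(0.5), 3))    # about -0.693
print("flattering 2/3 report:", round(expected_log_score(2 / 3), 3))  # about -0.752
```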
Consciously, I am thinking "Let's think this through together to figure out if it's worth doing," not "how can I convince him to approve this?" I'm not at all convinced that the difficulty of lying extends to the difficulty of maintaining a mismatch between conscious reasoning and various subconscious processes that feed into estimates.
Replies from: Unnamed
↑ comment by Unnamed · 2016-11-28T05:09:08.768Z · LW(p) · GW(p)
Consciously, I am thinking "Let's think this through together to figure out if it's worth doing," not "how can I convince him to approve this?" I'm not at all convinced that the difficulty of lying extends to the difficulty of maintaining a mismatch between conscious reasoning and various subconscious processes that feed into estimates.
I'm imagining signs during the conversation like: If it starts to look like some other project would be more valuable than the idea you came in with, do you seem excited or frustrated? Or: If a new consideration comes up which might imply that your project idea is not worth doing, do you pursue that line of thought with the same sort of curiosity and deftness that you bring to other topics?
These are different from the kinds of tells that a person gives when lying, but they do point to the general rule of thumb that one's mental processes are typically neither perfectly opaque nor perfectly transparent to others. They do seem to depend on the processes that are actually driving your behavior; merely thinking "Let's think this through together" will probably not make you excited/curious/etc. if your subconscious processes aren't in accord with that thought.
The relevant comparison is if you know you are going to arrive at either 12:15 or 12:05, equiprobably: do you say "12:10" or "12:07"? Or, if you are giving a distribution, do you say that the two are equiprobable, or claim a 2/3 chance of 12:05?
These are subtle enough differences that I don't have clear intuitions about which ETA would lead me to have the most positive impression of the person who showed up late.
I agree with your broader point that there are social incentives which favor various sorts of inaccuracy, and that accuracy won't always create the best impression. My broader point is that there are also social incentives for accuracy, and various indicators of whether a person is seeking accuracy, and it's possible to build a community that strengthens those relative to the incentives for inaccuracy.