Strong stances

post by KatjaGrace · 2019-10-15T00:40:01.286Z · LW · GW · 9 comments

Contents

  I. The question of confidence
  II. Tentative answers
  III. Stances
9 comments

I. The question of confidence

Should one hold strong opinions? Some say yes. Some say that while it’s hard to tell, it tentatively seems pretty bad (probably).

A quick review of purported or plausible pros:

  1. Strong opinions lend themselves to revision:
    1. Nothing will surprise you into updating your opinion if you thought that anything could happen. A perfect Bayesian might be able to deal with myriad subtle updates to vast uncertainties, but a human is more likely to notice a red cupcake if they have claimed that cupcakes are never red. (Arguably—some would say having opinions makes you less able to notice any threat to them. My guess is that this depends on topic and personality.)
    2. ‘Not having a strong opinion’ is often vaguer than having a flat probability distribution, in practice. That is, the uncertain person’s position is not, ‘there is a 51% chance that policy X is better than policy -X’, it is more like ‘I have no idea’. Which again doesn’t lend itself to attending to detailed evidence.
    3. Uncertainty breeds inaction, and it is harder to run into more evidence if you are waiting on the fence than if you are out there making practical bets on one side or the other.
  2. (In a bitterly unfair twist of fate) being overconfident appears to help with things like running startups, or maybe all kinds of things.
    If you run a startup, common wisdom advises going around saying things like, ‘Here is the dream! We are going to make it happen! It is going to change the world!’ instead of things like, ‘Here is a plausible dream! We are going to try to make it happen! In the unlikely case that we succeed at something recognizably similar to what we first had in mind, it isn’t inconceivable that it will change the world!’ Probably some of the value here is just a zero sum contest to misinform people into misinvesting in your dream instead of something more promising. But some is probably real value—suppose Bob works full time at your startup either way. I expect he finds it easier to dedicate himself to the work and has a better time if you are more confident. It’s nice to follow leaders who stand for something, which tends to go with having at least some strong opinions. Even alone, it seems easier to work hard on a thing if you think it is likely to succeed. If being unrealistically optimistic just generates extra effort to be put toward your project’s success, rather than stealing time from something more promising, that is a big deal.
  3. Social competition
    Even if the benefits of overconfidence in running companies and such were all zero sum, everyone else is doing it, so what are you going to do? Fail? Only employ people willing to work at less promising looking companies? Similarly, if you go around being suitably cautious in your views, while other people are unreasonably confident, then onlookers who trust both of you will be more interested in what the other people are saying.
  4. Wholeheartedness
    It is nice to be the kind of person who knows where they stand and what they are doing, instead of always living in an intractable set of place-plan combinations. It arguably lends itself to energy and vigor. If you are unsure whether you should be going North or South, having reluctantly evaluated North as a bit better in expected value, for some reason you often still won’t power North at full speed. It’s hard to passionately be really confused and uncertain. (I don’t know if this is related, but it seems interesting to me that the human mind feels as though it lives in ‘the world’—this one concrete thing—though its epistemic position is in some sense most naturally seen as a probability distribution over many possibilities.)
  5. Creativity
    Perhaps this is the same point, but I expect my imagination for new options kicks in better when I think I’m in a particular situation than when I think I might be in any of five different situations (or worse, in any situation at all, with different ‘weightings’).

A quick review of the con:

  1. Pervasive dishonesty and/or disengagement from reality
    If the evidence hasn’t led you to a strong opinion, and you want to profess one anyway, you are going to have to somehow disengage your personal or social epistemic processes from reality. What are you going to do? Lie? Believe false things? These both seem so bad to me that I can’t consider them seriously. There is also this sub-con:

    1. Appearance of pervasive dishonesty and/or disengagement from reality
      Some people can tell that you are either lying or believing false things, due to your boldly claiming things in this uncertain world. They will then suspect your epistemic and moral fiber, and distrust everything you say.
  2. (There are probably others, but this seems like plenty for now.)

II. Tentative answers

Can we have the pros without the devastatingly terrible con? Some ideas that come to mind or have been suggested to me by friends:

1. Maintain two types of ‘beliefs’. One set of play beliefs—confident, well understood, probably-wrong—for improving in the sandpits of tinkering and chatting, and one set of real beliefs—uncertain, deferential—for when it matters whether you are right. For instance, you might have some ‘beliefs’ about how cancer can be cured by vitamins that you chat about and ponder, and read journal articles to update, but when you actually get cancer, you follow the expert advice to lean heavily on chemotherapy. I think people naturally do this a bit, using words like ‘best guess’ and ‘working hypothesis’.

I don’t like this plan much, though admittedly I basically haven’t tried it. For your new fake beliefs, either you have to constantly disclaim them as fake, or you are again lying and potentially misleading people. Maybe that is manageable through always saying ‘it seems to me that…’ or ‘my naive impression is…’, but it sounds like a mess.

And if you only use these beliefs on unimportant things, then you miss out on a lot of the updating you were hoping for from letting your strong beliefs run into reality. You get some though, and maybe you just can’t do better than that, unless you want to be testing your whacky theories about cancer cures when you have cancer.

It also seems like you won’t get a lot of the social benefits of seeming confident, if you still don’t actually believe strongly in the really confident things, and have to constantly disclaim them.

But I think I actually object because beliefs are for true things, damnit. If your evidence suggests something isn’t true, then you shouldn’t be ‘believing’ it. And also, if you know your evidence suggests a thing isn’t true, how are you even going to go about ‘believing it’? I don’t know how to.

2. Maintain separate ‘beliefs’ and ‘impressions’. This is like 1, except impressions are just claims about how things seem to you. e.g. ‘It seems to me that vitamin C cures cancer, but I believe that that isn’t true somehow, since a lot of more informed people disagree with my impression.’ This seems like a great distinction in general, but it seems a bit different from what one wants here. I think of this as a distinction between the evidence that you received, and the total evidence available to humanity, or perhaps between what is arrived at by your own reasoning about everyone’s evidence vs. your own reasoning about what to make of everyone else’s reasoning about everyone’s evidence. However, these are about ways of getting a belief, and I think what you want here is actually just some beliefs that can be got in any way. Also, why would you act confidently on your impressions, if you thought they didn’t account for others’ evidence, say? Why would you act on them at all?

3. Confidently assert precise but highly uncertain probability distributions: “We should work so hard on this, because it has like a 0.03% chance of reshaping 0.5% of the world, making it a 99.97th percentile intervention in the distribution we are drawing from, so we shouldn’t expect to see something this good again for fifty-seven months.” This may solve a lot of problems, and I like it, but it is tricky. (A toy version of this arithmetic appears at the end of this section.)

4. Just do the research so you can have strong views. To do this across the board seems prohibitively expensive, given how much research it seems to take to be almost as uncertain as you were on many topics of interest.

5. Focus on acting well rather than your effects on the world. Instead of trying to act decisively on a 1% chance of this intervention actually bringing about the desired result, try to act decisively on a 95% chance that this is the correct intervention (given your reasoning suggesting that it has a 1% chance of working out). I’m told this is related to Stoicism.

6. ‘Opinions’
I notice that people often have ‘opinions’, which they are not very careful to make true, and do not seem to straightforwardly expect to be true. This seems to be commonly understood by rationally inclined people as some sort of failure, but I could imagine it being another solution, perhaps along the lines of 1.

(I think there are others around, but I forget them.)
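
To make option 3 above a little more concrete, here is a toy version of the arithmetic in that quote, using only the quote's own illustrative numbers (which are there for flavor, not as real estimates):

```python
# Toy version of the arithmetic in the option-3 quote above.
# All numbers are the quote's illustrative ones, not real estimates.

p_success = 0.0003            # "a 0.03% chance"
fraction_reshaped = 0.005     # "of reshaping 0.5% of the world"

expected_impact = p_success * fraction_reshaped
print(f"expected fraction of the world reshaped: {expected_impact:.6%}")
# -> 0.000150%

# The quote's "99.97th percentile" is just the complement of the 0.03% chance,
# read as "better than 99.97% of draws from the assumed distribution".
percentile = (1 - p_success) * 100
print(f"percentile: {percentile:.2f}")
# -> 99.97
```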

III. Stances

I propose an alternative solution. Suppose you might want to say something like, ‘groups of more than five people at parties are bad’, but you can’t because you don’t really know, and you have only seen a small number of parties in a very limited social milieu, and a lot of things are going on, and you are a congenitally uncertain person. Then instead say, ‘I deem groups of more than five people at parties bad’. What exactly do I mean by this? Instead of making a claim about the value of large groups at parties, make a policy choice about what to treat as the value of large groups at parties. You are adding a new variable ‘deemed large group goodness’ between your highly uncertain beliefs and your actions. I’ll call this a ‘stance’. (I expect it isn’t quite clear what I mean by a ‘stance’ yet, but I’ll elaborate soon.) My proposal: to be ‘confident’ in the way that one might be from having strong beliefs, focus on having strong stances rather than strong beliefs.

Strong stances have many of the benefits of confident beliefs. With your new stance on large groups, when you are choosing whether to arrange chairs and snacks to discourage large groups, you skip over your uncertain beliefs and go straight to your stance. And since you decided it, it is certain, and you can rearrange chairs with the vigor and single-mindedness of a person who knows where they stand. You can confidently declare your opposition to large groups, and unite followers in a broader crusade against giant circles. And if at the ensuing party people form a large group anyway and seem to be really enjoying it, you will hopefully notice this the way you wouldn’t if you were merely uncertain-leaning-against regarding the value of large groups.

That might have been confusing, since I don’t know of good words to describe the type of mental attitude I’m proposing. Here are some things I don’t mean by ‘I deem large group conversations to be bad’:

  1. “Large group conversations are bad” (i.e. this is not about what is true, though it is related to that.)
  2. “I declare the truth to be ‘large group conversations are bad’” (i.e. This is not of a kind with beliefs. It is not directly about what is true about the world, or empirically observed, though it is influenced by these things. I do not have power over the truth.)
  3. “I don’t like large group conversations”, or “I notice that I act in opposition to large group conversations” (i.e. it is not a claim about my own feelings or inclinations, which would still be a passive observation about the world)
  4. “The decision-theoretically optimal value to assign to large groups forming at parties is negative”, or “I estimate that the decision-theoretically optimal policy on large groups is opposition” (i.e. it is a choice, not an attempt to estimate a hidden feature of the world.)
  5. “I commit to stopping large group conversations” (i.e. It is not a commitment, or directly claiming anything about my future actions.)
  6. “I observe that I consistently seek to avert large group conversations” (this would be an observation about a consistency in my behavior, whereas here the point is to make a new thing (assign a value to a new variable?) that my future behavior may consistently make use of, if I want.)
  7. “I intend to stop some large group conversations” (perhaps this one is closest so far, but a stance isn’t saying anything about the future or about actions—if it doesn’t get changed by the future, and then in future I want to take an action, I’ll probably call on it, but it isn’t ‘about’ that.)

Perhaps what I mean is most like: ‘I have a policy of evaluating large group discussions at parties as bad’, though using ‘policy’ to mean a choice about an abstract variable that might apply to action, not in the sense of a commitment.

What is going on here more generally? You are adding a new kind of abstract variable between beliefs and actions. A stance can be a bit like a policy choice on what you will treat as true, or on how you will evaluate something. Or it can also be its own abstract thing that doesn’t directly mean anything understandable in terms of the beliefs or actions nearby.

Some ideas we already use that are pretty close to stances are ‘X is my priority’, ‘I am in the dating market’, and arguably, ‘I am opposed to dachshunds’. X being your priority is heavily influenced by your understanding of the consequences of X and its alternatives, but it is your choice, and it is not dishonest to prioritize a thing that is not important. To prioritize X isn’t a claim about the facts relevant to whether one would want to prioritize it. Prioritizing X also isn’t a commitment regarding your actions, though the purpose of having a ‘priority’ is for it to affect your actions. Your ‘priority’ is a kind of abstract variable added to your mental landscape to collect up a bunch of reasoning about the merits of different things, and package them for easy use in decisions.

Another way of looking at this is as a way of formalizing and concretifying the step where you look at your uncertain beliefs and then decide on a tentative answer and then run with it.

One can be confident in stances, because a stance is a choice, not a guess at a fact about the world. (Though my stance may contain uncertainty if I want, e.g. I could take a stance that large groups have a 75% chance of being bad on average.) So while my beliefs on a topic may be quite uncertain, my stance can be strong, in a sense that does some of the work we wanted from strong beliefs. Nonetheless, since stances are connected with facts and values, my stance can be wrong in the sense of not being the stance I should want to have, on further consideration.

In sum, stances:

  1. Are inputs to decisions in the place of some beliefs and values
  2. Integrate those beliefs and values—to the extent that you want them to be—into a single reusable statement
  3. Can be thought of as something like ‘policies’ on what will be treated as the truth (e.g. ‘I deem large groups bad’) or as new abstract variables between the truth and action (e.g. ‘I am prioritizing sleep’)
  4. Are chosen by you, not implied by your epistemic situation (until some spoilsport comes up with a theory of optimal behavior)
  5. Therefore don’t permit uncertainty in one sense, and don’t require it in another (you know what your stance is, and your stance can be ‘X is bad’ rather than ‘X is 72% likely to be bad’), though you should be uncertain about how much you will like your stance on further reflection.
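
As a loose illustration of that structure, here is a minimal programming sketch of a stance as a chosen variable sitting between uncertain beliefs and actions. The names and numbers are invented for the example, not anything proposed above:

```python
from dataclasses import dataclass

# A loose sketch (invented names): a stance is set once, by choice, on top of
# uncertain beliefs and values, and decisions then read the stance rather than
# re-deriving anything from the underlying uncertainty.

@dataclass
class Stance:
    topic: str
    verdict: str        # e.g. "bad", not "72% likely to be bad"
    rationale: str      # the beliefs/values it integrates, kept for later revision

# Beliefs stay uncertain...
belief_large_groups_bad = 0.6   # a shaky 60% guess

# ...but the stance chosen on top of them is definite.
large_groups = Stance(
    topic="groups of more than five at parties",
    verdict="bad",
    rationale=f"limited observation of a few parties; {belief_large_groups_bad:.0%} guess that they are bad",
)

def arrange_chairs(stance: Stance) -> str:
    # Decisions consult the stance, not the raw probability.
    if stance.verdict == "bad":
        return "small clusters of five chairs"
    return "one big circle"

print(arrange_chairs(large_groups))
```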

I have found having stances somewhat useful, or at least entertaining, in the short time I have been trying them, but this is a speculative suggestion with no other evidence behind it, rather than trustworthy advice.

9 comments

Comments sorted by top scores.

comment by Richard_Kennaway · 2019-10-15T08:43:46.377Z · LW(p) · GW(p)

What you're calling a stance seems to me a case of decisions. "What sizes of groupings work best at a party?" is something one can form a belief about in some continuous space of possible beliefs. "I will arrange the seating in groups of no more than five" is a decision.

Beliefs are continuous. Decisions and actions are discrete. Deciding to do this rather than that does not require one to deceive oneself into certainty that this is better than that. It only requires, well, deciding. Decision should screen off belief from action. If it does not, it was not a decision.

comment by cousin_it · 2019-10-16T10:01:52.216Z · LW(p) · GW(p)

Are successful people unusually confident and optimistic (less rational than average) or unusually good at noticing and taking opportunities (more rational than average)?

comment by Connor_Flexman · 2019-10-15T21:51:20.490Z · LW(p) · GW(p)

Not core, but when you say

(I don’t know if this is related, but it seems interesting to me that the human mind feels as though it lives in ‘the world’—this one concrete thing—though its epistemic position is in some sense most naturally seen as a probability distribution over many possibilities.)

It's notable that some plausible probabilistic models of neuroscience seem to be set up such that only one path actually fires (is experienced), and the probability only comes in at the level of the structure that weights which path fires.

comment by Vanessa Kosoy (vanessa-kosoy) · 2019-10-15T21:16:23.071Z · LW(p) · GW(p)

This makes me think of Thompson sampling. There, on each round/episode you sample one hypothesis out of your current belief state and then follow the optimal action/policy for this hypothesis. In fact, Thompson sampling seems like one of the most natural computationally efficient algorithms for approximating Bayes-optimal decision making, so perhaps it is not surprising if it's useful for real life decision making too.
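
For concreteness, here is a minimal sketch of Thompson sampling on a two-armed Bernoulli bandit; the setup and numbers are illustrative assumptions, not anything from the comment. Each round samples one concrete hypothesis from the current belief state and then acts optimally for that hypothesis, as described above.

```python
import random

# Thompson sampling for a two-armed Bernoulli bandit (illustrative setup).
# Each arm's unknown success rate gets a Beta(successes + 1, failures + 1) posterior.

true_rates = [0.3, 0.6]     # hidden; only used here to simulate feedback
successes = [0, 0]
failures = [0, 0]

for _ in range(1000):
    # Sample one concrete hypothesis (a success rate per arm) from the posterior.
    sampled = [random.betavariate(successes[a] + 1, failures[a] + 1) for a in range(2)]
    # Follow the optimal action for that sampled hypothesis.
    arm = sampled.index(max(sampled))
    reward = random.random() < true_rates[arm]
    if reward:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("pulls per arm:", [successes[a] + failures[a] for a in range(2)])
```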

comment by areiamus · 2019-10-16T20:12:56.996Z · LW(p) · GW(p)

Meta: are you republishing this piece from somewhere else? I subscribe to LW (and EAF) with RSS and over the past few days I've had all of your previous posts inserted into my feed three times. Is this likely to be some issue with LW, or an integration with your personal blog?

Replies from: habryka4
comment by habryka (habryka4) · 2019-10-17T00:39:33.187Z · LW(p) · GW(p)

It's an issue with LW. Since RSS doesn't provide unique IDs for posts, we are currently determining whether a post is new in an RSS feed on the basis of its link-field, which seems to have changed two times on Katja's blog for some reason in the past two days (a bunch of wordpress settings influence this, so she likely changed some setting somewhere).

This definitely isn't Katja's fault, and we should improve our algorithm to figure out whether a post in an RSS feed has already been marked as imported.

Replies from: pjeby
comment by pjeby · 2019-10-17T01:52:46.828Z · LW(p) · GW(p)

Since RSS doesn't provide unique IDs for posts,

I notice that I am confused, as RSS has a guid field for precisely this purpose. Is it that LW's RSS generation does not include it, or is it some other site producing the RSS?

Replies from: habryka4
comment by habryka (habryka4) · 2019-10-17T03:31:02.130Z · LW(p) · GW(p)

Oh yeah, I remember experimenting with that, though I ended up running into problems similar to those with comparing links in the wordpress case. I remember the ID changing depending on some kind of context, though I don't remember the exact thing (this code was some of the first code I wrote for the new LessWrong, so it's been a while).

I do think this is a pretty straightforwardly solvable problem; we just haven't put much effort into it, since it hasn't been much of a problem in the past.

Is it that LW's RSS generation does not include it, or is it some other site producing the RSS?

This is talking about RSS imports, so we are consuming an RSS feed from an external site, and parsing it into a LessWrong post. So we don't really have control over what data is available.
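
For illustration, here is a rough sketch of the kind of dedup logic this thread is discussing, assuming Python's feedparser library on the consuming side; the guid-then-link fallback is one plausible approach, not a description of LessWrong's actual import code.

```python
import feedparser

# Sketch of deduplicating imported posts from an external RSS feed
# (not LessWrong's actual code). Prefer the entry's guid/id when the feed
# provides one; fall back to the link field, which is what breaks when a
# blog's link settings change.

def entry_key(entry):
    # feedparser exposes an RSS <guid> or Atom <id> as entry.id when present.
    if entry.get("id"):
        return ("guid", entry["id"])
    return ("link", entry.get("link"))

already_imported = set()   # in practice this would be persisted somewhere

def new_entries(feed_url):
    feed = feedparser.parse(feed_url)
    fresh = []
    for entry in feed.entries:
        key = entry_key(entry)
        if key not in already_imported:
            already_imported.add(key)
            fresh.append(entry)
    return fresh
```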

comment by Pattern · 2019-10-16T19:01:18.678Z · LW(p) · GW(p)

Some people can tell that you are either lying or believing false things, due to your boldly claiming things in this uncertain world. They will then suspect your epistemic and moral fiber, and distrust everything you say.

At most, only the subset of people for whom this changes depending on what you do/say should be taken into account.