Opinionated Uncertainty

post by DirectedEvolution (AllAmericanBreakfast) · 2021-06-29T00:11:32.183Z · LW · GW · 1 comment


Real Life: It's More Than Mere Algebra!

The expected value (E) of an event is the product of its value (V) and its probability (P):

E = VP.

Considered algebraically, V and P play equal roles in determining E. It wouldn't make sense to say that "the magnitude of the consequences of an event is more important than its probability." However, the E = VP equation draws attention away from other questions where a statement like this makes more sense. For example, we will always have some uncertainty in our estimates of V and P. If it is cheaper to reduce our uncertainty about V than about P by a given amount through further study, then it's more important to study V.
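To make that last point concrete, here is a minimal Python sketch. It treats our uncertainty about V and P as intervals of possible values (all numbers are made up for illustration) and asks which kind of study shrinks the range of possible expected values more:

```python
def ev_interval(v_range, p_range):
    """Range of possible expected values E = V * P, given interval
    uncertainty about V and P (all endpoints assumed non-negative)."""
    (v_lo, v_hi), (p_lo, p_hi) = v_range, p_range
    return v_lo * p_lo, v_hi * p_hi

def spread(v_range, p_range):
    """Width of the range of possible expected values."""
    lo, hi = ev_interval(v_range, p_range)
    return hi - lo

# Made-up baseline: V is somewhere in [50, 150], P somewhere in [0.1, 0.5].
base = spread((50, 150), (0.1, 0.5))           # 75 - 5 = 70
# A study that halves our uncertainty about V...
after_v_study = spread((75, 125), (0.1, 0.5))  # 62.5 - 7.5 = 55
# ...versus a study that halves our uncertainty about P.
after_p_study = spread((50, 150), (0.2, 0.4))  # 60 - 10 = 50
```

With these particular numbers the P-study narrows the range slightly more, but if the V-study were much cheaper to run, it could still be the better buy. The point is only that the two studies need not be equally valuable, even though V and P enter E = VP symmetrically.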

Let's use two examples, one to show when we might be agnostic about whether it's easier to study V or P, and another where we might have strong intuitions that we can gain more by studying one than the other.

For an example of the "agnostic" situation (we have no strong intuitions about whether V or P will be easier to study), imagine that your friend is going to drive to the pub to have a couple beers. His plan is to sober up, then drive home. This is based on his intuition that he won't drink more than a couple beers, and that even if he did, he'd just call a cab. Fortunately, your friend is very responsive to rational argument, and very open to answering your questions honestly. He understands that his risk/benefit calculation is subject to uncertainty, and would change his mind if further investigation changed the result.

You are afraid that he's making a flawed risk/benefit calculation, but you only have five minutes to do your research before he heads out the door. Would you spend that five minutes trying to research the pros and cons of driving himself vs. taking a cab (V)? Or would you try to better estimate the likelihood that your friend gets into an accident on the way home, despite his prediction that he'll have "only a couple beers" (P)?

Determining which is easier to research might depend on the specific situation. But it's easy to see that they're not likely to be equally easy. Your brain might first quickly consider the various factors you might research. Is your friend concerned about factors beyond the cost of cab fare? Is he perhaps hoping that driving himself will act as a commitment device to prevent himself from drinking too much? Will he have to drive any notoriously dangerous roads along the way? How far will he be driving, and how fast? Has your friend ever gotten more drunk than he intended in the past? Might he feel too embarrassed to admit he was too drunk to drive, and thus refuse to call a cab? Or might he feel it would be too inconvenient to return to the pub the next day to pick up his car, should he drive to the pub but take a cab home?

Notice that this brainstorm includes both questions about consequences and questions about likelihoods. This is my authentic brainstorm, conducted in the natural course of writing this article. It might be cheap to gather information on these questions, simply by putting them to your friend one by one and seeing if any provokes an "oh yeah, hadn't thought of that!" response.

As a second example, imagine that you are considering running a clinical study of a new drug. If you don't run the study, you gain no information about what percentage of patients experience a benefit or side effect (P), nor about how large the benefit or how harmful the side effects are (V). If you do run the study, then it's relatively cheap and easy to measure both. Information about P and V is bundled.

Let's also consider a couple of examples where we have stronger intuitions about whether it's cheaper to reduce our uncertainty about value or probability. First, imagine that you are organizing a poker night with friends. Buy-in is $100 each, and six people will attend. You know that you could gain as much as $500, or lose as much as $100, and you know that the likelihood of a randomly selected player winning is 1/6. Beyond that, there is little you can do to improve your understanding of value, but much you can do to reduce your uncertainty about probabilities. You might try to guess whether your friends are experienced poker players, or whether they're any good at bluffing. You might also study the strategy of poker, under the assumption that this will increase your chances of winning.
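The baseline here can be checked directly with E = VP applied to each outcome: with a $100 buy-in and six players, a randomly selected winner has an expected value of exactly $0. The sketch below makes that explicit and shows why studying strategy is a bet on moving P (the improved winning probability of 1/4 is an illustrative guess, not a claim about how much study helps):

```python
# Six players, $100 buy-in each: the $600 pot nets the winner $500,
# and every other player loses their $100 buy-in.
def poker_ev(p_win):
    return p_win * 500 + (1 - p_win) * (-100)

ev_random = poker_ev(1 / 6)   # a randomly selected winner: $0 in expectation
ev_studied = poker_ev(1 / 4)  # hypothetical improved odds after study: $50
```

Notice that there is nothing left to learn about V: the payoffs are fixed by the buy-in and the head count. All of the improvable uncertainty sits in P, which is why studying the players and the game is the natural move.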

By contrast, imagine that you have cancer. Your doctor tells you that of people with your diagnosis, about 20% survive without a bone marrow transplant and chemotherapy. Those who forego it almost always die within a year, but are able to be relatively comfortable in hospice in the intervening time. 50% survive with the therapy. However, the therapy itself is very harsh, and you'll most likely be permanently disabled due to the previous effects of the cancer.

In this situation, it might be very hard to determine whether you're more likely than the average patient to experience spontaneous remission, survive the therapy, or suffer a permanent disability. What you can do is try to learn more about what each of these experiences are like. How do elderly people who suffer a permanent disability that puts them in a wheelchair tend to cope with it? How does it feel to undergo this therapy? What would you miss the most if you died sometime in the next year? These sorts of questions all address the value of these events, while accepting the given level of uncertainty about their probabilities.
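To see why V research pays here, consider a deliberately crude expected-value sketch. Every quality-of-life number below is an illustrative guess on a 0-100 scale; only the two survival probabilities come from the example:

```python
# Illustrative quality-of-life values (0-100). These are exactly the
# numbers that "V research" -- talking to patients, hospice staff, and
# people living with the disability -- would refine.
v_remission = 80          # survive without therapy (spontaneous remission)
v_survive_disabled = 55   # survive the therapy, permanently disabled
v_hospice_year = 15       # decline therapy: a comfortable final year
v_harsh_death = 5         # undergo the harsh therapy, but don't survive

ev_no_therapy = 0.2 * v_remission + 0.8 * v_hospice_year      # 28.0
ev_therapy = 0.5 * v_survive_disabled + 0.5 * v_harsh_death   # 30.0
```

With these guesses the two options land within a couple of points of each other, so modest revisions to any single V estimate can flip the decision. That is exactly the situation in which refining your values beats chasing sharper probabilities you can't obtain.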

We can also imagine a situation in which it seems like we should be agnostic between researching P or V, but where in reality it is much easier to reduce uncertainty about one or the other. Perhaps there are ways we can research the meta-question of whether P or V is easier to understand better. There are reasons why we might shy away from focusing on this meta-question. Such an analysis takes time away from researching P or V directly. Furthermore, the way we research this question might involve doing some research on both P and V, and forming a judgment about which seems to be more productive. We might even progress in iterative cycles, repeatedly "sampling" the returns on investing in both P and V and comparing the value of the information gained in each line of inquiry. If one or the other seems to offer especially large historical returns on investment, then we might shift more research time to that area for the next round of investigation.
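That iterative "sampling" procedure can be sketched as a simple greedy loop. The per-round information gains below are hypothetical inputs; in real life they are what you would estimate after each round of study:

```python
def greedy_research(gains_v, gains_p):
    """Fund research rounds one at a time, always backing whichever line
    of inquiry (V or P) has shown the larger average gain so far.
    `gains_v` / `gains_p` list the (hypothetical) information gained by
    each successive round of study in that line. An unsampled line counts
    as infinitely promising, so each line gets tried at least once.
    Returns the order in which rounds were funded."""
    got_v, got_p, order = [], [], []
    iv = ip = 0
    while iv < len(gains_v) or ip < len(gains_p):
        avg_v = sum(got_v) / len(got_v) if got_v else float("inf")
        avg_p = sum(got_p) / len(got_p) if got_p else float("inf")
        if iv < len(gains_v) and (avg_v >= avg_p or ip >= len(gains_p)):
            got_v.append(gains_v[iv]); iv += 1; order.append("V")
        else:
            got_p.append(gains_p[ip]); ip += 1; order.append("P")
    return order

# If studying V keeps paying off (5, then 4, then 3 units of information)
# while studying P disappoints (1, 1), the loop samples both, then
# concentrates on V until it is exhausted.
plan = greedy_research([5, 4, 3], [1, 1])
```

This is just the "explore, then exploit the better line" heuristic. A fuller treatment would discount for diminishing returns, since a line's past gains tend to overstate its future ones.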

However, it might sometimes be cheaper to spend time determining whether P or V is a more useful variable to research. Here, we are looking for cases where P and V aren't easily bundled, and possibly where directly "sampling" them enough to extrapolate is costly. For example, imagine Bellamy is at a party, considering who to flirt with. As they survey the room, Bellamy worries that if they're seen to flirt with one person, it will make others jealous and less likely to be interested. Observing the room gives Bellamy a little bit of information about how attractive each person seems (V), though they know not to judge a book by its cover. Significant uncertainty still remains in V. Bellamy is also uncertain about how attractive others will find them to be (P). However, Bellamy reflects that the party's host (who Bellamy's not interested in dating) knows who at the party is single and who is not. Bellamy can obtain additional information on P without risking jealousy via a futile flirtation with somebody who's already in a relationship!

Bellamy's thought process here is addressing the meta-question of whether P or V is easier to understand better (by identifying an additional piece of non-obvious, but easy-to-obtain and useful information), prior to actually doing any direct research (flirting, or interrogating the host). This distinction between research and meta-research is a little artificial. After all, in retrospect, wasn't Bellamy's thought process just a way of researching P directly, by identifying methods of gaining more information about P? The reason I find this distinction useful is that Bellamy could continue their brainstorm indefinitely, standing frozen in the doorway in an analytical trance, coming up with more and more ideas of how to gain information about the dateability of the partygoers. Yet until they put one of those plans into action, they would have, if prompted, no more information about P or V than when they started. By contrast, if they asked the host who at the party is single, or started flirting, they would gain information about P and V.

Identity: The Cilantro Of Decision-Making

This whole discussion is merely putting words to intuitive choices that people make all the time, hardly thinking about it at all. Is there a mechanical way to make these distinctions? My guess is that some of our facility with these choices has to do with accumulated cultural knowledge about how to deal with the uncertainty present in specific situations. We decide how to research by imitation, taking advice on how to strategically reduce our uncertainty from people who seem to be more experienced. If you've never run a business, you probably at least know that you should make a business plan first. You know this because the phrase "business plan" has been uttered often enough that it's strongly associated with the idea of starting a new business. Some of our facility with uncertainty reduction strategy must also have to do with our ability to imagine how scenarios play out.

In novel and relatively abstract situations, how do we choose a strategy for how to reduce our uncertainty? When there is little culture to guide us, our imagination fails, and we have little faith in our ability to sort out good advice from bad, how do we decide whether to focus on P and/or V, and how to go about it? I can't really say. Clearly, there is a prospect of infinite recursion here, resulting in analysis paralysis. We plan how to plan how to plan. Yet people do ultimately move from planning, to decision, to action, in ways that they reflectively endorse. Time presses us to act based on what we know, and this is as true when we are "planning how to plan" as when we are planning for action. Bellamy can only stay in the doorway for so long before they begin to feel awkward, or their worry that the party is moving along without them supersedes their worry about a suboptimal flirtation strategy. Perhaps we can include this in the list of tasks that are easy for humans and hard for computers.

For me, one of the most interesting facets of the issue is the point that planning, and meta-planning, have costs, and not just in terms of time. In various aspects of life, people seem to find various forms of apparently cheap and useful information-gathering downright distasteful, and will act on that distaste even when the objective they are pursuing is very important to them. For example, some people find the idea of using a spreadsheet to evaluate and compare potential romantic partners to be equal parts horrible and ridiculous. Sometimes, they'll justify this by saying that this approach is likely to erase valuable but intuitive information, that it puts you in a non-romantic mindset, or that they're afraid of how the people they're dating would react if they discovered they were being evaluated in this way. Sometimes, it's just a hatred of putting numbers on experiences.

Similarly, using online dating services as a relatively fast way to find potential partners also strikes some people as lacking romance, and not just because it selects for the sort of people who use online dating services. They don't like describing themselves. They don't like the idea of "swiping" on people. They don't enjoy the prospect of a relationship whose origin story is "Tinder." And they especially don't like re-imagining their relationship with online dating.

Those who happily use such strategies to find and evaluate potential partners might find this reluctance to be silly. Why forego information on something that could be useful to you? Yet if information has a cost, and if it's even more costly to change the reason that information has a cost, then that's the end of the matter. Without wanting to push this argument too hard, we should perhaps have some respect for other people's taste in decision-making strategies, just as we might respect their preferences in what to eat, what to read, or who to love. Likewise, if we side against people's preferences in some area - if our sister tends to date awful men, or our father has bought Jordan Peterson's books and is now eating nothing but steak - then we might equally be said to be against their taste in decision-making. This idea also applies to our relationship with our own decision-making. Examination comes with costs that may outweigh the benefits, and some forms of inquiry might even work directly against our goals!

This is not to say that information-gathering is always, or even commonly harmful. If one line of inquiry is not helpful, that usually just means that some other area is worth our consideration. If all else fails, there is always Wikipedia's article on deep sea gigantism. It is simply to deny three simplistic ideas. One is the notion that P and V are "equally important," at least when it's functioning as a thought-stopper. The second is the idea that there is a single, best decision-making process for a certain person (including ourselves). We may not understand the full range of their goals, or may be failing to empathize with them completely. We're like an unaligned superintelligence, smart enough to help them achieve the goals we think they have, but not smart enough to help them achieve the other goals we didn't realize they also had. The third is that we should be relativists, respecting people's preferences no matter what they are. Sometimes, people are possessed by goals that they do not want to have, or that are manifestly bad for them. They want to think long-term about their health, but they are trapped in a short-term value system that makes them want to eat loads of ice cream every day. Sometimes, we might like that the superintelligence is "unaligned," at least for a certain definition of our "goals." We like that our personal trainer disapproves of the ice cream that makes us salivate.

Remember that aligning AI with human values and goals is an open dilemma, one of the primary challenges of the coming century. That's not just because an artificial superintelligence could be much smarter than us. It's also because giving an account of what our human values and goals are, or should be, is extremely difficult. Yet figuring out how to pursue your goals effectively - reducing your uncertainty about V and P - is something that we do every day. Whether you know it or not, you're routinely working on an unsolved mathematical problem!

We could call this perspective a state of being uncertain, but opinionated. The reason that people so often seem to have strong opinions, despite apparently lacking information, is that they are expressing their taste - not just for the decision at hand, but for the process of thought and action that led to this state of affairs in the first place. This is one reason why people clash, and clash again, without progressing toward mutual understanding. The problem is not a failure to think rationally. Instead, clashes are due to rational actions motivated by the participants' identities and the circumstances at hand. The identities have little to do with rationality, and encompass epistemics, morality, and strategy.

This sounds like an argument for relativism, but once again, it is not. We can be quite uncertain about the truth, while being very confident about a course of action given what we do know. We can chalk up a difference of opinion to other people's lack of access to information or culture, to their thoughtlessness or stress, to different constellations of goals surrounding the point of difference, or simply to different choices of words and little time to sort them out. At some time, most thoughtful people with a strong-yet-controversial opinion have the thought, "if so many people disagree with me, then why do I hold this opinion as strongly as I do?" Given extreme differences in human experience, the difficulty of articulating our inner experience, and the unfathomable amount there is to know, it should not be surprising at all that clashes are common and so often fruitless. It's not that there is no right answer. There is a right answer, but we simply don't have time to sort it out any better. At some point, whoever has agency will have to commit to a plan, and then work on the next round of information-gathering about how best to execute it.

In Search Of Ramekins

Another interesting question is how much we can gain over intuition by improving our skills in information-gathering. In many of the examples above (drunken driving, poker, clinical testing, cancer treatment), it seems fairly obvious to me whether one should prioritize P or V, or take an agnostic approach. Other times, as in the case of Bellamy at the party, there seems to be value in standing back and asking what sorts of information you could pursue, and imagining the costs and methods by which you might carry out such a search.

It might be easier to "win" arguments by prophylaxis rather than by persuasion. Of course, there is no such thing as epistemic "victory," in the sense of one person triumphing over another. The only victory is convergence on the truth. If you believe, as you should, that the information you have on a topic warrants your level of confidence on that topic, then it makes some sense to work to persuade others to accept both the information you have and update their confidence accordingly.

Perhaps, however, it is easier to sidestep this by debating how one should seek more information on a topic or decision, rather than by directly trying to persuade the other person of your beliefs. Would this work?

One scenario from my own life was my halfhearted attempt to persuade some men from Florida who I hiked with on a guided climb of Mt. Baker during the 2021 heatwave that climate change is a serious problem. What I actually did was tell them that the trail guides were seriously worried about the heat (which was true), and that what we were experiencing is not "normal weather." Their response was that they sort of... grunted? And that was the end of that. Could I have, somehow, drawn them into a conversation about how we should seek more information on climate change? In this situation, I was merely appealing to an authority who represented a view that I also shared. Perhaps instead, I could have asked them how people in Florida think about climate change. That might have offered enough plausible deniability that they could talk about their own views without being "held responsible" for them, while pointing away from matters of science and toward social gossip, which many people love to share. Clearly, I did not think of this strategy at the time, and I find it unlikely that I'd have thought of it should a similar situation occur in the future. Stepping back from the problem and analyzing it has at least given me something new to try, should the opportunity strike.

It seems that one of the great skills of life is the ability to take an inchoate sense of confused possibilities and goals, turn it into a question that's defined enough to try answering, and then step back to brainstorm how to answer it. That brainstorm might just turn up culturally normal or intuitive choices, and that can be fine. The difference is between Bellamy at the door staring into the room full of strangers in a swirling state of desire and doubt, and Bellamy transforming those emotions into a series of clear questions and answers. Personally, I find that brainstorming like this at my computer is much easier than thinking on my feet. But it also helps to have a lightweight framework like E = VP, and know how to use it.

1 comment

Comments sorted by top scores.

comment by Pattern · 2021-07-02T16:44:17.321Z · LW(p) · GW(p)
Fortunately, your friend is very responsive to rational argument, and very open to answering your questions honestly. He understands that his risk/benefit calculation is subject to uncertainty, and would change his mind if further investigation changed the result.

The solution isn't information, it's modifying the situation through action:

  • Limit the number of beers: take cash, and only so much. (Issues: calling that cab.)*
  • 'Risk of taking dumb action after drinking beers, in the form of driving drunk': Go. You don't drink, so you can drive or call the cab. (May have issues.)

*Similar style technique: pay for a cab over, so there isn't a car to drive back while drunk?

(The above doesn't make reference to expected utility explicitly.)


You might also study the strategy of poker, under the assumption that this will increase your chances of winning.

Not necessarily about probability. And yet it's a working heuristic. (I could say the heuristic works, if monotonicity holds, but someone who thinks of that and uses that, doesn't need to 'know about monotonicity'.)


By contrast, imagine that you have cancer.

The way that conclusion is arrived at might be illuminating, for the purpose you have in mind.


almost always

The probability might be useful here - though the information is not actually in the form of a probability.


In this situation, it might be very hard to determine whether you're more likely than the average patient to experience spontaneous remission, survive the therapy, or suffer a permanent disability.

Actually, you can examine risk factors, or things you have that might be risk factors. For example, maybe if you have diabetes, then that changes the risk - specifically, increases it. There are also factors like your age.


and we have little faith in our ability to sort out good advice from bad, how do we decide whether to focus on P and/or V, and how to go about it?

That part seems rather key to your argument. Maybe people normally solve the problem by...conducting research, and seeing if they think they have a better idea of what to do after doing the research/a better understanding.


For example, some people find the idea of using a spreadsheet to evaluate and compare potential romantic partners to be equal parts horrible and ridiculous.

How do they usually feel about...taking action and acquiring more information? i.e. finding out more about said potential partners, doing stuff together, finding out who also has an interest in them, etc.?

(Spreadsheet repulsion doesn't sound like it's about gathering information, imo.)


or our father has bought Jordan Peterson's books and is now eating nothing but steak
...
then we might equally be said to be against their taste in decision-making.

Does eating only steak cause gout?

If someone you know likes going to a fast food restaurant and you don't want to (and suggest not going) because recent events make you think that the risk of food poisoning is high, and they go anyway and get sick, they might change their mind about the risks. 'I've never had food poisoning before, but having it now, I understand why you avoid it. This is awful.'


It is [simple] to deny three simplistic ideas.

ice cream

A desire to eat delicious foods, versus a desire to eat healthy foods, might be resolved by finding more foods that are both.


The identities have little to do with rationality, and encompass epistemics, morality, and strategy.

identity? Sounds more like strategy.

Arguably, the point of a strategy is saving time. Arguing with other people about strategy may just lose time, with nothing to show for it.


At some point, whoever has agency will have to commit to a plan, and then work on the next round of information-gathering about how best to execute it.

At some point, you have to stop planning and execute the plan. (Not strictly true, in that planning can occur within and after action, but there is a difference between action and not action that often shows up around 'planning'.)


The only victory is convergence on the truth.

There can be 'victory in the world'. If different people disagree about what strategy to use, all go out and use their strategy, and all succeed, then arguably there isn't a truth issue. Some things can be accomplished multiple ways. Disagreements about the 'best way' may be meaningless.


If you believe, as you should, that the information you have on a topic warrants your level of confidence on that topic, then it makes some sense to work to persuade others to accept both the information you have and update their confidence accordingly.

Why? Do you need others help to execute the plan, or is 'consensus' your goal? (It need not be a value that everyone has, though depending on circumstance it may be useful.)


Bellamy

This was a good example because the 'planning' and 'gaining information' reduced the available possibilities in a useful way for the goal. (While action also had this property, the two routes were arguably somewhat outcome equivalent, while one was faster - talking to the host.)