Local Trust

post by ben_levinstein (benlev), Daniel Herrmann (Whispermute), Aydin Mohseni (aydin-mohseni) · 2025-02-24T19:53:26.953Z · LW · GW · 2 comments

Contents

  Local Trust Without Modesty
    The Surprising Gap Between Trust and Reflection
  Why This Matters
  Looking Ahead

In the last post [LW · GW], we developed a new way of thinking about deference: you defer to someone when you'd prefer to let them make decisions on your behalf. This helped solve some puzzles about modest experts—experts who express uncertainty about their own expertise.

But even if you're skeptical of expert modesty, there's something else important about real expertise that we need to understand: it's almost always limited in scope. The weather forecaster is an expert about tomorrow's weather but not about your schedule. The doctor is an expert about your treatment but not about your career choices. The AI researcher is an expert about technical developments but not about your personal relationships.

This is obvious in practice, but it raises a theoretical question: How does deference work when we want to listen to someone about some questions but not others? We've so far focused on global deference—deferring to an expert's opinions about everything. But most real expertise isn't like that. We need a theory of local deference that explains when and why it makes sense to trust someone about specific topics.

It turns out that our "deference as decisions" framework gives us exactly that. And surprisingly, the distinction between trust and reflection (principles we discussed last time [LW · GW]) matters even if experts are completely certain of their own expertise. You can totally trust someone about the weather without reflecting their opinions about the weather—and this has nothing to do with whether they're modest about their expertise[1].

Local Trust Without Modesty

Let's start with a clear example. The National Weather Service is excellent at predicting the weather. When they say there's a 70% chance of rain tomorrow, that estimate incorporates complex meteorological models, vast amounts of data, and deep knowledge about weather systems. You should probably use their forecasts when deciding whether to bring an umbrella.

But if, for some reason, the Weather Service started offering forecasts on events totally disconnected from the weather—like NVIDIA stock prices in ten years—you should probably ignore them.

This is local deference in action: using someone's probabilities for decisions that depend on their domain of expertise, while using your own probabilities for everything else. Notice that this has nothing to do with whether the Weather Service is modest[2] about their expertise. They might be completely confident they're the relevant experts about weather predictions. The point is simply that their expertise is bounded in scope.

What we need is a precise account of what this kind of local deference involves. This is where Total Trust comes in—the local version of Total Trust turns out to capture exactly what we want for outsourcing decision-making.

The Surprising Gap Between Trust and Reflection

Recall from last time that you totally trust someone if, given that their estimate of some quantity (the number of inches of rain tomorrow, for example) is above (below) some threshold, yours is also above (below) that threshold. The global version requires you to defer in this way for any quantity; the local version is restricted to particular domains like the weather, or rain, or anything else.[3] Reflection, on the other hand, requires exact agreement: if you learn that their estimate of some quantity is exactly $x$, then your estimate of that quantity is exactly $x$ as well. In the global case, total trust collapses to reflection when experts are immodest. But in the local case, as we'll see, it does not.
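In symbols, one natural way to state the two conditions for a given quantity $X$ (using $\mathbb{E}$ for your estimates and $\mathbb{E}'$ for the expert's, as in footnote 3) is:

$$\text{Total trust in } X:\quad \mathbb{E}[X \mid \mathbb{E}'[X] \geq t] \geq t \quad\text{and}\quad \mathbb{E}[X \mid \mathbb{E}'[X] \leq t] \leq t \quad\text{for every threshold } t.$$

$$\text{Reflection for } X:\quad \mathbb{E}[X \mid \mathbb{E}'[X] = x] = x \quad\text{for every value } x.$$

Reflection pins your estimate to the expert's exact value; total trust only requires you to land on the same side of every threshold, which is strictly weaker.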

Let's look at a concrete example of how local trust differs from reflection, even without any modesty. Consider a weather forecaster, Alice, who makes predictions about whether it will rain tomorrow. She's an excellent forecaster whose confidence in rain is always either 100%, 75%, 25%, or 0%. Suppose you have the following information about her predictions:

Your overall probability of rain is 40%. But notice: When you learn Alice is at least 75% confident of rain, your probability becomes 75%.[4] You trust her high confidence—when she's confident, you become confident too.

But you don't reflect her opinions exactly. When you learn she's 75% confident, you become 70% confident—not 75%. Interestingly, this isn't because Alice is modest (she might be completely certain of her expertise). It's simply because total trust is a weaker requirement than reflection.
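To make the example fully concrete, here's a minimal Python sketch. The joint distribution below is a hypothetical filling-in—the exact numbers for Alice's announcements are our assumption, not given in the text—chosen so that it satisfies total trust and matches every figure above: your prior is 40%, conditioning on her being at least 75% confident gives 75%, and conditioning on her being exactly 75% confident gives 70%.

```python
# Hypothetical joint distribution for the Alice example (assumed numbers,
# chosen to satisfy total trust and match the figures in the text).

p_announce = {1.00: 0.06, 0.75: 0.30, 0.25: 0.52, 0.00: 0.12}    # your P(A = a)
p_rain_given = {1.00: 1.00, 0.75: 0.70, 0.25: 0.25, 0.00: 0.00}  # your P(rain | A = a)

def prob_rain_given(event):
    """Your probability of rain, conditional on Alice's credence satisfying `event`."""
    num = sum(p_announce[a] * p_rain_given[a] for a in p_announce if event(a))
    den = sum(p_announce[a] for a in p_announce if event(a))
    return num / den

print(round(prob_rain_given(lambda a: True), 4))       # 0.4   -- your prior
print(round(prob_rain_given(lambda a: a >= 0.75), 4))  # 0.75  -- trust at the 75% threshold
print(round(prob_rain_given(lambda a: a == 0.75), 4))  # 0.7   -- no reflection: not 0.75
```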

Here's why this makes sense in terms of decisions: Imagine you know there will be some bet on whether it rains—you'll win $x if it rains and lose $y if it doesn't. You have two options:

  1. Decide for yourself whether to take the bet when it's offered
  2. Let Alice make the decision on your behalf

You have to make this first choice—whether to delegate or not—before finding out what Alice's credences are. Even though Alice isn't perfectly calibrated when she's 75% confident (she's actually overconfident), you'd still be better off letting her decide than deciding for yourself: no matter what the payoffs x and y turn out to be, you'll do at least as well in expectation letting her choose, and sometimes strictly better.

This is what total trust amounts to: You prefer to use her probabilities for decisions even though you wouldn't match them exactly.
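Here's a sketch of that comparison, reusing the same hypothetical distribution as above. For a spread of payoff values, delegating to Alice never does worse in expectation than deciding by your prior, and sometimes does strictly better.

```python
# Delegation vs. deciding yourself, under the assumed joint distribution
# from the previous sketch (hypothetical numbers, not from the post).

p_announce = {1.00: 0.06, 0.75: 0.30, 0.25: 0.52, 0.00: 0.12}
p_rain_given = {1.00: 1.00, 0.75: 0.70, 0.25: 0.25, 0.00: 0.00}
p_rain = sum(p_announce[a] * p_rain_given[a] for a in p_announce)  # 0.40

def bet_value(p, x, y):
    """Expected payoff of taking the bet (win x if rain, lose y if dry) at probability p."""
    return p * x - (1 - p) * y

def ev_self(x, y):
    """You decide with your prior: take the bet iff it looks good at 40%."""
    return max(bet_value(p_rain, x, y), 0.0)

def ev_delegate(x, y):
    """Alice takes iff the bet looks good by her credence a; you score the
    outcome with your conditional probability P(rain | A = a)."""
    return sum(p_announce[a] * bet_value(p_rain_given[a], x, y)
               for a in p_announce
               if bet_value(a, x, y) >= 0)

for x, y in [(1, 1), (3, 2), (1, 4), (2, 1)]:
    assert ev_delegate(x, y) >= ev_self(x, y) - 1e-12  # delegation never loses
    print(f"x={x}, y={y}: delegate={ev_delegate(x, y):.3f}, self={ev_self(x, y):.3f}")
```

At even money (x = y = 1), for example, your prior of 40% says to decline (expected value 0), while delegating to Alice yields a strictly positive expected payoff.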

Why This Matters

This gap between trust and reflection in the local context, even when no modesty is involved, has several implications for how we think about expert deference.

First, it connects naturally to accuracy: you totally trust someone about a question if and only if you expect their estimates to be at least as accurate as yours, no matter how we reasonably measure accuracy.[5] (We touched on this connection to accuracy in the first post; here it holds merely locally.) This means trust formalizes a natural notion of epistemic superiority—you trust someone's judgment precisely when you expect them to be closer to the truth than you are.

But here's something surprising: Even when dealing with complete epistemic superiors, we can sometimes add value by combining their judgment with our own. In our weather example, even though Alice is clearly better at forecasting than we are, when we learn her forecast is 75%, our best estimate is 70%. We'd rather use Alice's probabilities than our own unconditional ones for making decisions—but even better is combining them together and using our own posterior credence.
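The same toy numbers illustrate the ordering. Measuring inaccuracy with the Brier score (squared error; lower is better)—just one of the reasonable accuracy measures—your posterior beats adopting Alice's credence outright, which in turn beats your prior:

```python
# Expected inaccuracy (Brier score) of three forecasting policies, under
# the assumed joint distribution from the earlier sketches.

p_announce = {1.00: 0.06, 0.75: 0.30, 0.25: 0.52, 0.00: 0.12}
p_rain_given = {1.00: 1.00, 0.75: 0.70, 0.25: 0.25, 0.00: 0.00}
p_rain = sum(p_announce[a] * p_rain_given[a] for a in p_announce)  # 0.40

def expected_brier(forecast):
    """Expected squared error of a forecast given as a function of Alice's announcement."""
    return sum(p_announce[a] * (p_rain_given[a] * (forecast(a) - 1) ** 2
                                + (1 - p_rain_given[a]) * forecast(a) ** 2)
               for a in p_announce)

print(expected_brier(lambda a: p_rain))           # prior:     0.24
print(expected_brier(lambda a: a))                # Alice:     0.16125
print(expected_brier(lambda a: p_rain_given[a]))  # posterior: 0.1605
```

If you reflected Alice exactly, the posterior line and Alice's line would coincide, and there would be no gap left for your own information to close.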

This is only possible because we trust without reflecting. If we reflected Alice's opinions exactly, we'd have no value to add—we'd just copy her 75% probability. But with trust, we can see that total epistemic superiority does not mean our own credence carries no usable information.

This matches how we actually consult domain experts. When a doctor tells you there's a 70% chance a treatment will work, you might combine that with your knowledge of your own medical history to make a more informed estimate. Although the doctor's credence is, by your own lights, strictly better for action than your prior credence, combining your own insights and perspective still adds value. Your posterior credence updated on the doctor's credence is better, in your view, than the doctor's credence, which is better in turn than your prior credence. 

This is the main difference with reflection. If you reflect the doctor, your posterior credence updated on the doctor's credence just is the doctor's credence. Your own perspective has no additional value. In effect, you think your own connection to the truth is screened off by the doctor's if you reflect her.  

The key insight is that trusting someone's judgment doesn't require completely deferring to their view. It just requires that you expect to do at least as well by using their probabilities as by using your own—while still leaving room to add value by combining perspectives.

Looking Ahead

We've seen how Total Trust gives us a precise framework for bounded deference—one that allows us to formalize what it means to trust someone's judgment about specific questions while retaining your own judgment about others. This matters not just for everyday expert consultation, but for fundamental questions about delegation and control.

When we defer to experts, we're often facing a principal-agent problem: We want to leverage their superior knowledge while still maintaining appropriate oversight. Sometimes we'll want to fully delegate decisions to them, but other times we'll do better by combining their expertise with our own information. Total Trust helps us understand exactly when each approach makes sense—when to defer completely, and when to use our own conditional probabilities instead of the expert's raw ones.

In our final post, we'll explore the mathematical structure underlying these ideas, showing exactly what makes Total Trust work and what it requires. 

  1. ^

    As with the other posts in the sequence, most of what is said here comes from this paper, whose main authors are Ben, Kevin Dorst, and Bernhard Salow.

  2. ^

    Recall from last time that a modest expert is one who is uncertain that she's the expert. 

  3. ^

    More explicitly, if $\mathbb{E}$ represents your estimate, and $\mathbb{E}'$ represents the expert's estimate, whatever it is, then you totally trust the expert globally if $\mathbb{E}[X \mid \mathbb{E}'[X] \geq t] \geq t$ for any random variable $X$ and threshold $t$. For the local version, we restrict to some particular random variable(s) $X$, and then also require that $\mathbb{E}[X \mid \mathbb{E}'[X] \leq t] \leq t$.

  4. ^

    Let $R$ refer to the event that it rains. Let $A$ refer to Alice's credence in rain.

  5. ^

    See theorem 3.2 of the paper.

2 comments


comment by JBlack · 2025-02-25T03:17:15.260Z · LW(p) · GW(p)

In the rain forecaster example, it appears that the agent ("you") is more of an expert on Alice's calibration than Alice is. Is this intended?

comment by Kenny2 (kenny-2) · 2025-02-25T00:36:35.244Z · LW(p) · GW(p)

The example is really helpful for me getting a concrete understanding of what it looks like to satisfy Trust without Reflection, and why that goes along with deferring to someone else for decisions - but I don't see what this example of Alice has to do with locality. It looks like the only relevant propositions are whether it rains tomorrow, and what Alice's credences are, and there don't seem to be any propositions we don't defer to her on.