On Trust

post by johnswentworth · 2023-12-06T19:19:07.680Z · 24 comments

Contents

  The Weirdness Of “Trust”
  What’s This “Trust” Thing?
  Why I Don’t Trust “Trust”
  What I Do Instead

“Trust”, as the word is typically used, is a… weird concept, to me. Like, it’s trying to carve the world in a way which I don’t naturally carve it myself. This post is my attempt to convey what I find weird about “trust”, and what adjacent concepts I personally find more natural instead.

The Weirdness Of “Trust”

Here are some example phrases which make “trust” feel like a weird/unnatural concept to me:

  • “I decided to trust her.”
  • “Trust me.”
  • “Should I trust him?”
  • “They offered me their trust.”

To me, the phrase “I decided to trust her” throws an error. It’s the “decided” part that’s the problem: beliefs are not supposed to involve any “deciding”. There’s priors, there’s evidence, and if it feels like there’s a degree of freedom in what to do with those, then something has probably gone wrong. (The main exception here is self-fulfilling prophecy, but that’s not obviously centrally involved in whatever “I decided to trust her” means.)

Similarly with “trust me”. Like, wat? If I were to change my belief about some arbitrary thing, just because somebody asked me to change my belief about that thing, that would probably mean that something had gone wrong.

“Should I trust him?” is a less central example, but… “should” sounds like it has a moral/utility element here. I could maybe interpret the phrase in a purely epistemic way - e.g. “should I trust him?” -> “will I end up believing true things if I trust him?” - but also that interpretation seems like it’s missing something about how the phrase is actually used in practice? Anyway, a moral/utility element entering epistemic matters throws an error.

The thing which is natural to me is: when someone makes a claim, or gives me information, I intuitively think “what process led to them making this claim or giving me this information, and does that process systematically make the claim/information match the territory?”. If Alice claims that moderate doses of hydroxyhypotheticol prevent pancreatic cancer, then I automatically generate hypotheses for what caused Alice to make that claim. Sometimes the answer is “Alice read it in the news, and the reporter probably got it by misinterpreting/not-very-carefully-reporting a paper which itself was some combination of underpowered, observational, or in vitro/in silico/in a model organism”, and then I basically ignore the claim. Other times the answer is “Alice is one of those friends who’s into reviewing the methodology and stats of papers”, and then I expect the claim is backed by surprisingly strong evidence.

Note that this is a purely epistemic question - simplifying somewhat, I’m asking things like “Do I in fact think this information is true? Do I in fact think that Alice believes it (or alieves it, or wants-to-believe it, etc)?”. There’s no deciding whether I believe the person. Whether I “should” trust them seems like an unnecessary level of meta-reasoning. I’m just probing my own beliefs: not “what should I believe here”, but simply “what do I in fact believe here”. As a loose general heuristic, if questions of belief involve “deciding” things or answering “should” questions, then a mistake has probably been made. The rules of Bayesian inference (or logical uncertainty, etc) do not typically involve “deciding” or “shouldness”; those enter at the utility stage, not the epistemic stage.
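
To make this concrete, here's a toy Bayesian calculation (a minimal sketch in Python; all numbers are invented): the same claim warrants very different credence depending on the process that produced it.

```python
# Toy sketch: update on a claim based on the process that produced it.
# All numbers are invented for illustration.

def posterior(prior, p_claim_if_true, p_claim_if_false):
    """Bayes' rule: P(true | claim made), given how often the claim-producing
    process emits this claim when the fact is true vs. when it is false."""
    p_claim = prior * p_claim_if_true + (1 - prior) * p_claim_if_false
    return prior * p_claim_if_true / p_claim

prior = 0.05  # prior that the hypothetical supplement prevents cancer

# Process 1: Alice repeats flashy health news, which emits such claims
# almost as readily when they're false as when they're true.
print(posterior(prior, 0.30, 0.20))  # ~0.07: barely moves the prior

# Process 2: Alice reviews methodology and stats before repeating anything.
print(posterior(prior, 0.60, 0.02))  # ~0.61: surprisingly strong evidence
```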

What’s This “Trust” Thing?

Is there some natural thing which lines up with the concept of “trust”, as it’s typically used? Some toy model which would explain why “deciding to trust someone” or “asking for trust” or “offering trust” make sense, epistemically? Here’s my current best guess.

Core mechanism: when you “decide to trust Alice”, you believe Alice’s claims but, crucially, if you later find that Alice’s claims were false then you’ll be pissed off and probably want to punish Alice somehow. In that case she’s “breached trust” - a wording which clearly evokes breach of contract.

“Trust”, in other words, is supposed to work like a contract: Alice commits to tell you true things, you commit to believe her. Implicitly, you’ll punish Alice (somehow, with some probability) if her claim turns out to be false. This gives Alice an incentive to make true claims, and you therefore assign higher credence to Alice’s claims.
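
To put a toy model behind that incentive (all payoffs invented): the contract deters false claims only when the expected punishment outweighs whatever Alice gains by lying.

```python
# Toy model of the trust-contract incentive. All payoffs are invented.

gain_if_believed = 1.0       # Alice benefits whenever you believe her
extra_gain_from_lying = 0.5  # what a convenient falsehood buys Alice
p_caught = 0.5               # chance a false claim is eventually exposed
punishment = 4.0             # cost you impose for breach of trust

payoff_truth = gain_if_believed
payoff_lie = gain_if_believed + extra_gain_from_lying - p_caught * punishment

# The contract works iff p_caught * punishment > extra_gain_from_lying.
print(payoff_truth, payoff_lie)  # 1.0 vs -0.5: true claims win
```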

Trust-as-contract matches up nicely with standard phrasings like “offer/accept trust”, “breach of trust”, and our examples above like “trust me”.

Epistemically, this trust-contract makes sense from a “buyer’s perspective” (i.e. you, choosing to trust Alice) insofar as you expect Alice to make true claims if-and-only-if operating under the trust-contract. And that’s where I usually get off the train.

Why I Don’t Trust “Trust”

It is very rare that I expect someone to make true claims if-and-only-if operating under some kind of implicit “trust contract”.

For starters, people largely just have pretty crap epistemics, at least when it comes to stuff where there’s any significant uncertainty about the truth of claims in the first place. (Of course for the vast majority of day-to-day information, there isn’t much uncertainty - the sky is blue, 2*2 = 4, I’m wearing black socks, etc.) My mother makes a lot of claims based on what she read in the news, and I automatically discard most such claims as crap. I “don’t trust my mother” on such matters, not because I think she’ll “betray my trust” (i.e. intentionally breach contract), but because I expect that she is simply incapable of reliably keeping up her end of such a trust-contract in the first place. She would very likely breach by accident without even realizing it.

Second, this whole trust-contract thing tends not to be explicitly laid out, so people often don’t expect punishment if their claims are false. Like, if I just say “would you like to bet on that claim being true?” or better yet “would you like to insure against that claim turning out to be false?”, I expect claimants to suddenly be much less confident of their claims, typically. (Though to some extent that’s because operationalizations for such agreements are tricky.) And even when it is clear that blatant falsehoods will induce punishment, it’s still a lot of work to demonstrate that a claim is false, and a lot of work to punish someone.
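
As a rough sketch of the bet version (simplified; it ignores risk aversion, who judges the outcome, and counterparty risk - exactly the tricky operationalization issues just mentioned): someone whose true credence is p should find a bet at p-odds exactly fair, so refusing favorable odds reveals lower confidence.

```python
# Simplified bet operationalization. Ignores risk aversion, who judges
# the claim, and counterparty risk - the tricky parts noted above.

def bet_ev(credence, win, lose):
    """Expected value to the claimant of betting on their own claim."""
    return credence * win - (1 - credence) * lose

# A claimed "90% sure" makes a win-$10 / lose-$90 bet exactly fair:
print(bet_ev(0.90, 10, 90))  # 0.0
# If the real credence is more like 60%, the same bet looks awful:
print(bet_ev(0.60, 10, 90))  # -30.0
```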

Third problem: the trust-contract sets up some terrible incentives going forward. If you trust Alice on some claim, and then Alice finds out her claim was false, she’s now incentivized to hide that information, or to avoid updating herself so that she can credibly claim that she thinks she was telling the truth.

What I Do Instead

As mentioned earlier, when I hear a claim, I tend to automatically hypothesize what might have caused the claimant to make that claim. Where did they get it from? Why that claim, rather than some other? Usually, thinking about that claim-producing-process is all I need to decide how much to believe a claim. It’s just another special case of the standard core rationalist’s question: “what do you believe, and what caused you to believe it?”.

… and then that’s it, there mostly just isn’t much additional need for “trust”.

24 comments

Comments sorted by top scores.

comment by Dweomite · 2023-12-06T22:50:34.698Z

In security engineering, a trusted component of a system is a component that has the ability to violate the system's security guarantees.  For instance, if a security engineer says "Alice is trusted to guard the cookie jar", that means "Alice has the ability to render the cookie jar unguarded".

I notice that the four examples at the beginning of this post all seem to slot pretty nicely into this definition:

  • "I decided to trust her" => I assigned Alice to guard the cookie jar by herself
  • "Should I trust him?" => Should I allow Bob to guard the cookie jar?
  • "Trust me" => Please allow me to access the cookie jar
  • "They offered me their trust" => They granted me access to the cookie jar

If you think about them as being about security policies, rather than epistemic states, then they seem to make a lot more sense.

I think the layperson's informal concept of "trust" is more muddled than this, and conflates "I'm giving you the power to violate my security" with "I am comfortable with you having the power to violate my security" and maybe some other stuff.
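
To sketch this security-engineering usage in code (a hypothetical example, not from the comment): the guard is "trusted" precisely because the guarantee fails if the guard misbehaves.

```python
# Sketch: "trusted" = able to violate the security guarantee.
# Guarantee: no cookie leaves the jar without the guard's approval.

class CookieJar:
    def __init__(self, guard):
        self.guard = guard  # trusted component: a bug or bribe here
        self.cookies = 10   # silently breaks the guarantee

    def take_cookie(self, requester):
        if self.guard.approves(requester):  # the guarantee rests entirely here
            self.cookies -= 1
            return True
        return False

class Alice:
    def approves(self, requester):
        return requester == "mom"

jar = CookieJar(Alice())
print(jar.take_cookie("burglar"))  # False - but only because Alice says so
```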

Replies from: Raemon
comment by Raemon · 2023-12-07T02:18:32.189Z

Oh, this actually feels related to my relational-stance take on this. When I decide to trust a friend, colleague or romantic partner, I'm giving them the power to hurt me in some way (possibly psychologically). There's practical versions of this, but part of it is something like "we are choosing to be the sort of people who are going to share secrets and vulnerabilities with each other."

Replies from: Sune
comment by Sune · 2023-12-07T06:29:34.643Z

This is also just another way of saying “willing to be vulnerable” (from my answer below) or maybe “decision to be vulnerable”. Many of these answers are just saying the same thing in different words.

comment by Raemon · 2023-12-06T19:41:51.930Z

This feels kinda straw-vulcany, sort of missing the point about what people are often using trust for.

I'm not actually sure what trust is, but when I imagine people saying the sentences at the beginning, at least 35% and maybe 75% of what's going on is more about managing a relational stance, i.e. something like "do you respect me?".

I do expect you'll followup with "yeah, I am also just not down to respect people the particular way they want to be respected." 

So a major part of how I handle this sort of thing is usually conveying somehow "I don't believe that, and generally don't trust your epistemics enough to believe-you-by-default in this domain, but, also, I [respect, in whatever way I do happen to respect] the person." (Usually this is managed more by vibes than by words, though if I found myself in a situation where someone I cared about said "trust me", it would probably mean something is fragile enough to require some careful linguistic work as well.)

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-12-08T05:34:04.071Z

I've been really frustrated in the past with folks who equate trust with respect.

My ex-wife frequently complained that I didn't trust her. Why? Because she'd ask me to do something and, rather than simply do it, I'd ask why. Most of the time I was just curious (can you imagine? someone who posts on LessWrong was curious?) and wanted to know more, but she read it as me distrusting her, and thus not respecting or being committed to her.

Mostly I just select myself out of relationships with people who are like this now.

The flip side of this, though, is that there's a part of my life where I do things without asking why, which is as part of my Zen practice. Our rituals exist because someone created them, but the intent is intentionally not communicated. This is to create the experience of not knowing why you do something and having to live with not knowing. You might eventually come up with your own reason for why you do a particular ritual, but then that's something you added that you can explore rather than something you've taken as given from a teacher, senior student, etc. For example, new people often ask why we bow. The answer: because that's what we do. If bowing is to mean something, it's up to you to figure out what it means.

Replies from: Sune
comment by Sune · 2023-12-08T12:05:08.241Z

What does respect mean in this case? That is a word I don't really understand; it seems to be a combination of many different concepts mixed together.

comment by Richard_Kennaway · 2023-12-08T09:37:23.530Z

"Do I trust this rope bridge to carry my weight?" = "Having estimated its strength relative to my weight as best I can, am I now going to cross it?"

comment by Gordon Seidoh Worley (gworley) · 2023-12-08T05:26:50.339Z

There's a popular position that goes something like "trust people when they tell you how they are feeling, what it's like to be them, etc.". I have a hard time with this, because I am so often wrong about what I think it's like to be me, and my experience is that many people are similarly deluded into a false sense of confidence that they understand themselves well. But also because talk is cheap evidence, and thus must be discounted when updating beliefs.

Now, first off, I have to say that the opposite position also seems wrong to me. People clearly have some insight into what's going on with themselves. It's not as if, when someone says "ouch, you hurt me", they're just saying random words uncorrelated with reality. But talk is cheap: if there are incentives to say "ouch, you hurt me", you may utter those words even when not hurt, because doing so is likely to generate a good outcome for you. For bonus points, to make your utterance really convincing and thus more likely to effectively get you what you want, you might even convince yourself that you really were hurt when you weren't.

This means that when someone makes a self-claim, I'm forced to discount that claim against the probability that the self-claim reflects hallucination, confusion, delusion, or outright deception. Self-claims become more reliable when backed by costly signals, but without such costly signals it's hard to update much on most self-claims.

comment by Thane Ruthenis · 2023-12-06T20:33:54.385Z

> The main exception here is self-fulfilling prophecy, but that’s not obviously centrally involved in whatever “I decided to trust her” means.

I think it might be.

I think the core of the contractual interpretation is right, but I'd phrase it differently. I think "deciding to trust" someone implies a deal where you ally yourself with them in a way that leaves you vulnerable to hostile actions from their side – you're choosing "cooperate" in the Prisoner's Dilemma in expectation of greater mutual gain. In exchange, they (1) commit not to take said hostile actions, (2) potentially "reciprocate trust" by showing symmetrical vulnerability (either to ensure MAD, or because pooling resources improves your ability to work together), and (3) bias their policy towards satisfying your preferences, which includes being more responsible when interacting with you and giving you some veto on their actions (and if they fail that, they've "betrayed" your trust).

And there's some degree of acausal culture-based negotiation going on. E. g., it's not that people explicitly track all of the above consciously, it's that we have subconscious pre-computed scripts of "if someone utters 'I trust you' and your model marks them as an ally, modify the prior used by your plan-making processes towards satisfying their preferences more". Which is implemented by automatically-activated emotional responses; e. g., the feeling of responsibility automatically flaring up when you're thinking about someone who put their trust in you.

So to some extent it is a self-fulfilling prophecy. If you've decided to trust someone, and their learned-instinct systems recognize that, they'll automatically reciprocate and become more trustworthy. (Or so the underlying calculations go. It might not work: if they jailbroke out of these systems, if they've never had them installed (psychopaths), or if other systems overrode that.)

Not that there aren't all kinds of sloppy language involved, e. g. when "I trust her" actually just means "I've evaluated that the process she uses for [whatever] is reliable" or "leaving myself vulnerable here is a risk, but the CBA is in favour", instead of "I've initiated the 'trust' social dance with her and I don't think she's defecting".
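
A sketch of the vulnerability-exchange mechanic (invented payoffs): exchanging vulnerabilities turns a standard Prisoner's Dilemma into a game where mutual cooperation is an equilibrium.

```python
# Invented payoffs. (my_payoff, their_payoff), indexed by (my_move, their_move).
C, D = "cooperate", "defect"

pd = {  # standard Prisoner's Dilemma: defecting dominates
    (C, C): (3, 3), (C, D): (0, 5),
    (D, C): (5, 0), (D, D): (1, 1),
}

# After exchanging vulnerabilities, a defector eats a retaliation cost of 4:
mad = {moves: (mine - (4 if moves[0] == D else 0),
               theirs - (4 if moves[1] == D else 0))
       for moves, (mine, theirs) in pd.items()}

# Cooperation is now each side's best response to cooperation:
print(mad[(C, C)], mad[(D, C)])  # (3, 3) vs (1, -4): betrayal stops paying
```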

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2023-12-07T02:12:28.843Z

> And there's some degree of acausal culture-based negotiation going on

Actually, this seems like an extremely clever instance of that. What evolution (either biological or cultural/memetic) has on its hands is a bunch of scheming, myopic mesa-optimizers walking around in suits of autonomous heuristics. It wants them to be able to cooperate, but the mesa-optimizers are at once too myopic and too clever for that – they pretend to ally then backstab each other.

So what it does is instill a joint social and cognitive protocol which activates whenever someone shows vulnerability to someone else: it initiates a process in the trustee's mind which immediately strongly biases their short-term plans towards reciprocating that trust by showing vulnerability in turn (which reads to us as, e. g., warm feelings of empathy), which establishes mutually assured destruction, which sets up a lasting incentive structure that ensures the scheming optimizer won't immediately betray even if the biochemistry stops actively brainwashing them (which it must stop, because that impairs clear thinking).

Or at least that's the angle on the issue that just occurred to me. May be missing some important pieces, obviously.

Seems fascinating, regardless.

comment by TekhneMakre · 2023-12-06T21:19:08.134Z

"Trust" is like "invest". It's an action-policy; it's related to beliefs, such as "this person will interpret agreements reasonably", "this person will do mostly sane things", "this person won't breach contracts except in extreme circumstances", etc., but trust is the action-policy of investing in plans that only make sense if the person has those properties.

comment by Sune · 2023-12-06T20:00:38.365Z

My favourite definition of trust is “willingness to be vulnerable” and I think this answers most of the questions in the post. For example, it explains why trust is a decision that can exist independently from your beliefs: if you think someone is genuinely on your side with probability 95%, you can choose to trust them, by doing something that benefits you in 95% of cases and hurts you in the 5% of cases, or you can decide not to, by taking actions that are better in the 5% of cases. Similarly for trusting a statement about the world.

I think this definition comes from psychology, but I also found it useful when talking about trusted third parties in cryptography. Also in this case, we don’t care about the probability that the third party is malicious; what matters is that you are vulnerable if and only if they are malicious.
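
To make the 95%/5% example concrete (payoffs invented): the belief stays fixed at 95%, but whether to take the vulnerable action depends on the stakes - which is why trusting can be a decision distinct from the belief.

```python
# Sune's 95%/5% example with invented payoffs. The belief is fixed;
# the decision to trust depends on the stakes.

p_ally = 0.95

trust_ev = p_ally * 10 + (1 - p_ally) * (-50)  # vulnerable action: 7.0
guarded_ev = p_ally * 3 + (1 - p_ally) * 3     # guarded action: 3.0
print(trust_ev, guarded_ev)                    # trusting wins here

# Same 95% belief, bigger downside: the decision flips.
print(p_ally * 10 + (1 - p_ally) * (-200))     # -0.5 < 3.0: stay guarded
```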

Replies from: g-w1
comment by Jacob G-W (g-w1) · 2023-12-07T00:34:21.190Z

Yes, this is pretty much how I see trust. It is an abstraction over how much I expect the other person to do what I would want them to do.

Trusting someone means that I don't have to double-check their work and we can work closer and faster together. If I don't trust someone to do something, I have to spend much more time verifying that the thing that they are doing is correct.

comment by JBlack · 2023-12-07T02:18:47.062Z

> To me, the phrase “I decided to trust her” throws an error. It’s the “decided” part that’s the problem: beliefs are not supposed to involve any “deciding”. There’s priors, there’s evidence, and if it feels like there’s a degree of freedom in what to do with those, then something has probably gone wrong.

Trust isn't about having a particular belief, it's about acting on a particular type of hypothesis despite lack of convincing evidence supporting it.

"I decided to trust her" does not mean that you believe that she absolutely won't do some bad X, but that you decided to act as if she won't. That is, you were prepared to take the risk.

Likewise, to "trust me" doesn't mean believing that I will never do a bad thing, but taking the risk despite the lack of sufficient evidence.

Replies from: polytope
comment by polytope · 2023-12-07T02:50:56.289Z

> lack of sufficient evidence. 

Perhaps more specifically, evidence that is independent from the person that is to be trusted or not. Presumably when trusting someone else that something is true, often one does so due to believing that the other person is being honest and reliable enough such that their word is sufficient evidence to then take some action. It's just that there isn't sufficient evidence without that person's word.

comment by Yoav Ravid · 2023-12-06T20:38:01.913Z

Hmm... Perhaps trust is an action and not a belief?

comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-12-06T19:45:18.919Z

Another curious thing about trust. The median LW reader is probably better informed about a great many things than their immediate family and friends from their home town. Yet, I suspect you will agree, family members almost never defer to them epistemically.

This seems to be a general phenomenon - many people could plausibly be epistemically better off by finding and deferring to their epistemic superiors. This doesn't need to entail blanket acceptance of any claim from somebody who seems smarter / better informed on a topic - just putting effort into finding out who is best informed in certain domains and comparing them to other would-be experts. In many cases this would be much more effective than trying to figure things out oneself. People generally seem not so interested in finding and comparing experts.

Society has created institutions to combat this curious phenomenon. The magic of credentialism endows a select group of (hopefully) relatively highly informed people with a magic paper that compels peasants to submit to the scholar-clergy.

comment by David Lorell · 2023-12-06T20:38:59.652Z

1.  "Trust" does seem to me to often be an epistemically broken thing that rides on human-peculiar social dynamics and often shakes out to gut-understandings of honor and respect and loyalty etc.

2. I think there is a version that doesn't route through that stuff. Trust in the "trust me" sense is a bid for present-but-not-necessarily-permanent suspension of disbelief, where the stakes are social credit. I.e. When I say, "trust me on this," I'm really saying something like, "All of that anxious analysis you might be about to do to determine if X is true? Don't do it. I claim that using my best-effort model of your values, the thing you should assume/do to fulfill them in this case is X. To the extent that you agree that I know you well and want to help you and tend to do well for myself in similar situations, defer to me on this. I predict you'll thank me for it (because, e.g., confirming it yourself before acting is costly), and if not...well I'm willing to stake some amount of the social credit I have with you on it." [Edit: By social credit here I meant something like: The credence you give to it being a good idea to engage with me like this.]

Similarly:

  • "I decided to trust her" -> "I decided to defer to her claims on this thing without looking into it much myself (because it would be costly to do otherwise and I believe-- for some reason-- that she is sufficiently likely to come to true conclusions on this, is probably trying to help me, knows me fairly well etc.) And if this turns out badly, I'll (hopefully) stop deciding to do this." 
  • "Should I trust him?" -> "Does the cost/benefit analysis gestured at above come out net positive in expectation if I defer to him on this?"
  • "They offered me their trust" -> "They believe that deferring to me is their current best move and if I screw this up enough, they will (hopefully) stop thinking that."

So, I feel like I've landed fairly close to where you did but there is a difference in emphasis or maybe specificity. There's more there than asking “what do they believe, and what caused them to believe it?” Like, that probably covers it but more specifically the question I can imagine people asking when wondering whether or not to "trust" someone is instead, "do I believe that deferring these decisions/assumptions to them in this case will turn out better for me than otherwise?" Where the answer can be "yes" because of things like cost-of-information or time constraints etc. If you map "what do they believe" to "what do they believe that I should assume/do" and "what caused them to believe it" to "how much do they want to help me, how well do they know me, how effective are they in this domain, ..." then we're on the same page.

Replies from: AnthonyC
comment by AnthonyC · 2023-12-06T23:07:59.678Z

I think this is a big part of it. It can also include, "I have information I'm not supposed to share, don't know how to share, or don't have time to share."

comment by Adam Zerner (adamzerner) · 2023-12-07T19:17:36.601Z

I've always felt basically the same about trust. It's nice to see that I'm not the only one.

I'd add/emphasize that trust is really a 2-place word. It doesn't really make sense to say "I trust Alice". Instead, it'd make sense to say "I trust Alice to do X" or "I trust Alice's belief about X" or "I expect Alice's prediction about X to be true". I.e. instead of trust(person), it's trust(person, thing).

(And of course, as mentioned in the post, it isn't binary either. The output of the function is a probability. A number between zero and one. Not a boolean.)

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-07T19:36:10.233Z

That would make it a 3-place word ("I trust" is 1-place, "I trust Alice" is 2-place, "I trust Alice to do X" is 3-place).

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-12-07T19:49:40.824Z

Ah, yeah. I was thinking about it as like adamTrusts(person, thing) but the more general concept of just trust, I agree about it being 3-place.
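
A quick sketch of the signature this thread converges on (names and numbers are illustrative only):

```python
# Trust as a 3-place function returning a probability, not a boolean.
# The entries below are illustrative only.

def trust(truster, trustee, thing):
    beliefs = {
        ("adam", "alice", "review stats papers carefully"): 0.9,
        ("adam", "alice", "show up on time"): 0.4,
    }
    return beliefs.get((truster, trustee, thing), 0.5)  # ignorance default

# "I trust Alice" is underspecified; the probability varies with the thing:
print(trust("adam", "alice", "review stats papers carefully"))  # 0.9
print(trust("adam", "alice", "show up on time"))                # 0.4
```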

comment by Htarlov (htarlov) · 2023-12-08T22:24:43.591Z

I think that in an ideal world - where you could review all priors in minute detail, with as much time as needed, and where people were fully rational - the word "trust" would not be needed.

We don't live in such a world though. 

If someone says "trust me" then in my opinion it conveys two meanings on two different planes (usually both, sometimes only one):

  1. Emotional. Most people base their choices on emotions and relations, not rational thought. Words like "trust me" or "you can trust me" convey an emotional message asking for an emotional connection or reconsideration, usually because of some contextual reason (like the other person being in a position that seems trustworthy on an emotional level, e.g. a doctor).
  2. Rational. Time for reconsideration. The person asks you to take more time to reconsider your position, because they think you didn't consider well enough why they are to be trusted in a given scope, or because they just presented some new information (like "trust me, I'm an engineer").

“I decided to trust her about ...” - for me, it is a short colloquial term for “I took time to reconsider whether the things she says on the topic ... are true, and now I think it is more likely that they are”.

For many people, it also has emotional and bonding components.

Another thing is that people tend to trust or mistrust another person in a general, broad scope. They don't go into detail, thinking separately about every topic or thing someone says and deciding separately for each of them. That's an easy heuristic that usually is good enough, so our minds are wired to operate like that. So people usually say that they trust a person generally, rather than trusting that person within some subject/scope.

P.S. I'm from a different part of the world (Central Europe, Poland). We don't use phrases like "accept trust" here - which is probably an interesting example of how differences in language create different ways of thinking. For us here, "trust" is not like a contract. It is more of a one-way thing (but with some expectation of mutuality in most circumstances).

comment by David Zeller · 2023-12-08T08:55:41.416Z

> The thing which is natural to me is: when someone makes a claim, or gives me information, I intuitively think “what process led to them making this claim or giving me this information, and does that process systematically make the claim/information match the territory?”.

Ah, that's what trust is to me! That resonates strongly. I hadn't properly thought through the concept before. 

The big question floating around my mind is - to what extent should I let *this* kind of trust (or lack of trust) determine who I'm close to?