Uninformed Elevation of Trust
post by Shmi (shminux) · 2020-12-28T08:18:07.357Z · LW · GW · 14 comments
I don't know if there is a standard name for this phenomenon (other than the related Gell-Mann amnesia effect, e.g. "The Sequences are great, except in my area of expertise, where they are terrible.").
Here is the gist: we trust the data as much as we trust the source, regardless of how much the source trusts the data.
This sounds unobjectionable on the surface. We tend to equate the reliability of the data with the subjectively perceived trustworthiness of the source of data whenever we have no independent means of checking the veracity of the data. What is lost in this near-automatic logic is one small piece: the credence the source itself assigns to the data.
The (faulty) Bayesian math here is pretty straightforward and is left as an exercise to the reader.
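For concreteness, here is one minimal sketch of that math (the model and notation are mine, not spelled out in the post): let $r$ be your trust in the source, $c$ the credence the source itself assigns to a claim $H$, and $p_0$ your prior on $H$. A simple mixture model gives

$$P(H \mid \text{source asserts } H) \approx r \, c + (1 - r) \, p_0,$$

whereas the uninformed elevation of trust amounts to taking $P(H \mid \text{source asserts } H) \approx r$, dropping the discount by $c$ entirely.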
A few examples:
- An investment tip from an insider is perceived as reliable even if the insider themselves would consider it speculation at best.
- Life on Venus!
- US election fraud!
- A Tupperware party.
- A potential hire introduced by a trusted friend or a coworker.
- A CFAR workshop. Just kidding, those are super trustworthy.
I think the titular description, Uninformed Elevation of Trust, captures the essence of what happens, quickly and naturally, whenever we forget or neglect to fully engage our critical reasoning. It's an elevation of trust, not just an adjustment, because the more we trust the source of the data, the more likely we are to skip the critical evaluation. We naturally assume that whatever they tell us is as trustworthy as they themselves are, even though it should obviously be discounted by a factor equal to the source's own degree of belief in what they tell us.
Please feel free to offer your own examples and counterexamples.
14 comments
Comments sorted by top scores.
comment by Pattern · 2021-05-18T00:51:57.841Z · LW(p) · GW(p)
A CFAR workshop. Just kidding, those are super trustworthy.
I don't know, has anyone been to a CFAR workshop? I've heard they're oddly pro-smoking, but the website says that is 'only in worlds where smoking doesn't have negative effects. (EDIT: of which this is not one)'.
comment by Garrett Baker (D0TheMath) · 2020-12-28T18:52:58.311Z · LW(p) · GW(p)
The Sequences are great, except in my area of expertise, where they are terrible
Can I get an example of a section of The Sequences where someone with the relevant area of expertise would say that it's terrible?
Replies from: shminux, TAG
↑ comment by Shmi (shminux) · 2020-12-28T20:07:51.794Z · LW(p) · GW(p)
That's not relevant to the subject of the article, but, since you asked, the pattern is this: if you talk to a philosopher, they will point out a million holes in the philosophical aspects of the Sequences; if you talk to a mathematician, they will point to the various mathematical issues; if you talk to a statistician, they will rant about the made-up divide between Bayesian and frequentist approaches; if you talk to a physicist, they will point out the relevant errors and mischaracterizations in the parts that deal with physics. Basically, pick a subject area and talk to a few experts, and you will see it.
Replies from: philh
↑ comment by philh · 2020-12-29T00:30:35.873Z · LW(p) · GW(p)
Roll to disbelieve.
The value of specific examples is that we can check whether the critics seem to know what they're talking about, both in their field (do they understand the ground truth) and regarding the sequences themselves (do they know what Eliezer is saying). Simply telling us there are many examples does not, I believe, fulfill the intent of the question. Which is fine, you have no obligation to answer it, but I think it's worth pointing out.
To be clear, I'm sure you can find people in each of those groups making each of those criticisms. I do not believe those criticisms would be consensus in each of those groups; certainly not all of them, and not at the level of "this is terrible". I remember, for example, physicists talking about the quantum mechanics sequence like "yeah, it's weirdly presented and I don't agree with the conclusion, but the science is basically accurate".
Replies from: shminux
↑ comment by Shmi (shminux) · 2020-12-29T01:12:27.664Z · LW(p) · GW(p)
In retrospect, I should not have mentioned the Sequences as an example, it's a sensitive topic here. I personally learned a lot from them in the areas outside my expertise.
Replies from: Viliam
↑ comment by Viliam · 2021-01-02T17:30:32.347Z · LW(p) · GW(p)
I'm confused... is this supposed to be an ironic demonstration of Gell-Mann amnesia, or...?
Replies from: shminux
↑ comment by Shmi (shminux) · 2021-01-02T18:48:33.151Z · LW(p) · GW(p)
The bit in the title about the Sequences? Yes.
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-05-17T23:17:49.591Z · LW(p) · GW(p)
Here's my attempt to give a concise definition both for Gell-Mann Amnesia (GMA) and for your hypothesis, which I'll call Shminux Amnesia (SA). I'll present them in soft form ("adequately"). For the hard form, replace "fail to adequately update" with "fail to update".
GMA: People systematically fail to adequately update their priors on a source's general credibility based on their knowledge of its credibility in their field of expertise.
- Example: A COVID-19 epidemiologist considers a certain newspaper's reporting on COVID-19 epidemiology to be terrible, but its reportage in general to be adequate. They mildly downgrade their assessment of the reliability of its economic reportage from "adequate" to "mildly problematic." However, economists generally consider that newspaper's economic reportage to be terrible, not just mildly problematic.
SA: People systematically fail to adequately update their priors on the credibility of specific statements based on their knowledge of the credibility the source itself assigns them.
- Example: A reader reads a newspaper reporting what it calls "totally off-base speculation" that a tax increase is about to be announced. The newspaper also refers to a starfish die-off as "extremely likely to be caused by a fungal infection." They regard the newspaper as moderately reliable in general. Prior to reading this article, they had no specific information about the likelihood of a tax increase or the cause of the starfish die-off.
- After reading the two articles, they believe the prediction of a tax increase to be "unlikely, but not too out there," and the prediction of a fungal cause of the starfish die-off to be "fairly likely, but nowhere close to a sure thing."
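For concreteness, the reader's conclusions here are roughly what proper discounting would produce under the toy mixture model sketched in the post's "exercise" above (the specific numbers below are my illustrative assumptions, not part of the comment):

```python
def updated_credence(r, c, p0):
    # Credence in a claim after a source with reliability r asserts it,
    # where the source itself assigns the claim credence c and the
    # reader's prior is p0 (toy mixture model; my assumption).
    return r * c + (1 - r) * p0

r = 0.7    # "moderately reliable" newspaper (illustrative value)
p0 = 0.5   # reader has no specific prior information

tax = updated_credence(r, c=0.10, p0=p0)       # "totally off-base speculation"
starfish = updated_credence(r, c=0.95, p0=p0)  # "extremely likely"

print(f"tax increase: {tax:.2f}")       # ~0.22: unlikely, but not too out there
print(f"fungal cause: {starfish:.2f}")  # ~0.81: fairly likely, not a sure thing
```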
It's not super clear to me that either GMA or SA is an example of a poor epistemic strategy, or that they're especially common. Even if they're true, important, and common, it's not clear what their magnitude is. My personal experience is that I form fairly detailed models of how reliable certain sources are on the topics I care about, and don't bother with the topics I don't care about. I also do neglect to take reliability claims into account to the full extent that would be ideal.
To add on to this, I'd throw in one more form of amnesia: Assumption Amnesia. This form of amnesia means that people tend to ignore or neglect the assumptions that motivate theorems and inferences, in areas ranging from mathematics to politics. If they are presented with the following statement:
"We tend to equate the reliability of the data with the subjectively perceived trustworthiness of the source of data whenever we have no independent means of checking the veracity of the data."
What they will remember is this:
"This sounds unobjectionable on the surface. We tend to equate the reliability of the data with the subjectively perceived trustworthiness of the source of data."
And neglect this:
"Whenever we have no independent means of checking the veracity of the data."
The conclusions differ radically if we ignore that assumption.
- With the assumption, we take away the idea that people lean on the general credibility of the source when they have nothing else to go on.
- Without the assumption (due to assumption amnesia), we take away the idea that people will believe whatever they're told as long as it comes from a source they think is credible, even if it contradicts their own senses.
In general, this cluster of "amnesias" points to an overall tendency of the human mind to radically simplify its models of the world. This can be beneficial in preventing overfitting. But if a key assumption or constraint gets lost, it can lead to major misfires of cognition.
Replies from: shminux
↑ comment by Shmi (shminux) · 2021-05-18T01:32:58.849Z · LW(p) · GW(p)
That's... a surprisingly detailed and interesting analysis, potentially worthy of a separate post. My prototypical example would be something like:
- Your friend, a VP at public company XCOMP, says "this quarter has been exceptionally busy, we delivered a record number of widgets and have a backlog of new orders big enough to last a year. So happy about having all these vested stock options."
- You decide that XCOMP is a good investment, since your friend is trustworthy, has accurate info, and would not benefit from you investing in XCOMP.
- You plunk a few grand into XCOMP stock.
- The stock value drops after the next quarterly report.
- You mention it to your friend, who says "yeah, it's risky to invest in a single stock, no matter how good the company looks, I always diversify."
What happened here is that your friend's own odds of the stock going up were maybe 50%, while you, because you find them 99% trustworthy, estimated the odds of XCOMP going up at 90%. That is the uninformed elevation of trust I am talking about.
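Plugging those numbers into the same toy mixture model as above (the model and the 0.5 prior are my assumptions, not the commenter's math) makes the gap explicit:

```python
r = 0.99   # you find your friend 99% trustworthy
c = 0.50   # the friend's own odds that XCOMP stock goes up
p0 = 0.50  # your prior on the stock going up, absent the tip

discounted = r * c + (1 - r) * p0  # properly discounts by the friend's credence
elevated = 0.90                    # what the uninformed elevation produces

print(f"properly discounted: {discounted:.2f}")  # 0.50: the tip barely moves you
print(f"uninformed elevation: {elevated:.2f}")   # 0.90: claim trusted like the friend
```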
Another example: Elon Musk says "We will have full self-driving ready to go later this year." You, as an Elon fanboy, take it as gospel and rush to buy the FSD option for your Model 3. Whereas, if pressed, Elon would say "I am confident that we can stick to this aggressive timeline if everything goes smoothly" (which it never does).
So, it's closer to what you call the Assumption Amnesia, as I understand it.
Replies from: AllAmericanBreakfast, AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2021-05-18T04:54:32.774Z · LW(p) · GW(p)
As one further remark, I actually think it's often good to practice Gell-Mann Amnesia.
Just because someone is an expert in one domain does not mean they should be assumed to be an expert in other domains. Likewise, just because someone lacks knowledge in one domain does not mean they should be assumed to lack knowledge in others.
It seems epistemically healthy to practice identifying the specific areas in which a particular person or source is expert, and distinguishing them carefully from the areas where they are not.
One of the tricky bits is that a newspaper makes this somewhat difficult. By purporting to cover all topics, yet actually aggregating the views of a wide range of journalists and editors, it makes it very hard to build stable knowledge about the newspaper's epistemics. It would be better to pick a particular journalist and get a sense of how much they know about a particular topic that they cover frequently, but this isn't easy to do in a newspaper format.
Ultimately, possession of a sophisticated prior on the credibility of any source on any topic is an achievement not lightly obtained.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2021-05-18T04:48:25.226Z · LW(p) · GW(p)
I think there's a difference between ignoring a stated assumption, and failing to infer an unstated assumption. In the example I generated from your OP as an illustration of Assumption Amnesia, the problem was ignoring a stated assumption ("Whenever we have no independent means of checking the veracity of the data.").
By contrast, in the hypothetical cases you present, the problem is failing to infer an unstated assumption ("it's risky to invest in a single stock, no matter how good the company looks, I always diversify" and "if everything goes smoothly, which it never does").
My central case for Assumption Amnesia is the former: ignoring a stated assumption. I think the latter is at least as important, but it is also more forgivable; it depends on sheer expertise and sophisticated application of heuristics. Taken literally, the hypothetical Musk statement would justify buying the FSD option. It seems related to the problem of knowing when to take a religious, political, poetic, or joking statement literally, figuratively, or as an exaggeration; and when it's meant specifically and seriously.
In any case, all these seem to be component challenges of the overall problem of interpreting statements in context. It does seem quite useful to break that skill up into factors that can be individually examined and practiced.
comment by cozy · 2020-12-28T19:27:28.446Z · LW(p) · GW(p)
Here is the gist: we trust the data as much as we trust the source, regardless of how much the source trusts the data.
Rant incoming, apologies. This is, sadly, not correct from the get-go. In general, setting aside your example, which is more closely attributable to some form of psychological bias, we tend to lend a source importance based on the lack of trust coming from the source itself. However, that presumes there is any source vetting whatsoever.
I am sure there are a number of individuals here who have worked intelligence, and I am lucky enough to have both worked intelligence and ditched intelligence, so I'm not very interested in my NDA.
There is something incredible about being a source of raw intelligence, to the point where, so many years later, I have trouble trusting anything I do not hear and verify for myself. This election drove me absolutely insane, not only because of a particular side's tendencies, but because of everyone's failure at source vetting. Abandon the 'source'. Find out the true source. I ordered 50-year-old magazines to double-check a transcription. Nice collector's piece, though. The transcription was right, by the way.
The source itself is not relevant unless you are the collector; then what matters is not how you present it, but who it goes to, and how it goes to them. However, when you are consuming collected intelligence/information, you have to weigh the medium against the data. All information is necessarily useful when used correctly, even misinfo. Misinformation and propaganda necessarily tell you the bias of the consumer/creator, and can point you to trends, since propaganda tends to be single- or sparsely-sourced.
For instance: You can get a reasonable understanding of an event minus all the important bits through the news media.
You can get zero understanding of an event, and a great amount of confusion, from social media.
When I read a news article, depending on the importance of the event (e.g., "Trump signs stimulus bill!"), I will go make sure it is consistent across reporters. If it is, it tells me one of two possibilities:
a. The source all the reporters got was similar or the same. Specific details can tell you this quite easily.
b. All the reporters are in league together and conspiring on this particular news item.
Since b is highly improbable (aside from the possibility of accidental conspiring, which is entirely within the realm of possibility), I generally stick with a.
If it's not very important, I don't waste my time. It was probably a waste of time reading the article; news reporters are very droll. Thankfully, they have mastered the art of the thesis statement.
Given my experience with classified info, how would I rate the news media's accuracy on the subjects related to my career that they have reported on?
Absolutely awful, and generally mischaracterizing, if not outright libelous. They are easily one of the most dangerous groups that can be unleashed on anything that has the word "secret" anywhere near it. It is really hard to manipulate the media; they would prefer not to report something if it is not potentially breaking. It's why I think the social media conspiracy is a particularly good one; I can completely believe that the (or an) algorithm can be trained to filter out specific posts, because it's not very hard to do, and those posts are very predictable. I absolutely think it's happening. Was there voter fraud? Probably. Who did it? Probably not us. To bet on there being no foreign meddling in the USA's elections is already a lost bet.
Trump referred to this as more secure than Afghanistan's election. Well, we designed that one for them. It was so bad that they agreed to just both be president. I wish I were joking. Ghani just bullied the other individual out, and he got the title of Peace Negotiator. Thankfully, Trump will get no such title.
This sounds unobjectionable on the surface. We tend to equate the reliability of the data with the subjectively perceived trustworthiness of the source of data whenever we have no independent means of checking the veracity of the data. What is lost in this near-automatic logic is one small piece: the credence the source itself assigns to the data.
You have no idea how the source assigns credence to the data. It is easy to obfuscate or lie. Lying happens even with people you trust. Imagine how it goes for everyone you don't care about. Well, we all know.
This all neglects source protections, which aren't as important in unclassified work. Suffice it to say, there is nothing more important than the source itself, yet nothing as completely worthless. The importance lies not in understanding why the source is how it is, but rather in keeping it consistent.
Bad information is just as useful as good; what matters is whether you are cognizant of its quality without too much effort (given the expiry date on data). That is where subject matter expertise matters most.
/rant
Replies from: shminux
↑ comment by Shmi (shminux) · 2020-12-28T20:16:28.012Z · LW(p) · GW(p)
I think we are talking about different phenomena here. My point is that, if an average person trusts a source, they tend to assume that the validity of the data is the same as the validity of the source (high!), even if the source itself takes its own data with a grain of salt and advises others to do so. You are not an average person by any means, so your approach is different, and much better.
My personal experience with anything published is that it is 50% lies and 50% exaggeration. And yet I still rely on the news to get a picture of anything I have no first-hand access to. But that's the usual Gell-Mann amnesia. I am describing something different: forgetting to account for the source's own uncertainty, even when it is spelled out (as in the case of the potential signs of life on Venus) or easily queried or checked (e.g., for a stock tip from a trusted source: "How much did you personally invest in it based on your data?").