What is the right phrase for "theoretical evidence"?
post by Adam Zerner (adamzerner) · 2020-11-01T20:43:38.747Z · LW · GW · 15 comments
This is a question post.
I mean "theoretical evidence" as something that is in contrast to empirical evidence. Alternative phrases include "inside view evidence" and "gears-level evidence".
I personally really like the phrase "gears-level evidence". What I'm trying to refer to is something like, "our knowledge of how the gears turn would imply X". However, I can't recall ever hearing someone use the phrase "gears-level evidence". On the other hand, I think I recall hearing "theoretical evidence" used before.
Here are some examples that try to illuminate what I am referring to.
Effectiveness of masks
IIRC, early in the coronavirus pandemic there was empirical evidence suggesting that masks are not effective. However, as Zvi talked about, "belief in the physical world" would imply that they are effective.
Foxes vs hedgehogs
Consider Isaiah Berlin’s distinction between “hedgehogs” (who rely more on theories, models, global beliefs) and “foxes” (who rely more on data, observations, local beliefs).
- Blind Empiricism [LW · GW]
Foxes place more weight on empirical evidence, hedgehogs on theoretical evidence.
Harry's dark side
Then I won't do that again! I'll be extra careful not to turn evil!
"Heard it."
Frustration was building up inside Harry. He wasn't used to being outgunned in arguments, at all, ever, let alone by a Hat that could borrow all of his own knowledge and intelligence to argue with him and could watch his thoughts as they formed. Just what kind of statistical summary do your 'feelings' come from, anyway? Do they take into account that I come from an Enlightenment culture, or were these other potential Dark Lords the children of spoiled Dark Age nobility, who didn't know squat about the historical lessons of how Lenin and Hitler actually turned out, or about the evolutionary psychology of self-delusion, or the value of self-awareness and rationality, or -
"No, of course they were not in this new reference class which you have just now constructed in such a way as to contain only yourself. And of course others have pleaded their own exceptionalism, just as you are doing now. But why is it necessary? Do you think that you are the last potential wizard of Light in the world? Why must you be the one to try for greatness, when I have advised you that you are riskier than average? Let some other, safer candidate try!"
The Sorting Hat has empirical evidence that Harry is at risk of going dark. Harry's understanding of how the gears turn in his brain makes him think that he is not actually at risk of going dark.
Instincts vs A/B tests
Imagine that you are working on a product. A/B tests are showing that option A is better, but your instincts, based on your understanding of how the gears turn, suggest that B is better.
Posting up in basketball
Over the past 5-10 years in basketball, there has been a big push to use analytics more. Analytics people hate post-ups (an approach to scoring). The data says that they are low-efficiency.
I agree with that in a broad sense, but I believe that a specific type of posting up is very high efficiency. Namely, trying to get deep-position post seals when you have a good height-weight advantage. My knowledge of how the gears turn strongly indicates to me that this would be high efficiency offense. However, analytics people still seem to advise against this sort of offense.
Answers
First, I love this question.
Second, this might seem way out of left field, but I think this might help you answer it —
https://en.wikipedia.org/wiki/B%C3%BCrgerliches_Gesetzbuch#Abstract_system_of_alienation
One of the BGB's [editor: the German Civil Law Code] fundamental components is the doctrine of abstract alienation of property (German: Abstraktionsprinzip), and its corollary, the separation doctrine (Trennungsprinzip). Derived from the works of the pandectist scholar Friedrich Carl von Savigny, the Code draws a sharp distinction between obligationary agreements (BGB, Book 2), which create enforceable obligations, and "real" or alienation agreements (BGB, Book 3), which transfer property rights. In short, the two doctrines state: the owner having an obligation to transfer ownership does not make you the owner, but merely gives you the right to demand the transfer of ownership.
I have an idea of what might be going on here with your question.
It might be the case that there's two fairly-tightly-bound — yet slightly distinct — components in your conception of "theoretical evidence."
I'm having a hard time finding the precise words, but something around evidence, which behaves more-or-less similarly to how we typically use the phrase, and something around... implication, perhaps... inference, perhaps... something to do with causality or prediction... I'm having a hard time finding the right words here, but something like that.
I think it might be the case that these components are quite tightly bound together, but can be profitably broken up into two related concepts — and thus, being able to separate them BGB-style might be a sort of solution.
Maybe I'm mistaken here — my confidence isn't super high, but when I thought through this question the German Civil Law concept came to mind quickly.
It's profitable reading, anyways — BGB I think can be informative around abstract thinking, logic, and order-of-operations. Maybe intellectually fruitful towards your question or maybe not, but interesting and recommended either way.
What makes the thing you're pointing at different than just "deduction" or "logic"?
You have empirical evidence.
You use the empirical evidence to generate a theory edifice, and further evidence has so far supported it. (induction)
You use the theory to make a prediction (deduction), but that is not itself evidence, it only feels like it because we aren't logically omniscient and didn't already know what our theory implied. Whatever probability our prediction has comes from the theory, which gets its predictive value from the empirical evidence that went into creating and testing it.
The early discussions about mask effectiveness during COVID were often between people not trained in physics at all - that just wasn't part of their thinking process - so a physics-based response was new evidence, because of the empirical evidence behind the relevant physics. Also, there were lots of people talking past each other, because "mask," "use," and "effective" are all underspecified terms that don't allow for simple yes/no answers at the level of discourse we seem able to publicly support as a society, and institutions don't usually bother trying to make subtler points to the public for historical, legal, and psychological reasons (that we may or may not agree with in specific cases or in general).
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T18:21:48.187Z · LW(p) · GW(p)
What makes the thing you're pointing at different than just "deduction" or "logic"?
Good question. Maybe one of those is the correct term for what I am pointing at.
You use the theory to make a prediction (deduction), but that is not itself evidence, it only feels like it because we aren't logically omniscient and didn't already know what our theory implied. Whatever probability our prediction has comes from the theory, which gets its predictive value from the empirical evidence that went into creating and testing it.
I may be misinterpreting what you're saying, but it sounds to me like you are saying that evidence is only in the territory, not in our maps. Consider the example of how the existence of gravity would imply that aerosol particles containing covid will eventually fall towards the ground, and so the concentration of such particles will decrease as you get further from the source. My understanding of what you're saying is that gravity, the theory, isn't evidence. Apples falling from a tree, the empirical observations that allowed us to construct the theory of gravity, that is the actual evidence.
But this would violate how the term is currently used. It seems normal to me to say that gravity is evidence that aerosol particles will dissipate as they get further from their source. In the sense that it feels correct, and in the sense that I recall hearing other people use the term that way.
↑ comment by AnthonyC · 2020-11-03T21:45:20.007Z · LW(p) · GW(p)
Then maybe I'm mixing up terms and should make a better mental separation between "evidence" and "data." In that case "data" is in the territory (and the term I should have used in my previous post), while "evidence" can mean different things in different contexts. Logical evidence, empirical evidence, legal evidence, and so on, all have different standards. In that case I don't know if there is necessarily a consistent definition beyond "what someone will accept as a convincing reason to reach a conclusion to a certain kind of question," but I'm not at all confident in that.
↑ comment by Ericf · 2020-11-02T23:28:32.371Z · LW(p) · GW(p)
Can you cite someone else using the word evidence to refer to a theory or explanation? I can't recall ever seeing that, but it might be a translation or regional thing. As a Southern California Jewish native speaker of American English, saying "gravity is evidence that" just sounds wrong, like saying "a red, fast, clever fox".
Could be "framing conditions". I mean, it's one think to say "masks should help to not spread or receive viral particles", but it's another thing to say "masks can't not limit convection". Even if you are interested in the first, you have to separate it into the second and similar statements. Things should resemble pieces of an empirical model besides intuitive guesses, to be updateable.
I mean, it's fine to stick to the intuition, but it doesn't help with modifying the model.
There are such things as "theorem", "finding" and "understanding".
However, the word evidence is heavily reserved for theory-distant pieces of data that are not prone to be negotiable. There is the sense that "evidence" is something that shifts beliefs, but this comes from the connection that a brain should be informed by the outside world. We don't call all persuasive things evidence.
If you are doing theoretical stuff and think in a way where "evidence" factors heavily, you are somewhat likely to do things a bit backwards. Weighting evidence is connected to cogent arguments, which are in the realm of inductive reasoning. In the realm of theory we can use proper deductive methods and definitely say stuff about things. A proof either carries or it doesn't - there is no "we can kinda say".
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T02:05:24.451Z · LW(p) · GW(p)
However, the word evidence is heavily reserved for theory-distant pieces of data that are not prone to be negotiable. There is the sense that "evidence" is something that shifts beliefs, but this comes from the connection that a brain should be informed by the outside world. We don't call all persuasive things evidence.
This seems to me like something that is important to change, and a big part of why I am asking this question.
I've always been a believer that having a word/phrase for something makes it a lot easier to incorporate it into your thinking. For example, since coming across the term "slack" [? · GW], I've noticed that it is something I incorporate into my thinking a lot more, despite the fact that the concept is something that wasn't new to me.
I also share the same worry that Eliezer expresses in Blind Empiricism [LW · GW]:
I worry, however, that many people end up misusing and overapplying the “outside view” concept very soon after they learn about it, and that a lot of people tie too much of their mental conception of what good reasoning looks like to the stereotype of the humble empiricist fox.
Having an easily accessible term for theoretical evidence would make it easier to combine the ways of the Fox with the ways of the Hedgehog. To say "I shift my beliefs this way according to the empirical evidence X. And then I shift my beliefs that way according to the theoretical evidence Y." Even if you aren't as bullish about inside view thinking as me or Eliezer, combining the two seems like an undoubtedly good thing, but one that is currently a little difficult to do given the lack of terminology for "theoretical evidence".
↑ comment by Slider · 2020-11-02T12:51:02.924Z · LW(p) · GW(p)
I understand the need to have a usable word for the concept. However, trying to hijack the meanings of existing words just seems like a recipe for conflicting meanings.
In a court, for example, a medical examiner can be asked what the cause of death was. The act of doing this is "opining" and the result is "an opinion". Only experts can opine, and the standing of an expert to be an expert on the issue can be challenged. Asking a non-expert to opine can be objected to; eye-witnesses can be taken to be credible about their experience, but far-disconnected conclusions are not allowed (it is the separate job of the lawyer to argue those inferences, or of the fact finder to decide they are sufficiently shown).
Like "theory", which in folk language can mean a guess but in scientific terms means a very regimented and organised set of hypotheses, the term "expert opinion" is sometimes used to mark findings that people are willing to back up even under pressure, to distinguish them from "mere" "personal opinion".
It is true that expert witness testimony is "among the evidence". "Word against word" kinds of cases can feel tricky because it is pretty easy to lie, that is, to fabricate that kind of evidence.
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T17:28:37.540Z · LW(p) · GW(p)
I understand the need to have a usable word for the concept. However, trying to hijack the meanings of existing words just seems like a recipe for conflicting meanings.
I agree. However, in the rationality community the term evidence is assumed [? · GW] to refer to Bayesian evidence (i.e. as opposed to scientific or legal evidence [LW · GW]). And I've always figured that this is also the case in various technical domains (AI research, data science). So then, at least within the context of these communities there wouldn't be any hijacking or conflict. Furthermore, more and more people/domains are adopting Bayesian thinking/techniques, and so the context where it would be appropriate to have a term like "theoretical evidence" is expanding.
↑ comment by Slider · 2020-11-02T17:53:13.725Z · LW(p) · GW(p)
I am not worried that evidence is too broad. However, on that short definition I have a real hard time identifying what the "event" is that happens or not and that alters the probabilities.
I get that, for example, somebody might be worried about whether stars will collide when this and a neighbouring galaxy merge. Understanding of scales means this will essentially not happen, even without knowing any positions of stars. Sure, it is cognitively prudent. But I have a hard time phrasing it in terms of taking into account evidence. What is the evidence I am factoring in when I come to the realization that 2+2=4? To me it seems that it is a core property of evidence that it is not theoretical; that is the oomph that drives towards truth.
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T18:28:39.836Z · LW(p) · GW(p)
What is the evidence I am factoring in when I come to the realization that 2+2=4?
Check out How to Convince Me That 2 + 2 = 3 [LW · GW] :)
I'm a bit late to the game here, but you may be thinking of a facet of "logical induction". Basically, logical induction is changing your hypotheses based on putting more thought into an issue, without necessarily getting more Bayesian evidence.
The simplest example is deciding whether a mathematical claim is true. Technically, you already have a hypothesis that perfectly predicts your data - ZFC set theory - but producing or checking a proof is highly computationally expensive under this hypothesis, so if you want a probability estimate of whether the claim is true you need some other prediction mechanism.
See the Consequences of Logical Induction [? · GW] sequence for more information.
I'm basing this answer on a clarifying example from the comments section [LW(p) · GW(p)]:
I believe that what I am trying to point at is indeed evidence, in the Bayesian sense of the word. For example, consider masks and COVID. Imagine that we empirically observe that they are effective 20% of the time and ineffective 80% of the time. Should we stop there and take it as our belief that there is a 20% chance that they are effective? No!
Suppose now that we know that when someone with COVID breathes, particles containing COVID remain in the air. Further suppose that our knowledge of physics would tell us that someone standing two feet away is likely to breathe in these particles at some concentration. And further suppose that our knowledge of how other diseases work tell us that when that concentration of virus is ingested, it is likely that you will get infected. When you incorporate all of this knowledge about physics and biology, it should shift your belief that masks are effective. It shouldn't stay put at 20%. We'd want to shift it upward to something like 75% maybe.
When put like this, this "evidence" sounds a lot like priors. The order should be different though:
- First you deduce from the theory that masks are, say, 90% effective. These are the priors.
- Then you run the experiments that show that masks are only effective 20% of the time.
- Finally you update your beliefs downward and say that masks are 75% effective. These are the posteriors.
To a perfect Bayesian the order shouldn't matter, but we are not perfect Bayesians, and if we try to do it the other way around and apply the theory to update the probabilities we got from the experiments, we would be able to convince ourselves the probability is 75% no matter how much empirical evidence to the contrary we have accumulated.
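A minimal sketch of that order-independence, with made-up numbers (the prior and the two likelihood ratios below are placeholders, not figures from the post): for an ideal Bayesian, independent pieces of evidence multiply the odds, so applying the theory before the experiment or after it gives the same posterior.

```python
# Toy illustration (made-up numbers): for an ideal Bayesian, the order in which
# independent pieces of evidence are applied doesn't change the posterior.

def update(prob, likelihood_ratio):
    """Apply one piece of evidence in odds form and return the new probability."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

p0 = 0.5                 # ignorant starting point for "masks are effective"
lr_theory = 9.0          # assumed strength of the physics/biology argument
lr_experiment = 1 / 3    # assumed strength of the disappointing experiment

theory_first = update(update(p0, lr_theory), lr_experiment)
data_first = update(update(p0, lr_experiment), lr_theory)

print(theory_first, data_first)               # both ~0.75: the updates commute
assert abs(theory_first - data_first) < 1e-12
```

With these particular made-up ratios the answer happens to land at 75% either way; the worry above is about human psychology rather than the math, since the multiplication commutes but our tendency to rationalize a number we have already seen does not.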
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T18:36:57.981Z · LW(p) · GW(p)
if we try to do it the other way around and apply the theory to update the probabilities we got from the experiments, we would be able to convince ourselves the probability is 75% no matter how much empirical evidence to the contrary we have accumulated
If this were true, I would agree with you. I am very much on board with the idea that we are flawed and that we should take steps to minimize the impact of these flaws, even if those steps wouldn't be necessary for a perfect Bayesian.
However, it isn't at all apparent to me that your assumption is true. My intuition is that it wouldn't make much of a difference. But this sounds like a great idea for a psychology/behavioral economics experiment!
↑ comment by Idan Arye · 2020-11-02T21:01:14.130Z · LW(p) · GW(p)
The difference can be quite large. If we get the results first, we can come up with Fake Explanations [LW · GW] for why the masks were only 20% effective in the experiments when in reality they are 75% effective. If we do the prediction first, we wouldn't predict 20% effectiveness. We wouldn't predict that our experiment will "fail". Our theory says masks are effective, so we would predict 75% to begin with, and when we get the results it'll put a big dent in our theory. As it should [? · GW].
↑ comment by Slider · 2020-11-02T12:35:54.061Z · LW(p) · GW(p)
If the order doesn't matter, then it seems a kind of "accumulation of priors" should be possible. It is not obviously evident to me how the perfectness of the Bayesian would protect it from this. That is, for a given posterior and constant evidence there exists a prior that would give that conclusion. Normally we think of the limit where the amount and weight of the observations dominates, but there might be at least a calculation where we keep the observation constant and reflect on it more and more, changing or adding new priors.
Then the result that a Bayesian will converge on the truth with additional evidence flips to mean that any evidence can be made to fit a sufficiently complex hypothesis, i.e. that with enough reflection there is asymptotic freedom of belief that evidence can't restrain.
In the face of a very old and experienced Bayesian, almost all things it encounters will shift its beliefs very little. If the beliefs were of unknown origin, one might be tempted to assume that such closedness to evidence is stubbornness or stupidity. If you know that you know, it seems such stubbornness might be justifiable. But how do you know whether you know? And what kind of error is being committed when you are understubborn?
↑ comment by Idan Arye · 2020-11-02T14:38:12.927Z · LW(p) · GW(p)
I think you may be underestimating the impact of falsifying evidence. A single observation that violates general relativity, assuming we can perfectly trust its accuracy and rule out any interference from unknown unknowns, would shake our understanding of physics if it came tomorrow, but had we encountered the very same evidence a century ago our understanding of physics would already have been shaken (assuming the falsified theory wouldn't be replaced with a better one). To a perfect Bayesian, the confidence in general relativity in both cases should be equal - and very low. Because physics is lawful - it doesn't make "mistakes"; we are the ones who are mistaken in understanding it - a single violation is enough to make a huge dent no matter how much confirming evidence we have managed to pile up.
Of course, in real life we can't just say "assuming we can perfectly trust its accuracy and rule out any interference from unknown unknowns". The accuracy of our observations is not perfect, and we can't rule out unknown unknowns, so we must assign some probability to our observation being wrong. Because of that, a single piece of violating evidence is not enough to completely destroy the theory. And because of that, newer evidence should have more weight - our instruments keep getting better, so our observations today are more accurate. And if you go far enough back you can also question the credibility of the observations.
Another issue, which may not apply to physics but applies to many other fields, is that the world does change. A sociology experiment from 200 years ago is evidence about society from 200 years ago, so the results of an otherwise identical experiment from recent years should have more weight when forming a theory of modern society, because society does change - certainly much more than physics changes.
But to the hypothetical perfect Bayesian the chronology itself shouldn't matter - all they have to do is take all that into account when calculating how much they need to update their beliefs, and if they succeed in doing so it doesn't matter in which order they apply the evidence.
↑ comment by Slider · 2020-11-02T14:49:43.138Z · LW(p) · GW(p)
The idea that a single falsification shatters the whole theory seems like a calculation where the prior just gets tossed. However, in most calculations the prior still affects things. If you start from somewhere and then either don't see or do see relativistic patterns for 100 years and then see a relativity violation, a perfect Bayesian would not end up with the same end belief. Using the updated prior or the ignorant prior makes a difference, and the outcome is genuinely a different degree of belief. Or, I guess, another way of saying that is that if you suddenly gain access to the middle-time evidence that you missed, it still impacts a perfect reasoner. Gaining 100 years' worth of relativity patterns increases credence for relativity even if it is already falsified.
↑ comment by Idan Arye · 2020-11-02T16:24:15.554Z · LW(p) · GW(p)
Maybe "destroying the theory" was not a good choice of words - the theory will more likely be "demoted" to the stature of "very good approximation". Like gravity. But the distinction I'm trying to make here is between super-accurate sciences like physics that give exact predictions and still-accurate-but-not-as-physics fields. If medicine says masks are 99% effective, and they were not effective for 100 out of 100 patients, the theory still assigned a probability of that this would happen. You need to update it, but you don't have to "throw it out". But if physics says a photon should fire and it didn't fire - then the theory is wrong. Your model did not assign any probability at all to the possibility of the photon not firing.
And before anyone brings 0 And 1 Are Not Probabilities [LW · GW], remember that in the real world:
- There is a probability photon could have fired and our instruments have missed it.
- There is a probability that we unknowingly failed to set up or confirm the conditions that our theory required in order for the photon to fire.
- We do not assign 100% probability to our theory being correct in the first place, so we can throw it out without Laplace throwing us to hell for a negative-infinity score [LW · GW].
This means that the falsifying evidence, on its own, does not destroy the theory. But it can still weaken it severely. And my point (which I've detoured too far from) is that the perfect Bayesian should achieve the same final posterior no matter at which stage they apply it.
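To see roughly what that looks like in numbers (the likelihood ratios below are invented for illustration): a single observation the theory says should essentially never happen swamps a long run of mild confirmations, and shuffling the order in which the evidence is applied leaves the final posterior unchanged.

```python
# Toy sketch (invented likelihood ratios): one near-decisive falsifying observation
# outweighs many mild confirmations, and the order of the updates doesn't matter.
import math
import random

confirmations = [2.0] * 20   # 20 observations, each twice as likely if the theory is true
falsifier = [1e-9]           # one observation the theory says should "never" happen
                             # (not exactly zero: instruments and setups can fail)

for trial in range(3):
    evidence = confirmations + falsifier
    random.shuffle(evidence)          # apply the same evidence in a different order each time
    log_odds = 0.0                    # start from 1:1 odds, i.e. probability 0.5
    for lr in evidence:
        log_odds += math.log(lr)      # Bayesian updates simply add in log-odds space
    posterior = 1 / (1 + math.exp(-log_odds))
    print(round(posterior, 6))        # same low posterior (~0.001) every run
```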
I think the word you are looking for is analysis. Consider the toy scenario: You observe two pieces of evidence:
- A = B
- B = C
Now, without gathering any additional evidence, you can figure out (given certain assumptions about the gears-level workings of A, B, and C) that A = C. Because it takes finite time for your brain to realize this, it feels like a new piece of information. However, it is merely the result of analyzing the existing evidence to generate additional equivalent statements. Of course, those new ways of describing the territory can be useful, but they shouldn't result in Bayesian updates. Just like getting redundant evidence (e.g. 1. A = B 2. B = A) shouldn't move your estimate further than just getting one of those pieces of evidence.
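A toy sketch of that double-counting point (the likelihood ratio is made up): updating once on "A = B" is fine, but updating a second time on the logically equivalent "B = A", as if it were independent news, inflates the posterior without justification.

```python
# Toy sketch (made-up likelihood ratio): "A = B" and "B = A" are the same fact,
# so updating on both as if they were independent double-counts the evidence.

def update(prob, likelihood_ratio):
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.5
lr = 4.0  # assumed strength of the single observation "A = B"

counted_once = update(prior, lr)            # the correct posterior: 0.8
double_counted = update(counted_once, lr)   # "updating again" on the restatement: ~0.94

print(counted_once, double_counted)         # the second number is an unjustified boost
```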
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T06:47:54.199Z · LW(p) · GW(p)
I think the word you are looking for is analysis.
I see what you mean. However, I don't see how that would fit in a sentence like "The theoretical evidence made me update slightly towards X."
However, it is merely the result of analyzing the existing evidence to generate additional equivalent statements. Of course, those new ways of describing the territory can be useful, but they shouldn't result in Baysean updates.
Ah, but your brain is not a Bayes net! If it were a Bayes net your beliefs would always be in perfect synchrony with the data you've observed over time. Every time you observe a new piece of data, the information gets propagated and all of your beliefs get updated accordingly. The only way to update a belief would be to observe a new piece of data.
However, our brains are far from perfect at doing this. For example, I recently realized [LW(p) · GW(p)] that the value side of the expected value equation of voting is crazy large. I.e. the probability side of the equation is the chance of your vote being decisive (well, for argument's sake) and the value side is how valuable it is for your vote to be decisive. At $100/citizen and 300M citizens, that's $30B in value. Probably much more IMO. So then, in a lot of states the EV of voting is pretty large.
This realization of mine didn't come from any new data, per se. I already knew that there were roughly 300M people in the US and that the impact of my candidate being elected is somewhere in the ballpark of $100/citizen. I just hadn't... "connected the dots" until recently. If my brain were a perfect Bayes net the dots would get connected immediately every time I observe a new piece of data, but in reality there are a huge amount of "unconnected dots".
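For concreteness, the arithmetic behind that realization looks like this; the $100-per-citizen and 300M figures are the ones from the paragraph above, while the probability of casting a decisive vote is a made-up placeholder.

```python
# The back-of-the-envelope arithmetic from the voting example. The $100/citizen and
# 300M figures are from the comment; the decisiveness probability is a placeholder.

citizens = 300_000_000
value_per_citizen = 100                           # rough value of the better candidate winning
value_if_decisive = citizens * value_per_citizen  # 30_000_000_000, i.e. $30B

p_decisive = 1e-7   # hypothetical chance that one vote swings the election in your state
expected_value = p_decisive * value_if_decisive

print(value_if_decisive)         # 30000000000
print(round(expected_value, 2))  # 3000.0 - sizable even at very long odds
```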
(What an interesting phenomenon, having a lot of "unconnected dots" in your head. That makes it sound like a fun playground to explore.
And it's interesting that there is a lot of intellectual work you can do without "going out into the world". Not that you shouldn't "go out into the world", just that there is a lot you can do without it. I think I recall hearing that the ancient Greek philosophers thought that it was low-status to "go out into the world". That was the job for lower class people. High class philosophers were supposed to sit in a chair and think.)
Another phrase for Theoretical Evidence or Instincts is No Evidence At All. What you're describing is an under-specified rationalization made in an attempt to disregard which way the evidence is pointing and let one cling to beliefs for which one doesn't have sufficient support. Zvi's response w.r.t. masks, in light of the evidence that they aren't effective butting up against his intuition that they are, has no evidentiary weight. He was not acting as a curious inquirer; he was a clever arguer [LW · GW].
The point of Sabermetrics is that the "analysis" that baseball scouts used to do (and still do for the losing teams) is worthless when put up against hard statistics taken from actual games. As to your example, even the most expert basketball player's opinion can't hold a candle to the massive computational power required to test these different techniques in actual basketball games.
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T22:00:45.652Z · LW(p) · GW(p)
Theoretical evidence can be used that way, but it can also be used appropriately.
15 comments
Comments sorted by top scores.
comment by Idan Arye · 2020-11-02T00:10:30.220Z · LW(p) · GW(p)
These are not evidence at all! They are the opposite of evidence. Evidence is something from the territory that you use to update your map - what you are describing goes in the opposite direction - it comes from the map to say something specific about the territory.
"Using the map to say something about the territory" sounds like "predictions", but in this case it does not seem like you intend to update your beliefs based on whether or not the predictions come true - in fact, you specify that the empirical evidence is already going against these predictions, and you seem perfectly content with that.
So... maybe you could call it "application"? Since you are applying your knowledge?
Or, since they explicitly go against the empirical evidence, how about we just call it "stubbornness"?
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T01:11:03.002Z · LW(p) · GW(p)
These are not evidence at all! They are the opposite of evidence.
I believe that what I am trying to point at is indeed evidence, in the Bayesian sense of the word. For example, consider masks and COVID. Imagine that we empirically observe that they are effective 20% of the time and ineffective 80% of the time. Should we stop there and take it as our belief that there is a 20% chance that they are effective? No!
Suppose now that we know that when someone with COVID breathes, particles containing COVID remain in the air. Further suppose that our knowledge of physics would tell us that someone standing two feet away is likely to breathe in these particles at some concentration. And further suppose that our knowledge of how other diseases work tell us that when that concentration of virus is ingested, it is likely that you will get infected. When you incorporate all of this knowledge about physics and biology, it should shift your belief that masks are effective. It shouldn't stay put at 20%. We'd want to shift it upward to something like 75% maybe.
Evidence is something from the territory that you use to update your map - what you are describing goes in the opposite direction - it comes from the map to say something specific about the territory.
"Using the map to say something about the territory" sounds like "predictions", but in this case it does not seem like you intend to update your beliefs based on whether or not the predictions come true - in fact, you specify that the empirical evidence is already going against these predictions, and you seem perfectly content with that.
I agree that evidence comes from the territory. And that from there, you can use that to update your map. For example, an apple falling from a tree is evidence for gravity. From there you have a model of how gravity works in your map. From there, you can then use this model of how gravity works to say something about the territory, eg. to make predictions.
Relating this back to masks example, perhaps our model of how gravity works would imply that these aerosol particles would start falling to the ground and thus be present at a much lower concentration six feet away from the person who breathed them compared to two feet away. From there, we should use that prediction to update our belief about how likely it is that masks should be effective.
but in this case it does not seem like you intend to update your beliefs based on whether or not the predictions come true - in fact, you specify that the empirical evidence is already going against these predictions, and you seem perfectly content with that.
That is not the case. It's just that I believe that theoretical evidence should be used in addition to empirical evidence, not as a replacement. They should both be incorporated into your beliefs. I.e. in this example with masks, we should factor in both the (hypothetical?) empirical evidence that masks aren't effective and the theoretical evidence that I described.
So... maybe you could call it "application"? Since you are applying your knowledge?
That sounds like a promising idea. It seems like it needs some tweaking though. I want to be able to say something like "the theoretical evidence suggests". If you replace "theoretical evidence" with "application", it wouldn't make sense. You'd have to replace it with something like "application of what we know about X", but that is too wordy.
(I feel like my explanation for why theoretical evidence is in fact evidence didn't do it justice. It seems like an important thing and I can't think of a place where it is explained well, so I'm interested in hearing explanations from people who can explain/articulate it well.)
↑ comment by supposedlyfun · 2020-11-02T01:37:23.600Z · LW(p) · GW(p)
Imagine that you are working on a product. A/B tests are showing that option A is better, but your instincts, based on your understanding of how the gears turn, suggest that B is better.
Imagining it now. "are showing" makes it sound like your A/B tests are still underway, in which case wait for the study to end (presumably you designed a good study with enough power that the end results would give you a useful answer on A vs. B). But if the tests show A > B, why would you hold on to your B > A prior? Or if you think the tests are only 50% conclusive, why would you not at least update the certainty or strength of your B > A prior?
I think this is why Idan said, "Or, since they explicitly go against the empirical evidence, how about we just call it 'stubbornness'?"
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T01:47:31.571Z · LW(p) · GW(p)
Or if you think the tests are only 50% conclusive, why would you not at least update the certainty or strength of your B > A prior?
I would.
But if the tests show A > B, why would you hold on to your B > A prior?
I wouldn't necessarily do that. The test results are empirical evidence in favor of A > B. The intuition is theoretical evidence in favor of B > A. My position is that they both count and you should update your beliefs according to how strong each of them is.
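A sketch of what "they both count, weighted by strength" could look like in odds form (all likelihood ratios here are invented for illustration): the A/B test pulls toward A, the gears-level intuition pulls more weakly toward B, and the combined posterior lands in between rather than ignoring either source.

```python
# Sketch of weighing both sources (all likelihood ratios invented for illustration).

def posterior(prior, *likelihood_ratios):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr                      # each independent piece of evidence multiplies the odds
    return odds / (1 + odds)

prior = 0.5           # no initial preference between "A is better" and "B is better"
lr_ab_test = 5.0      # assumed: the A/B test favors A fairly strongly
lr_intuition = 1 / 2  # assumed: the gears-level intuition leans toward B, but weakly

print(round(posterior(prior, lr_ab_test), 3))                # 0.833 from the test alone
print(round(posterior(prior, lr_ab_test, lr_intuition), 3))  # 0.714 once the intuition is weighed in
```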
↑ comment by supposedlyfun · 2020-11-02T02:09:40.985Z · LW(p) · GW(p)
Okay, thank you for engaging. Those answers weren't clear to me from the parent piece.
Maybe I reacted strongly because my current prior on my own intuitions is something like "Your intuition is just savannah-monkey-brain cognitive shortcuts and biases layered over your weird life experiences". One thing I've been thinking about lately is how often that prior is actually justified versus how often it's merely a useful heuristic (or a shortcut/bias? ha!) to remind me to shut up and Google/multiply.
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T02:23:43.661Z · LW(p) · GW(p)
Sure thing :)
Speaking generally, not assuming that you are doing this, but I think that there is a bit of a taboo [LW · GW] against hedgehog-thinking. Perhaps there is a tendency for people to overuse that type of thinking, so perhaps it can make sense to be wary of it.
But it is clear that some situations call for us to be more like foxes, and other situations to be more like hedgehogs. I don't think anyone would take the position that hedgehogs are to be completely dismissed in 100% of situations. So then, it would be helpful to have the right terminology at your disposal for when you do find yourself in a hedgehog situation.
↑ comment by Idan Arye · 2020-11-02T12:01:53.858Z · LW(p) · GW(p)
This clarification gave me enough context to write a proper answer [LW(p) · GW(p)].
That sounds like a promising idea. It seems like it needs some tweaking though. I want to be able to say something like "the theoretical evidence suggests". If you replace "theoretical evidence" with "application", it wouldn't make sense. You'd have to replace it with something like "application of what we know about X", but that is too wordy.
Just call it "the theory" then - "the theory suggests" is both concise and conveys the meaning well.
↑ comment by frontier64 · 2020-11-02T22:08:53.607Z · LW(p) · GW(p)
Should we stop there and take it as our belief that there is a 20% chance that they are effective? No!
You need not stop there, but getting an answer that is in conflict with your intuitions does not give you free rein to fight it with non-evidence. If you think there's a chance the empirical evidence so far has some bias, you can look for the bias. If you think the empirical evidence could be bolstered by further experimentation, you perform further experimentation. Trying to misalign your prior in light of the evidence with the goal of sticking to your original intuitions, however, is not ok. What you're doing is giving in to motivated reasoning and then post-hoc trying to find some way to say that's ok. I would call that meta-level rationalization.
↑ comment by johnswentworth · 2020-11-02T00:49:40.240Z · LW(p) · GW(p)
They are definitely evidence; the "theory" or knowledge being applied itself came from the territory. As long as a map was generated from the territory in the first place, the map provides evidence which can be extrapolated into other parts of the territory.
↑ comment by Idan Arye · 2020-11-02T12:15:42.812Z · LW(p) · GW(p)
You need to be very careful with this approach, as it can easily lead to circular logic where map X is evidence for map Y because they both come from the same territory, and map Y is evidence for map X because they both come from the same territory, so you get a positive feedback loop that updates them both to approach 100% confidence.
comment by Shmi (shminux) · 2020-11-02T06:46:22.524Z · LW(p) · GW(p)
What you are describing is models, not observations. If you confuse the two, you end up with silly statements like "MWI is obviously correct".
comment by Measure · 2020-11-02T04:10:02.868Z · LW(p) · GW(p)
Maybe Direct Evidence (something you directly observe or measure) vs. Indirect Evidence (something you infer from previously collected evidence).
↑ comment by Adam Zerner (adamzerner) · 2020-11-02T07:00:26.648Z · LW(p) · GW(p)
Hm, that sounds promising!