Worst Commonsense Concepts?
post by abramdemski · 2021-11-15T18:22:31.465Z · LW · GW · 10 comments
This is a question post.
Contents
Answers: Angela Pretorius (28), Anon User (27), romeostevensit (22), Alexander (19), G Gordon Worley III (17), cousin_it (15), abramdemski (12), SKEM (10), abramdemski (9), Alexander (8), elriggs (7), robertc (3), Mateusz Bagiński (3), Ape in the coat (3), seed (2)
10 comments
Perhaps the main tool of rationality is simply to use explicit reasoning where others don't, as Jacob Falcovich suggests [LW · GW]:
New York Times reporter Cade Metz interviewed me and other Rationalists mostly about how we were ahead of the curve on COVID and what others can learn from us. I told him that Rationality has a simple message: “people can use explicit reason to figure things out, but they rarely do”
However, I also think a big chunk of the value of rationality-as-it-exists-today lies in its corrections to common mistakes of explicit reasoning. (To be clear, I'm not accusing Jacob of ignoring that.) For example, Bayesian probability theory is one explicit theory which helps push a lot of bad explicit reasoning to the side.
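For instance, one classic mistake it corrects is base-rate neglect. Here is a minimal sketch (the test and all numbers are made up for illustration):

```python
# Toy illustration of base-rate neglect (all numbers are hypothetical).
# A test is 90% sensitive with a 10% false-positive rate, but the
# condition has a 1% base rate. Intuition says a positive result means
# ~90% probability of having the condition; Bayes' rule says otherwise.

prior = 0.01                # P(condition)
sensitivity = 0.90          # P(positive | condition)
false_positive_rate = 0.10  # P(positive | no condition)

p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
posterior = sensitivity * prior / p_positive  # Bayes' rule

print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.083, not 0.9
```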
The point of this question, however, is not to point to the good ways of reasoning. The point here is, rather, to point at bad concepts which are in widespread use.
For example:
- Fact vs opinion. There are several reasons why this is an awful concept.
- The common usage suggests that there are "matters of fact" vs "matters of opinion"; eg, I like hummus (opinion) but 1+1=2 (matter of fact). But common usage also suggests that probabilistic reasoning yields mere opinions, while other modes of reasoning (such as direct observation and logical deduction) yield facts. This is inconsistent: it suggests that we can tell whether a belief is an opinion or a fact by examining its subject matter (beliefs about subjective things = opinions; beliefs about objective things = facts), while also making the classification depend on the mode of reasoning by which we arrived at the belief (eg, if I saw a black hole myself, its existence would be a fact, but if I inferred its existence from unproven physics, it would be opinion).
- Calling something a fact generally indicates that others are epistemically obligated to believe it. But if the claim is contentious, then that obligation is precisely what's at issue. So calling something a fact in such cases is generally useless.
- We could take "fact" to mean something like "true opinion". But from the inside, this is no different from a strong belief. So again, calling something a fact rather than a strong opinion seems to add no information (whereas, on common usage, it's supposed to).
- "Purpose" as an inherent property. In common usage, it makes sense to ask "the purpose of life" because a purpose is a property which lots of objects have. In reality, it only makes sense to think of "purpose" relative to some agent, as in "I made this for this purpose". Common usage allows purpose to be agent-independent because there are lots of things (tables, chairs, silverware, etc) which have purposes largely independent of agent (most people use tables to set things on for convenient reach, chairs to sit on, silverware to eat, etc). However, in cases which aren't like that, the language doesn't make sense without explanation (but people treat it like it does).
These are intended to be the sort of thing which people use unthinkingly -- IE, not popular beliefs like astrology. While astrology has some pretty bad concepts, it is explicitly bundled as a belief package which people consider believing/disbelieving. Very few people have mental categories like "fact-ist" for someone who believes in a fact/opinion divide. It's therefore useful to make explicit belief-bundles for these things, so that we can realize when we are choosing whether to use that belief-bundle.
My hope is that when you encounter a pretty bad (but common) concept out there in the wild, you'll think to return here and add it to the list as a new answer. (IE, as with all LW Questions, I hope this can become a timeless list, rather than just something people interact with once when it is on the front page.)
Properly dissolving [LW · GW] the concept by explaining why people (mis)use it is encouraged, but not required for an entry.
Feel free to critique entries in the comments (and critique my above two proposals in the comments to this post), but as a contributor, don't stress out about responding to critiques (particularly if stressing about this makes you not post suggestions -- the voting should keep the worst ones at the top, so don't worry about submitting concepts that aren't literally the worst!).
Ideally, this would become a useful resource for beginners to come and get de-confused about some of the most common confusions.
Answers
‘Justice’ has got to be one of the worst commonsense concepts.
It is used to ‘prove’ the existence of free will and it is the basis of a lot of suboptimal political and economic decision making.
Taboo ‘justice’ and talk about incentive alignment instead.
↑ comment by JenniferRM · 2023-05-21T00:44:51.176Z · LW(p) · GW(p)
Huh. That's weird. My working definition of justice is "treating significantly similar things in appropriately similar ways, while also treating significantly different things in appropriately different ways". I find myself regularly falling back to this concept, and getting use from doing so.
Also, I rarely see anyone else doing anything even slightly similar, so I don't think of myself as using a "common tactic" here? Also, I have some formal philosophic training, and my definition comes from a distillation of Aristotle and Plato and Socrates, and so it makes sense to me that since most people lack similar exposure they would lack the concept by default.
Both "incentive alignment" and "justice" feel like something a slave might beg a master to give them so that the slave was punished and rewarded in less insane ways, so I can see how they might be conflated, but I don't see how "incentive alignment" would serve Robinson Crusoe if he was trying to figure out a good approach to which fruits to eat, or which parts of an island to use in different ways.
What do you think normal people mean by "justice" when you say it is something they can somehow use to prove the existence of free will and justify bad politics?
"Some truths are outside of science's purview" (as exemplified by e.g. Hollywood shows where a scientist is faced with very compelling evidence of supernatural, but claims it would be "unscientific" to take that evidence seriously).
My favorite way to illustrate this: around the end of the 19th century/beginning of the 20th century [time period is from memory, might be a bit off], belief in ghosts was commonplace, with a lot of interest in holding séances, etc, while rare stories of hot rocks falling from the sky were mostly dismissed as tall tales. Then scientists followed the evidence, and now most everybody knows that meteorites are real and "scientific", while ghosts are not, and are "unscientific".
↑ comment by Alexander (alexander-1) · 2021-11-17T04:04:06.607Z · LW(p) · GW(p)
I tend to agree but only to an extent. To our best understanding, cognition is a process of predictive modelling. Prediction is an intrinsic property of the brain that never stops. A misprediction (usually) causes you to attend to the error and update your model.
Suppose we define science as any process that achieves better map-territory convergence (i.e. minimises predictive error). In that case, it is uncontroversial to say that we are all, necessarily, engaged in the scientific process at all times, whether we like it or not. Defining science this way, it is reasonable to say that no claim about reality is, in principle, outside the purview of science.
Moral Uncertainty [? · GW] claims that even with perfect epistemic and ontological certainty, we still have to deal with uncertainty about what to do. However, I've always struggled to see how the above claim about map-territory convergence applies to goal selection and morality. I am not claiming that goal selection and morality are necessarily outside the purview of science. I am just puzzled by this.
How can we make scientific claims about selecting goals? Can we derive an ought from an is? Is it nonsensical to try to apply science to goal selection and morality? I subscribe to physicalism, and I thus believe that goals, decisions and purposes are absurd notions when we boil them down to physics. My puzzlement could be purely illusory but, still, I am puzzled.
Replies from: anon-user
↑ comment by Anon User (anon-user) · 2021-11-18T03:42:21.134Z · LW(p) · GW(p)
Right, something like "Some objective truths are outside of science's purview" might have been a slightly better phrasing, but as the goal is to stay at the commonsense level, trying to parse this more precisely is probably out of scope anyway, so we might as well stay concise...
↑ comment by Gordon Seidoh Worley (gworley) · 2021-11-17T01:59:21.031Z · LW(p) · GW(p)
But some stuff is explicitly outside of science's purview, though not in the way you're talking about here. That is, some stuff is explicitly about, for example, personal experience, which science has limited tools for working with, since it has to strip away a lot of information in order to transform experience into something that works with scientific methods.
Compare how psychology sometimes can't say much of anything about things people actually experience because it doesn't have a way to turn experience into data.
Replies from: adam-selker
↑ comment by Adam Selker (adam-selker) · 2021-11-18T00:41:14.807Z · LW(p) · GW(p)
I think this might conflate "science" with something like "statistics". It's possible to study things like personal experience, just harder at scale.
The Hollywood-scientist example illustrates this, I think. Dr. Physicist finds something that wildly conflicts with her current understanding of the world, and would be hard to put a number on, so she concludes that it can't and shouldn't be reasoned about using the scientific method.
Commonsense ideas rarely include information about their domain of applicability. This creates the need to explicitly note the law of equal and opposite advice, and to evaluate which sorts of people and situations need the antidote to a given piece of commonsense advice.
Also: the tendency toward the fallacy of the single cause. Explanations feel more true because they are more compact representations, and thus easier to think about and to generate vivid examples of, feeding further confirmation bias. The modal fallacy is related as well.
Local Optimisation Leads to Global Optimisation
The idea that if everyone takes care of themselves and acts in their own parochial best interest, then everyone will be magically better off sounds commonsensical but is fallacious.
Biological evolution, as Dawkins has put it, is an example of a local optimisation process that "can drive a population to extinction while constantly favouring, to the bitter end, those competitive genes destined to be the last to go extinct."
Parochial self-interest is indirectly self-defeating, but I keep getting presented with the same commonsense-sounding and magical argument that it is somehow :waves-hands: a panacea.
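A minimal sketch of the underlying failure mode (the fitness landscape below is an arbitrary toy function, not a model of evolution or markets): every step is locally rational, yet the process strands itself on a minor peak.

```python
import numpy as np

# Toy landscape: a small peak near x = -1 and a higher peak near x = 2.
def fitness(x):
    return np.exp(-(x + 1) ** 2) + 2 * np.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.01, iterations=1000):
    """Greedy local optimisation: only ever accept uphill moves."""
    for _ in range(iterations):
        for candidate in (x + step, x - step):
            if fitness(candidate) > fitness(x):
                x = candidate
                break
    return x

x_end = hill_climb(-1.5)  # starts near the small peak, climbs it, and stops
print(f"greedy climber: x = {x_end:.2f}, fitness = {fitness(x_end):.2f}")  # ~1.0
print(f"global peak:    x = 2.00, fitness = {fitness(2.0):.2f}")           # ~2.0
```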
Probably the most persistent and problem-causing is the commonsense way of treating things as having essences.
By this I mean that people tend to think of things like people, animals, organizations, places, etc. as having properties or characteristics, as if each thing had a little file inside it with various bits of metadata that define its behavior. But this is definitely not how the world works! A property like this is at best a useful fiction or abstraction that allows simplified reasoning about complex systems, but it also leads to lots of mistakes, because most people don't seem to realize that these properties are aggregations over complex interactions in the world rather than real things in themselves.
You might say this is mistaking the map for the territory, but I think framing it this way makes it a little clearer just what is going on. People act as if things had essential properties, think that's how the world actually is, and as a result make mistakes when that model fails to correspond to what actually happens.
To me, some of the worst commonsense ideas come from the amateur psychology school: "gaslighting", "blaming the victim", "raised by narcissists", "sealioning" and so on. They just teach you to stop thinking and take sides.
Logical fallacies, like "false equivalence" or "slippery slope", are in practice mostly used to dismiss arguments prematurely.
The idea of "necessary vs contingent" (or "essential vs accidental", "innate vs constructed" etc) is mostly used as an attack tool, and I think even professional usage is more often confusing than not.
↑ comment by Yoav Ravid · 2021-11-16T05:16:37.114Z · LW(p) · GW(p)
I think it would be useful if you edited the answer to add a line or two explaining each of those or at least giving links (for example, Schelling fences on slippery slopes [LW · GW]), cause these seem non-obvious to me.
Replies from: tomcatfish
↑ comment by Alex Vermillion (tomcatfish) · 2021-12-01T19:17:42.090Z · LW(p) · GW(p)
I actively disagree with the top-level comment as I read it.
I do, however, think this may be a difference of interpretation of the question and its domain.
I don't think it makes very much sense to say that "gaslighting" is a bad idea. It describes a harmful behavior that you may observe in the world.
I think that @cousin_it may be saying, in no conflict with what I said above, that the verbal tag "gaslighting" is frequently used in inappropriate ways, like to shut out a person who says 'I didn't do that' by putting them in a bucket labeled "abuser". I think this is a reasonable observation [1], but I don't think this is what the question-asker meant. I think they were seeking bad concepts, not concepts that are used in bad ways.
"weird" vs "normal". This concept seems to bundle together "good" and "usual", or at least "bad" with "unusual".
- If most people are mostly strategic most of the time, then common actions will indeed be strategic ones, so uncommon actions are probably unstrategic. However, in reality, we all have severely bounded rationality (compared to a Bayesian superintelligence). "To be human is to make ten thousand errors. No one in this world achieves perfection." [? · GW] This limits the usefulness of the absurdity heuristic [? · GW] for judging the utility of actions.
- Even if you aren't making this conflation, "weird" vs "normal" can encourage a map/territory error, where "weird" things are thought of as inherently low-probability, and "normal" things as inherently high-probability. Bayesians think of probabilities as a property of observing agents rather than as inherent to things-in-themselves (see the sketch after this list). To avoid this mistake, people sometimes say things like "since the beginning, not one unusual thing has ever happened" [LW · GW] (which can be interpreted as saying that, if we insist on attaching weirdness/normalness as an inherent property of events, we should consider everything that actually happens normal).
- I think the label "weird" also seems to serve as a curiosity-stopper [? · GW], in some cases, because the inherent weirdness "explains" the unusual observations. EG, "they're just weird".
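A minimal sketch of the agent-relativity point in the second bullet (the coin and the numbers are illustrative assumptions): two agents with the same prior see different amounts of evidence from the same coin, and correctly assign different probabilities to the same next flip.

```python
# Two agents share a uniform prior over the coin's bias, but have seen
# different amounts of data from one coin (which, unknown to them, comes
# up heads 80% of the time). Laplace's rule of succession then gives
# P(next flip is heads) = (heads_seen + 1) / (flips_seen + 2).

def predict_next_heads(heads_seen, flips_seen):
    return (heads_seen + 1) / (flips_seen + 2)

print(predict_next_heads(1, 2))    # Agent A saw 2 flips:   0.5
print(predict_next_heads(80, 100)) # Agent B saw 100 flips: ~0.794

# Neither agent is wrong: the probability lives in each agent's state
# of information, not in the coin itself.
```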
I dislike when people talk about someone "deserving" something when what they mean is that they would like that thing to happen. The word seems to imply that the person may make a demand on reality (or on reality's subcategory of other people!).
I suggest we talk about what people earn and what we wish for them instead of using this word that imbues them with a sense of "having a right to" things they did not earn.
That is, of course, not saying we should stop wishing others or ourselves well.
Just saying we should be honest that that is what we are doing, and use "deserving" only in the rare cases when we want to imbue our wish or opinion with a cosmic sense of purpose, or imply the now-common idea in some other way. When it is no longer commonly used in cases where an expression of goodwill (or "badwill", for that matter) will do, it may stand out in such cases and have the proper impact.
Of course we are not going to make that change, and we wouldn't even if this reached enough people, because people LOVE to mythically "deserve" things, and it makes them a lot easier to sell to, or to infuriate, too. We may, however, just privately notice when someone tries to sell us something we "deserve", address the thanks to the person wishing us well (instead of some nebulous "Universe") when someone tells us we "deserve" something good, and consider our actual moral shortcomings when the idea creeps up that we might "deserve" something bad.
Copenhagen interpretation of ethics.
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don’t subscribe to this school of thought, but it seems pretty popular.
- This heuristic probably derives partly from the (potentially useful) idea that "good behavior" has to be a function of what you know; "doing the best you can" always has to be understood in the context of what you have/haven't learned.
- It may also arise from a heuristic that says, if you're involved in a bad situation, you have probably done something wrong yourself. This may be useful for preventing common forms of blame-dodging, much like anti-mafia laws help arrest kingpins who would otherwise not be directly liable for everything done by their organization.
- However, this ends up rewarding ignorance, and punishing people who are doing as much as they can to help (see article for examples; also see Asymmetric Justice [LW · GW]).
Other common problems with blame.
- People often reason as if blame is a conserved quantity; if I'm to blame, then the blame on you must somehow be lessened. This is highly questionable.
- A problem can easily have multiple important causes. For example, if two snipers attempt to assassinate someone at the same time, should we put more blame on the one whose bullet struck first? Should one be tried for murder, and the other be tried merely for attempted murder?
- Blaming things outside of your control. Quoting HPMOR, chapter 90 [LW · GW]:
- "That's not how responsibility works, Professor." Harry's voice was patient, like he was explaining things to a child who was certain not to understand. He wasn't looking at her anymore, just staring off at the wall to her right side. "When you do a fault analysis, there's no point in assigning fault to a part of the system you can't change afterward, it's like stepping off a cliff and blaming gravity. Gravity isn't going to change next time. There's no point in trying to allocate responsibility to people who aren't going to alter their actions. Once you look at it from that perspective, you realize that allocating blame never helps anything unless you blame yourself, because you're the only one whose actions you can change by putting blame there. That's why Dumbledore has his room full of broken wands. He understands that part, at least."
- See Heroic Responsibility. [? · GW]
The concept of blame is not totally useless. It can play several important roles:
- Providing proper incentives. In some situations, it can be important to assign blame and punishment in order to shape behavior, especially in contexts where people who do not share common goals are trying to cooperate.
- Fault analysis. When people do share common goals, it can still be important to pause and think what could have been done differently to get a better result, which is a form of assigning blame (although not with accompanying punishment).
- Emotional resolution. Sometimes an admission of guilt is what's needed to repair a relationship, or otherwise improve some social situation.
- Norm enforcement. Sometimes an apology (especially a public apology) serves the purpose of reinforcing the norm that was broken. For example, if you fail to include someone in an important group decision, apologizing shows that you think they should be included in future decisions. Otherwise, making decisions without that person could become normal.
However, I find that blame discussions often serve none of these purposes. In such a case, you should probably question whether the discussion is useful, and try to guide it to more useful territory.
Self-Fulfilling Prophecy
The idea is that if you think about something, then it is more likely to happen because of some magical and mysterious "emergent" feedback loopiness and complex chaotic dynamics and other buzzwords.
This idea has some merit (e.g. if your thoughts motivate you to take effective actions). I don't deny the power of ideas. Ideas can move mountains. Still, I've come across many people who overstate and misapply the concept of a self-fulfilling prophecy.
I was discussing existential risks with someone, and they confidently said, "The solution to existential risks is not to think about existential risks because thinking about them will make them more likely to happen." This is the equivalent of saying, "Don't take any precautions ever because by doing so, you make the bad thing more likely to happen."
↑ comment by abramdemski · 2021-11-19T17:41:40.532Z · LW(p) · GW(p)
I don't want to do without the concept. I agree that it is abused, but I would simply contest whether those cases are actually self-fulfilling. So maybe what I would point to, as the bad concept, would be the idea that most beliefs are self-fulfilling. However, in my experience, this is not common enough that I would label it "common sense". Although it certainly seems to be something like a human mental predisposition (perhaps due to confirmation bias, or perhaps due to a confusion of cause and effect, since by design, most beliefs are true).
Replies from: alexander-1
↑ comment by Alexander (alexander-1) · 2021-11-20T03:42:07.967Z · LW(p) · GW(p)
You're right. As romeostevensit pointed out, "commonsense ideas rarely include information about the domain of applicability." My issue with self-fulfilling prophecy is that it gets misapplied, but I don't think it is an irretrievably bad idea.
This insightful verse from the Tao Te Ching is an exemplary application of the self-fulfilling prophecy:
If you don't trust the people, you make them untrustworthy.
It explicitly states a feedback loop.
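A toy simulation makes the loop explicit (a minimal sketch; the linear update rule and the 0.5 adjustment rates are arbitrary assumptions, not a real model of trust):

```python
def run(initial_trust, trustworthiness=0.8, steps=20):
    """The ruler's trust and the people's trustworthiness pull on each other."""
    trust = initial_trust
    for _ in range(steps):
        # People respond to being (dis)trusted by drifting toward the
        # level of trust they receive...
        trustworthiness += 0.5 * (trust - trustworthiness)
        # ...and the ruler updates trust toward observed trustworthiness.
        trust += 0.5 * (trustworthiness - trust)
    return trust, trustworthiness

print(run(initial_trust=0.2))  # distrustful ruler: both settle low (~0.4)
print(run(initial_trust=0.8))  # trusting ruler: both stay high (0.8)
```

The ruler's initial belief partly determines where the system ends up, which is exactly what makes the prophecy self-fulfilling.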
Replies from: Yoav Ravid
↑ comment by Yoav Ravid · 2021-11-20T11:16:07.990Z · LW(p) · GW(p)
You can add it to Self Fulfilling/Refuting Prophecies [? · GW] as an example
Metric words (eg "good", "better", "worse") with an implicit privileged metric. A common implicit metric is "social praise/blame", but people can also have different metrics in mind and argue past each other because "good" is pointing at different metrics. Usually, just making the metric explicit or asking "better in what way?" clears it up.
The same goes for goal words ("should", "ought", "must", "need", etc) with an implicit privileged goal. Again, you can ask: "You say you 'have to do it', but for what purpose?"
Btw, I'm not against vague goals/metrics that are hard to make legible, just the implicit, privileged ones.
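As a hypothetical illustration (the laptops and metrics below are made up), "better" behaves like min/max with no key until you pass the metric explicitly:

```python
# "Which laptop is better?" is under-specified until the metric is
# explicit, just as min/max need an explicit key.
laptops = [
    {"name": "A", "price": 900, "battery_hours": 12, "weight_kg": 1.1},
    {"name": "B", "price": 600, "battery_hours": 7, "weight_kg": 1.9},
]

best_for_budget = min(laptops, key=lambda l: l["price"])           # B
best_for_travel = min(laptops, key=lambda l: l["weight_kg"])       # A
best_for_battery = max(laptops, key=lambda l: l["battery_hours"])  # A

print(best_for_budget["name"], best_for_travel["name"], best_for_battery["name"])
# "Better" changed meaning each time the metric changed.
```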
"I like hummus" is a fact, not an opinion
Qualitative vs. quantitative differences / of kind vs. of degree
It's not that the distinction is meaningless (in some sense, liquid water certainly isn't "just ice but warmer"), but most of the times I recall encountering it, it was being abused or misapplied in one way or another:
(1) It seems to be very often (usually?) used to downplay some difference between A and B by saying "this is just a difference of degree, not a difference of kind", without explaining why one believes so or pointing to an example of an alternative state of the world in which the difference between A and B would be qualitative.
(2) It is often ignored that differences of degree can become differences of kind after crossing some threshold (probably most, if not all, differences of kind arise like that). At some point ice stops merely getting warmer and melts, a rocket stops merely accelerating and reaches escape velocity, and a neutron star stops merely increasing in mass and collapses into a black hole.
(3) Whenever this distinction is introduced, it should be clear what is meant by a qualitative and a quantitative difference in the particular domain of discourse, either by reference to some qualitativeness/quantitativeness criteria or by giving sets of examples of both. For example, when comparing intelligence between species, one could make a case that we see a quantitative difference between ravens and New Caledonian crows but a qualitative one between birds and hookworms. We may not have a single, robust metric for comparing average intelligence between taxa, but in this case we know it when we see it, and we can reasonably expect others to see the distinction as well. (TL;DR: it shouldn't be based on gut feeling when gut feelings about what is being discussed are likely to differ between individuals.)
Related to facts vs opinions, but not quite the same, is the objective/subjective dichotomy popular in conventional philosophy. I find it extremely misleading, contributing a lot to asking wrong questions and accepting ridiculous non sequiturs.
For instance, it's commonly assumed that things are either subjective or objective. Moreover, if something is subjective, it's taken to be arbitrary, not real, and not meaningful. To understand why this framework is wrong, one requires a good understanding of the map/territory distinction and of correspondence: how completely real things, like the wings of an airplane, can exist only in the map, and how maps themselves are embedded in the territory.
But this isn't part of philosophy 101, and so we get confused arguments about the objectiveness of X, and whole schools of philosophy which notice that, in a sense, everything we interact with is subjective, and conclude that objectivity either doesn't exist or that its existence doesn't matter to us, with all kinds of implications, some of which do not add up to normality.
Radical actions. Properly, a "radical" is someone trying to find and eliminate the root causes of social problems, rather than just their symptoms. Many people pursue radical goals through peaceful means (spreading ideas, starting a commune, attending a peaceful protest, or boycotting would be examples), yet "radical act" is commonly used as a synonym for "violent act".
Extremism. This means having views far outside the mainstream attitudes of one's society. But the word also carries a strong negative connotation, is prohibited by law in some countries, is mentioned alongside "terrorism" as if they were synonyms, and is redefined by Wikipedia as "those policies that violate or erode international human rights norms" (but what if one's society is opposed to human rights?!). Someone disagreeing with society is not necessarily bad or violent, so this is a bad concept.
"Outside of politics". Any choice one makes affects the balance of power somehow, so one cannot truly be outside. In practice the phrase often means that supporting the status quo is allowed, but speaking against it is banned.
10 comments
Comments sorted by top scores.
comment by Bucky · 2021-11-16T16:37:31.483Z · LW(p) · GW(p)
Fact vs opinion is taught at my kids' school (age ~7 from memory). The lesson left them with exactly the confusion that you are talking about. Talking to them I got the impression that the teacher didn't really have this sorted out in their head themself.
My way of explaining it to them was that there are matters of fact and matters of opinion but often we don't know the truth about matters of fact. We can have opinions about matters of fact but the difference is that there is a true answer to those kinds of questions even when we don't know. This seemed to help them but I couldn't help but feel that it is kind of an unhelpful dichotomy.
Replies from: abramdemski
↑ comment by abramdemski · 2021-11-17T04:31:54.401Z · LW(p) · GW(p)
I think maybe teachers (and parents) teach this because it's a social tool (we need a category for "hey don't argue about that, it's fine" for peacekeeping, and another category for "but take this very seriously"). Probably we can't get people to stop using these categories without a good replacement social tool.
Replies from: None
↑ comment by [deleted] · 2021-11-19T11:19:10.892Z · LW(p) · GW(p)
Replies from: ali-rizvi-santiago
↑ comment by Anonymous (ali-rizvi-santiago) · 2022-03-28T01:26:51.652Z · LW(p) · GW(p)
I think that we'll need to figure this out at some point to be able to clearly distinguish these things for children. Not being able to distinguish fact from fiction can lead to the propagation of non-factual information, which will then be discerned as fact as dictated by the overall consensus of said individual's community. Our present tools on the internet are very good at determining relevancy (which is debatable, of course), but they are not the best at determining verity or distinguishing bias.
We have to assume that a source of factual information is altruistic and actually does the research needed to distinguish one from the other. But this in itself is very fragile: if the information violates the facts that a community already believes (such as when said altruistic person is in actuality not altruistic), there is no simple way to revert information that has already propagated throughout said communities. Being able to clearly distinguish facts from opinions enables one to reinforce the idea of what's factual, and to avoid the potential issue of disputing a fact by comparing it against a differing opinion which is being misinterpreted as a fact.
I think this is critical, because certain structures rely on being able to discern fact from opinion (this has manifested itself in the national policies of some countries), but these can of course be easily misrepresented if an individual only consumes information (factual or otherwise) from a similar kind of source.
comment by Yoav Ravid · 2021-11-15T21:08:55.209Z · LW(p) · GW(p)
The fact vs opinion thing is indeed a common thing. One especially common and tricky version of it is a stance that says "What I'm saying is based on science, so it isn't an opinion, it's a fact" - I know because I used to believe and say that myself... then I read the sequences and Scott Alexander and it blew that notion out of the water for me. Scott especially, because he has several good posts on how science is hard, and isn't as simple as "ask question > conduct experiment > acquire truth". After reading those posts I immediately lowered my confidence in a bunch of my beliefs.
comment by Shmi (shminux) · 2021-11-15T20:05:37.675Z · LW(p) · GW(p)
I'd go further than "fact vs opinion" and claim that the whole concept of there being one truth out there somewhere is quite harmful, given that the best we can do is have models that heavily rely on personal priors and ways to collect data and adjust said models.
Replies from: elriggs, Viliam, TAG
↑ comment by Logan Riggs (elriggs) · 2021-11-17T22:19:26.276Z · LW(p) · GW(p)
I don't understand why shminux's comment was down to -6 (as of 11/17). I think this comment is good for thinking clearly. How you perceive reality is based on how you collect data, update, and interpret events. You can get really different results by changing any of those (biased data collection, only updating on positive results, redefining labels in a motte-and-bailey, etc.).
Going from a "one truth" model to a "multiple frames" model helps with communicating with others. I find it easier to tell someone
from a semantics viewpoint, 'purpose' is a word created by people to describe goals in normal circumstances. From this standpoint, to ask "What's my purpose in life?" doesn't make sense since a goal doesn't make sense applied to a whole life [Note: if you believe in a purposeful god, then yes you can ask that question]
than to state it more objectively (ie without the "from a semantics viewpoint").
This is also good for clarifying metrics because different frames are better at different metrics, which should be pointed out (for clear communication's sake).
Instead of denying whole viewpoints, this allows zeroing in on what exactly is being valued and why. For example, Bob is wishing people loving-kindness and imagining them actually being happy as a result of his thoughts. I can say this is bad on a predictive metric, but good on a "Bob's subjective well-being" metric.
↑ comment by Viliam · 2021-11-15T21:34:11.547Z · LW(p) · GW(p)
The concept of "one truth" can be an infohazard, if people decide that they already know the truth, so there is no reason to learn anymore, and all that is left to do is to convert or destroy those who disagree.
To me this seems like an example of the valley of bad rationality [? · GW]. If possible, the solution is more rationality. If not possible, then random things will happen, not all of them good.
comment by PeterL (peter-loksa) · 2023-05-16T17:04:02.264Z · LW(p) · GW(p)
"His own stupid" - the idea that if someone is stupid, he deserves all the bad consequences of being stupid.
Disproof:
Let's assume this is true. Deserving the consequences of being stupid requires having voluntarily chosen stupidity, so there would have to have been at least one voluntary action that turned him from wise to stupid. But why would someone voluntarily choose to be stupid? Only because he wouldn't have known what being stupid means, in which case he was already stupid. Thus there is no such first action. (Assumption rejected.)