You are free to choose between A and B if your choice will determine the outcome.
Right, but there's a lot of conflation between what people should think I am and what they unfairly do think I am. The latter is, to be fair, a real thing, but it's a real thing which the thing people should think I am is trapped inside of; to the extent that it is responsible for causing problems which the thing people should think I am is liable to be blamed for by nonconsensual association, it is parasitic, and the thing people should think I am is a victim.
Upvoted for the final sentence of your post; thank you so much.
Whoever argues that "MLK is a criminal" with the intent of instilling the negative connotation of the term is unlikely to apply the same standard everywhere.
This is an indictment of the human species, if this purported "unlikelihood" is true. Maybe you should not underestimate the likelihood that your interlocutors have a serious, deep resentment of unlawful behavior, however alien this might be to you. Maybe part of their fundamental self-narrative includes the unforgivable harms consistently caused to them by crimes which others superficially dismissed as mild. They may think "If this is a mild (read: non-central) crime, I don't want to know what the serious (read: central) ones are." Maybe they feel they have no choice but to become a total "I'll end it forever if it's the last thing I do"-level enemy of criminality in all its forms, as a precaution.
If humanity is willing to coexist with anything, well, imagine the worst possible thing. Imagine something worse than that. Worse in ways you didn't even realize things could be worse by. Recursive worseness. Explosive worseness via combination of worseness-multipliers. Worseness-multipliers that might seem like normally good things if they weren't being used by your imagination for the explicit purpose of making things worse. (Like hope, for example.) That is a thing which counts as a member of the "anything" set which humanity would be willing to coexist with, in the world where humanity would coexist with anything.
Unconditional coexistence is not safe for humans. To refuse coexistence with something that is evil in letter and spirit, on the outside and on the inside, you must have a clear sense of that thing, no matter what the stereotypes — the consensus reality — about its symbolic representation may be.
I liked this post on a personal level, because I like seeing how people can, with extremely fine subtlety, trick themselves into thinking the world is cooler than it is. But I had to downvote, because that is not what LessWrong is for; or at least, to the extent that self-deceiving memes are shared here, the self-deception is supposed to be explicitly intentional. "Instructions For Tricking Yourself Into Feeling That The World Is Cooler" is a thing you could plausibly post and explain, such that your beliefs about which tricks actually work pay rent in anticipated experiences.
My objection to the specific contents of this post: you cannot make good things more plausible-about-reality by writing stories where realistic events happen plus good unrealistic events happen; the unrealistic events do not gain plausibility-about-reality by association-through-fiction.
Some clarifications about my objection, and some questions to help you hold your ground if you should and if you can: I don't take for granted that this observation is necessarily mutually exclusive with what you have written, but the observation is ostensibly mutually exclusive; the relation of 'subjectively-unresolved ostensible mutual exclusivity' between your post and my observation is what we might call 'tension'. Can you explain how the intended spirit of your post survives my objection? What do you think is the right way to resolve the tension between our world models?
One option for resolving the tension is to fix your world-model by removing this meme from it, because you realize my model of reality, which does not contain your meme, is more consistent with what is noticeable about reality. Another option is to explain how I've misinterpreted the difference between what your argument should have been (which could be considered close enough to what you actually articulated) and the worse version it actually sounded like, and then to explain that what your argument was close to matters more than how it sounded to me, even if I heard right. This latter option could be considered 'rescuing the spirit of the post from the letter of it'.
(Sidenote: I will concede to you the merit that having to explain the trick makes it less subtle, and might make it work less for people who care about their beliefs paying rent in anticipated experiences. This is not fun, and I think there should be a place where you can post specifically rationalism-informed tricks like that; maybe a forum called FunTricks. Arguably this would boost epistemic security for the people who do care about beliefs paying rent in anticipated experiences, as content posted to FunTricks would serve as puzzles for experienced Bayescrafters to learn more about the nature of self-deception from. The irrationalists can get lost in a fun hall of mirrors, and the Bayescrafters can improve their epistemic security; it would be win-win.
FunTricks posters could rate posts by how subtle the trick was; whether they noticed the mistake. Subtlevote vs. "Erm, wait"-vote.)
Imagine that your meme is importantly inconsistent with what is noticeable about reality. After all my criticisms, what merits of your post do you think still stand? I am interested in this! I do not want to deny your post any credit that is due to it, even if I tentatively must downvote it because that credit is outweighed by the fact that it can mislead people about how cool reality is, which is something LessWrongers care about!
It is, in principle, possible that I am in the wrong; that your model is better due to the presence of your meme(s). That would be great if it were demonstrated, because I would have the privilege of learning more from you than you would learn from me, which is a serious kind of 'winning' in debates! I am especially excited about opportunities for viewquakes!
Finally, thank you for posting on LessWrong! Thank you for engaging with philosophy and the memetic evolutionary process! Every interaction can make us wiser if we have the courage to admit error, forgive error, and persist, in the course of memetic negotiation! If you post memes (idea-genes) on LessWrong, please make those memes pay rent in anticipated experiences; those are the memes we do want here! :)
I don't agree that focusing on extrinsic value is less myopic than focusing on intrinsic value. This world is full of false promises, self-delusion, rationalization of reckless commitment, complexity of value, bad incentives/cybernetics, and the fallaciousness of planning. My impression is that the conscientious sort of people who think so much about utility have overconfidence in the world's structural friendliness and are way more screwed than the so-called "myopic" value-focused individuals.
It's objectively not good enough to be good to a boring degree. The world is full of bullying, we should stand up to it, and to stand up effectively against bullying is rarely boring.
Objective general morality exists, it doesn't have to exist for the sake of anything outside itself, and you should share control over the world with objective general morality if not outright obey it; whichever is better after fully accounting for the human hunger for whimsy. The protection of whimsy is objectively a fragment of objective goodness.
All the narrative proofs that the world should not flow in accordance with good intentions are just hints about how to refine one's conception of Good Itself so that it does not lead to outcomes that are, surprise surprise, actually bad.
"Always remember that it is impossible to speak in such a way that you cannot be misunderstood: there will always be some who misunderstand you."
― Karl Popper
A person can rationalize the existence of causal pathways where people end up not understanding things that you think are literally impossible to misunderstand, and then very convincingly pretend that such a pathway is what led them to where they are,
and there is also the possibility that someone will follow such a causal pathway into actually, sincerely misunderstanding you, whereupon you will falsely accuse them of pretending to misunderstand.
This is wonderful; feels much more friendly, practical, and conducive to ideal speech situations. If someone tries to attack me for a wrong probability, I can respond "I'm just talking but with additional clarity; no one is perfect."
I am under the impression that here at LessWrong, everyone knows we have standards about what makes good, highly-upvotable top-level content. Currently I would not approve of a version of myself who would conform to those standards I perceive, but I can be persuaded otherwise, including by methods such as improving my familiarity with the real standards.
Addendum: I am not the type of guy who does homework. I am not the type of guy who pretends to have solved epistemology when he hasn't. I am the type of guy who exchanges considerations and honestly tries to solve epistemology, and follows up with "but I'm not really sure; what do you guys think?" That is not highly-upvotable content in this part o' town.
No one will hear my counter-arguments to Sabien's propaganda unless they ask me for them privately. Sabien has blocked me for daring to be unsubtle with him. He is as welcome as anyone else to come forth to me and exchange considerations. I will not be lured into war; if this is to be settled, it will be settled with words and in ideal speech situations.
Certain texts are characterized by precision, such as mathematical proofs, standard operating procedures, code, protocols, and laws. Their authority, power, and usefulness stem from this quality. Criticizing them for being imprecise is justified.
Nope; precision has nothing to do with intrinsic value. If Ashley asks Blaine to get her an apple from the fridge, many would agree that 'apple' is a rather specific thing, but if Blaine were insistent on being dense he could still say "Really? An apple? How vague! There are so many possible subatomic configurations that could correspond to an apple, and if you don't have an exact preference ordering over sub-atomically specified apple configurations, then you're an incoherent agent without a proper utility function!"
And Blaine, by the way, is speaking the truth here; Ashley could in fact be more specific. Ashley is not being completely vague, however; 'apple' is specific enough to pick out a range of things, and within that range it may be ambiguous what she wants, from the perspective of someone strangely obsessed with specificity, but Ashley can in fact simply and directly want every single apple that matches her range-specified criteria.
So it is with words like 'Good', 'Relevant', 'Considerate', 'Justice', and 'Intrinsic Value Strategicism'.
Explain, please? I affirm the importance of charitability and I am interested in greater specificity about what you have identified as 'aggressiveness'. I see aggressiveness as sometimes justified.
No standing with whom? I am requesting that you not be cruel over shallow and irrelevant matters; that is exactly what I should be doing here no matter the density and inconsiderateness of you or anyone else.
My standing with Omniscient beings is the standing that should primarily matter to allegedly rational people.
I should have done the second; I was mistaken in believing that clicking "Read More" in the commenting guidelines would not reward me with sufficient clarity about Duncan's elaborate standards. I apologize for my rude behavior.
I insist that you either always use it non-violently or always explain why it does not just mean 'being weird and disagreeable', and also why it doesn't mean anything else that is entirely morally irrelevant either, because you should never be cruel over anything that is morally irrelevant.
Why the downvotes? "Lizardman" is a great status-reducing thing to call a person just for being too weird and disagreeable! :)
This was the original reasoning behind judges-elected-for-life—that society needed principled men and women of discernment who did not need to placate or cater to lizardman.
After all, no one of discernment would ever heed a true lizardman. They know the difference between someone who seems like a lizardman and someone who is a lizardman.
No sane person can succinctly disagree with any of this on the level of truth or of misleadingness; excellently written.
If an AI copied all human body layouts down to the subatomic level, then re-engineered all human bodies so they were no longer recognizably human but rather something human-objectively superior, then gave all former humans the option to change back to their original forms, would this have been a good thing to do?
I think so!
It has been warned in ominous tones that "nothing human survives into the far future."
I'm not sure human-objectivity permits humanity to remain mostly-recognizably human, but it does require that former humans have the freedom to change back if they wish, and I'm sure that many would, and that would satisfy the criterion of something human surviving into the far future.
(I apologize for being, or skirting too close to the edges of being, too political. I accept downvotes as the fair price and promise not to begrudge them.)
I have an observation that I want more widely appreciated by low-contextualizers (who may be high or low in decoupling as well; they are independent axes): insisting that conversations happen purely in terms of the bet-resolvable portion of reality, without an omniscient being to help out as bet arbiter, can be frame control.
Status quos contain self-validating reductions, and people looking to score Pragmatic Paternalist status points can frame predictable bet outcomes as vindication of complacence with arbitrary, unreasonably and bullyishly exercised, often violent, vastly intrinsic-value-sacrificial power, on the basis of the weirdness and demonstrably inconvenient political ambitiousness of fixing the situation.
They seem to think, out of entitlement to epistemic propriety, that there must be some amount of non-[philosophical-arguments]-based evidence that should discourage a person from trying to resolve vastly objectively evil situations that neither the laws of physics, nor any other [human-will]-independent laws of nature, require or forbid. They are mistaken.
If that sounds too much like an argument for communism, get over it; I love free markets and making Warren Buffett the Chairman of America is no priority of mine.
If it sounds too much like an argument for denying biological realities, get over it; I'm not asking for total equality, I'm just asking for moral competence on the part of institutions and individuals with respect to biological realities, and I detest censorship of all the typical victims, though I make exception for genuine infohazards.
If you think my standards are too high for humanity, were Benjamin Lay's also too high? I think his efforts paid off even if our world is still not perfect; I would like to have a comparable effect, were I not occupied with learning statistics so that I can help align AI for this guilty species.
If you think factory farmed animals have things worse than children... Yes. But I am alienated by EA's relative quietude; you may not see it this way, but so-called lip service is an invitation for privately conducted accountability negotiation, and I value that immensely as a foundation for change.
Engineering and gaming are just other words for understanding the constraints deeply enough to find the paths to desired (by the engineer) results.
Yes.
The words you choose are political, with embedded intentional beliefs, not definitional and objective about the actions themselves.
Well, now that was out of left field! People don't normally say that without having a broader disagreement at play. I suppose you have a more objective rewording prepared to offer me? My point about the letter of the law being more superficial than the spirit seems like a robust observation, and I think my choice of words accurately, impartially, and non-misleadingly preserves that observation;
until you have a specific argument against the objectivity, your response amounts to an ambiguously adversarially-worded request to imagine I was systematically wrong and report back my change of mind. I would like you to point my imagination in a promising direction; a direction that seems promising for producing a shift in belief.
Funny that you think gameability is closer to engineering; I had it in mind that exceptioncraft was closer. To my mind, gameability is more like rules-lawyering the letter of the law, whereas exceptioncraft relies on the spirit of the law. Syntactic vs semantic kinda situation.
Arbitrary incompleteness invites gameability, and arbitrary specificity invites exceptioncraft.
You can quote text by starting a line with a greater-than sign (>) and a space.
Surely to be truthful is to be non-misleading...?
Read the linked post; this is not so. You can mislead with the truth: you can speak a wholly true collection of facts that misleads people. If someone misleads using a fully true collection of facts, saying they spoke untruthfully is confusing. Truth cannot just always lead to good inferences; truth does not have to be convenient, as you say in the OP. Truth can make you infer falsehoods.
Saying you put the value of truth above your value of morality on your list of values is analogous to saying you put your moral of truth above your moral of values; it's like saying bananas are more fruity to you than fruits.
Where does non-misleadingness fall on your list of supposedly amoral values such as truth and morality? Is non-misleadingness higher than truth or lower?
The existence of natural abstractions is entirely compatible with the existence of language games. There are correct and incorrect ways to play language games.
Dialogue trees are the substrate of language games, and broader reality is the substrate of dialogue trees. Dialogue trees afford taking dialogical moves that are more or less arbitrary. A guy who goes around saying "claiming land for yourself and enforcing your claim is justice; Nozick is intelligent and his entitlement theory of justice vindicates my claim" will leave exact impressions on exact types of people, who will in turn respond in ways that are characteristic of themselves. Every branch of the dialogue tree will leave an audience with an impression of who is right, and some audiences have measurably better calibration.
Just because no one can draw perfect triangles doesn't mean it's nonsense to talk about such things.
In the Sequences, Yudkowsky has remarked over and over that it is futile to protest that you acted with propriety if you do not achieve the correct answer; read the Twelfth Virtue.
No; pointless for me to complain, to be clear.
The Principle of Nameless Heartsmarts: It is pointless to complain that I acted with propriety if in the end I was too dense to heed a relevant consideration.
You can't say values "aren't objective" without some semantic sense of objectivity that they are failing to fulfill.
If you can communicate such a sense to me, I can give you values to match. That doesn't mean your sense of objectivity will have been perfect and unarbitrary; perhaps I will want to reconcile with you about our different notions of objectivity.
Still, I'm damn well going to try to be objectively good.
It just so happens that my values connote all of your values, minus the part about being culturally local; funny how that works.
If you explicitly tell me that your terminal values require culturally local connotations, then I can infer you would have been equally happy with different values had you been born in a different time or place. I would like to think that my conscience is like those of Sejong the Great and Benjamin Lay: relatively less dependent on my culture's sticks and carrots.
The dictionary defines arbitrary as:
based on random choice or personal whim, rather than any reason or system
The more considerate and reasoned your choice, the less random it is. If the truth is that your way of being considerate and systematic isn't as good as it could have been, that truth is systematic and not magical. The reason for the non-maximal goodness of your policy is a reason you did not consider. The less considerate, the more arbitrary.
There is no real reason to choose either the left or the right side of the road for driving, but it's very useful to choose one of them.
Actually there are real reasons to choose left or right when designing your policy; you can appeal to human psychology; human psychology does not treat left and right exactly the same.
If one person says "I don't really need that many error codes; I don't want to follow arbitrary choices" and sends 44 instead of 404, this creates a mess for everyone who expects the standard to be followed.
If the mess created for everyone else truly outweighs the goodness of choosing 44, then it is arbitrary to prefer 44. You cannot make true arbitrariness truly strategic just by calling it so; there are facts of the matter besides your stereotypes. People using the word "arbitrary" to refer to something that is based on greater consideration quality are wrong by your dictionary definition and the true definition as well.
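To make the coordination cost concrete, here is a minimal sketch (my own illustration; the handler and its messages are hypothetical, not anything from the standard itself) of a client written against the shared convention:

```python
# Toy client written against the HTTP standard: every branch assumes
# the standard is followed, so a privately "better" code like 44 falls
# through to the least helpful branch.

def handle_response(status_code: int) -> str:
    if status_code == 200:
        return "ok"
    if status_code == 404:
        return "resource missing; show a not-found page"
    if 500 <= status_code <= 599:
        return "server error; retry later"
    return "unrecognized code; fail opaquely"

print(handle_response(404))  # resource missing; show a not-found page
print(handle_response(44))   # unrecognized code; fail opaquely
```

Whether 44 would have been a better choice in the abstract never enters into it; the mess comes from every deployed client having already priced in 404.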
You are wrong in your conception of arbitrariness as being all-or-nothing; there are varying degrees, just as there are varying degrees of efficiency between chess players. A chess player, Bob, half as efficient as Kasparov, makes a lower-quality sum of considerations; not following Kasparov's advice is arbitrary unless Bob can know somehow that he made better considerations in this case;
maybe Bob studied Kasparov's biases carefully by attending to the common themes of his blunders, and the advice he's receiving for this exact move looks a lot like a case where Kasparov would blunder. Perhaps in such a case Bob will be wrong and his disobedience will be arbitrary on net, but the disobedience in that case will be a lot less arbitrary than all his other opportunities to disobey Kasparov.
A policy that could be better — could be more good — is arbitrarily bad. In fact the phrase "arbitrarily bad" is redundant; you can just say "arbitrary."
It is better to be predictably good than surprisingly bad, and it is better to be surprisingly good than predictably bad; that much will be obvious to everyone.
I think it is better to be surprisingly good than predictably good, and it is better to be predictably bad than surprisingly bad.
EDIT: wait, I'm not sure that's right even by deontology's standards; as a general categorical imperative, if you can predict something will be bad, you should do something surprisingly good instead, even if the predictability of the badness supposedly makes it easier for others to handle. No amount of predictable badness is easier for others to handle than surprising goodness.
EDIT EDIT: I find the implication that we can only choose between predictable badness and surprising badness to be very rarely true, but when it is true then perhaps we should choose to be predictable. Inevitably, people with more intelligence will keep conflicting with people with less intelligence about this; less intelligent people will keep seeing situations as choices between predictable badness and surprising badness, and more intelligent people will keep seeing situations as choices between predictable badness and surprising goodness.
Focusing on predictability is a strategy for people who are trying to minimize their expectedly inevitable badness. Focusing on goodness is a strategy for people who are trying to secure their expectedly inevitable weirdness.
I don't yet have any opinions about the arbitrariness of those rules. It is possible that I would disagree with you about the arbitrariness if I was more familiar.
Still, you claim that those rules are arbitrary and then defend them; what on Earth is the point of that? If you know they are arbitrary then you must know there are, in principle, less arbitrary policies available. Either you have a specific policy that you know is less arbitrary, in which case people should coordinate around that policy instead as a matter of objective fact, or you don't know a specific less arbitrary policy, and in that case maybe you want people with better Strategic Goodness about those topics to come up with a better policy for you that people should coordinate around instead.
You can complain about the inconvenience of improving, sure. But the improvement will be highly convenient for some other people. There's only so long you can complain about the inconvenience of improving before you're a cost-benefit-dishonest asshole and also people start noticing that fact about you.
Either 'fallacious' is not the true problem or it is the true problem but the stereotypes about what is fallacious do not align with reality: A Unifying Theory in Defense of Logical Fallacies
People defend normal rules by saying they're "not arbitrary." But if they were arbitrariness minimizers the rules would certainly be different. Why should I tolerate an arbitrary level of arbitrariness when I can have minimal instead?
Your policy's non-maximal arbitrariness is not an excuse for its remaining arbitrariness.
I do not suggest the absence of a policy if such an absence would be more arbitrary than the existing policy. All I want is a minimally arbitrary policy; that often implies replacing existing rules rather than simply doing away with them. Sometimes it does mean doing away with them.
If someone said "you'll never persuade people like that" to me I'd probably just ask them what's arbitrary about my position. If it's arbitrary then they may have a point. If it's not arbitrary then people will in fact be persuaded.
When I try to do virtue ethics, I find that all my virtues turn to swiss cheese after a day’s worth of exception handling.
"Put simply: inconsistency between words and actions is no big deal. Why should your best estimate about good strategies be anchored to what you're already doing? The anti-hypocrisy norm seems to implicitly assume we're already perfect; it leaves no room for people who are in the process of trying to improve."
— Abram Demski, Hufflepuff Cynicism on Hypocrisy
"With 'unlimited power' you have no need to crush your enemies. You have no moral defense if you treat your enemies with less than the utmost consideration.
With 'unlimited power' you cannot plead the necessity of monitoring or restraining others so that they do not rebel against you. If you do such a thing, you are simply a tyrant who enjoys power, and not a defender of the people.
Unlimited power removes a lot of moral defenses, really. You can't say 'But I had to.' You can't say 'Well, I wanted to help, but I couldn't.' The only excuse for not helping is if you shouldn't, which is harder to establish.
You cannot take refuge in the necessity of anything - that is the meaning of unlimited power."
— Eliezer Yudkowsky, Not Taking Over the World
You appreciate my essay (and feel seen), but nevertheless you believe I was being deliberately deceitful and misleading?
I just finished saying that your honest and good-faith participation was not to be punished; I mean it. You can be misleading out of innocent beginner-level familiarity; it need not be deliberate. I was only upset that you were misleading about the general LessWrong philosophy's stance on emotion; it is a common misrepresentation people make. I am not commenting on the misleadingness of anything else.
My only (personal, individual) conditions for your emotional expression:
- Keep in mind to craft the conversation so that both of us walk away feeling more benefitted than malefitted that it happened, and keep in mind that I want the same.
- Keep in mind that making relevant considerations not made before, and becoming more familiar with each other's considerations, are my fundamental units of progress.
I accept everything abiding by those considerations, even insults. I am capable of terrible things; to reject all insults under all circumstances reflects overconfidence in one's own sensitivity to relevance.
"Trying very hard not to be pattern-matched to a Straw Vulcan" does not make for correct emotional reasoning.
Perhaps, but you implied there was a norm against talking about feelings here; there is no such norm! Well, I expect not, at least; maybe we are habitually shy about looking irrationally emotional even if we have internalized the proper philosophical relationship with emotion. Still, it is clear from your remark that you do not have experience with the great multitude of occasions on which this common misconception about LessWrong rationalists has been corrected.
Then it's a good thing that we are in a community that values truth over social niceness, isn't it?
I find it doubtful that you spoke truth, and I find it doubtful that you were non-misleading. Still, your honest and good-faith participation in the community is indeed not to be punished; it was only a microaggression. I do not care for activist sense generally; it is just that in this case the opportunity for a compelling comparison was tempting.
I think this community generally values truth over social niceness, yes. Or at least that's what we tell ourselves and can be held accountable to, which is not an irrelevant improvement compared to the outside population.
As for myself I do not value truth over niceness, to be frank. I recognize downvotes as the fair price for saying such a thing. "Social niceness" is irrelevant to me if it is not also real niceness. Without truth you will be misled (though you can be misled even with some truth). If you mislead others, that is not nice. Truths which seemed irrelevant can turn out to be relevant. So the nice thing is always to tell the non-misleading truth, save for extreme edge cases.
But we aren't supposed to talk about feelings here, are we?
ZT5, my friend. That's not how this place works at all. You are playing around with groundless stereotypes. Activist sense (somewhat alike and unlike common sense) would say you have committed a microaggression. :)
Anyways, I appreciated your essay for a number of reasons, but this paragraph in particular makes me feel very seen:
Rational reasoning is based on the idea of local validity. But your thoughts aren't locally valid. They are only approximately locally valid. Because you can't tell the difference.
You can't build a computer if each calculation it does is only 90% correct. If you are doing reasoning in sequential steps, each step better be 100% correct, or very, very close to that. Otherwise, after even a 100 reasoning steps (or even 10 steps), the answer you get will be nowhere near the correct answer.
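For what it's worth, the arithmetic behind that claim checks out; here is a quick check (my own illustration, not the essay's):

```python
# Probability that a chain of reasoning steps is end-to-end correct,
# assuming each step is independently 90% reliable.
for steps in (1, 10, 100):
    print(steps, 0.9 ** steps)
# 1 0.9
# 10 ~0.349
# 100 ~0.0000266
```

Ten steps already drop you to about a one-in-three chance of an uncorrupted conclusion, and a hundred steps leave essentially nothing.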
I don't disagree with the main thrust of your comment, but,
I just wanna point out that 'fallacious' is often a midwit objection, and either 'fallacious' is not the true problem or it is the true problem but the stereotypes about what is fallacious do not align with reality: A Unifying Theory in Defense of Logical Fallacies
On that note, I'd love to get more feedback on this shortform of mine, which I feel is very underrated and full of great potential:
https://www.lesswrong.com/posts/MveJKzvogJBQYaR7C/lvsn-s-shortform?commentId=e2TtdTbj5zbaGkE5c
I think your comment should try to track what's actually going on, not what you want to be going on.
Well the latter is obviously more arbitrary (and less strategic) than the former; you do need a non-misleading map to behave strategically within the territory, and the world does not get the way humans want it to be by convincing each other that it is already that way, except for some rare self-fulfilling prophecies such as your group's collective belief in the ability to correct each other.
And the people who take the time to comment have very likely put more thought into their vote than the median, so even if they represented their reasons accurately, it'd still be a distorted picture.
A distorted picture of why it was downvoted? But if the karma is determined arbitrarily, then it is of questionable value apart from the subset of voters who do respond.
If people babble about why they downvoted, the babble will usually be related in some way even if it is imperfect. Also, your babble should be aligned with your sense of strategy rather than being arbitrary.
Actually I think we should be called LessWrong for a reason.
I'm always refreshing this page, where I see all shortforms:
https://www.lesswrong.com/allPosts?sortedBy=new&karmaThreshold=-1000
If things that are true feel better to believe, explain why people who believe that an Abrahamic God exists explain their belief by saying it benefits their happiness, even though God does not exist. If your theory were true, people would be happier believing in the absence of an Abrahamic God, and they are not happier.
You write really long paragraphs. My sense of style is to keep every paragraph at 1200 characters or fewer, and the mean paragraph length no larger than 840 characters after excluding sub-160-character paragraphs from the averaged set. I am sorry that I am not good enough to read your text in its current form; I hope your post reaches people who are.
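If it helps, my rule is mechanical enough to check automatically; a minimal sketch (my own formalization, with blank-line splitting as an assumed definition of paragraph boundaries):

```python
# Checks the stated style rule: no paragraph over 1200 characters, and
# the mean length of paragraphs of at least 160 characters is at most 840.

def passes_style_rule(text: str) -> bool:
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    if any(len(p) > 1200 for p in paragraphs):
        return False
    counted = [len(p) for p in paragraphs if len(p) >= 160]
    return not counted or sum(counted) / len(counted) <= 840
```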
Your group's collective belief or disbelief in correction is a self-fulfilling prophecy.
I will withdraw my downvote when you convince me this was not posted in bad faith.
It is possible that you wanted to say whatever it would take to make the audience reconsider previously unquestioned assumptions so that they would not be misled, which is an impulse I admire. After all, it is pointless to complain that you acted with propriety if in the end you are misled. I just don't think the implicature here is actually leading (the opposite of misleading).
When you say "all else should stay firm belief" do you mean "all else should be regarded as belief"? Also, was the word 'firm' in 'firm belief' playing any role there or can I just get rid of it?
I think all propositions should be subject to tests whether they are regarded as knowledge or belief.