Make your mind flexible. Achieve & maintain a full mental range of motion. Don't get "stiff", and view mental inflexibility as a risk to your mental health.
There's a fun (or at least "fun") exercise in which I regularly engage at my heavily right-wing, ex-military workplace: I try to agree with the guys who are in knee-jerk agreement with Fox News. I find this helps immensely with mental flexibility, as it forces me to actually reason from a foreign point of view. For example: when my coworkers are vociferously agreeing that a wall should be built along the border with Canada, I try to enter the discussion from the Rationalist perspective, looking for any objective benefits a Canadian border wall might have.
This has the double benefit of developing mental flexibility AND making me seem a potential political/philosophical ally to the coworkers in question. That way, when I develop the Rationalist perspective far enough to expose the deep, blatant flaws in such a measure, my input is actually considered instead of being immediately shouted down.
Occasionally, a coworker even abandons their support for the measure in question... which leads me to believe I'm on the right path.
"Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon,"
Of late, during my discussions with others about rational politics and eudaimonia, I've encountered a strangely large proportion of people (particularly the religious) asking me - with no irony - "What would you even DO with immortality?" My favored response: "Anything. And everything. In that order." LessWrong and HP:MoR have played no small part in that answer, and in much of the further discussion that generally ensues.
So... thanks, everyone!
Through great meringues, great science. :D
DAMN. IT.
"There might be - if you were just picking the simplest rules you could manage - a physical constant which related the metric of relatedness (space) to the metric of determination (time) and so enforced a simple continuous analogue of local causality... ...in our universe, we call it c, the speed of light."
I am now starting to REALLY lament my lack of formal education, because I JUST NOW managed to grasp why the whole "speed of light" thing makes sense. Stupid poverty, ruining my fun. :D
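For anyone else who was stuck where I was: the standard way to write down the relation the quote gestures at is the spacetime interval (my gloss, not the article's), in which c is exactly the conversion factor that puts the time part of the metric into the same units as the space part:

$$ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2$$

One event can causally influence another only if it can be reached without exceeding c - which is, as I read it, the "simple continuous analogue of local causality" part.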
"The axioms aren't things you're arbitrarily making up, or assuming for convenience-of-proof, about some pre-existent thing called numbers. You need axioms to pin down a mathematical universe before you can talk about it in the first place. The axioms are pinning down what the heck this 'NUM-burz' sound means in the first place - that your mouth is talking about 0, 1, 2, 3, and so on."
Ok NOW I finally get the whole Peano arithmetic thing. ...Took me long enough. Thanks kindly, unusually-fast-thinking mathematician!
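For fellow latecomers, the axioms doing the "pinning down" are the Peano axioms, which (in my own summary of the usual presentation, so treat accordingly) run:

$$\begin{aligned}
&1.\ 0 \text{ is a number.} \\
&2.\ \text{If } n \text{ is a number, so is its successor } S(n). \\
&3.\ S(n) \neq 0 \text{ for all } n. \\
&4.\ S(m) = S(n) \implies m = n. \\
&5.\ \big(P(0) \land \forall n\,(P(n) \implies P(S(n)))\big) \implies \forall n\,P(n).
\end{aligned}$$

As I understand it, the induction schema (5) is what forces the structure to contain nothing but 0, S(0), S(S(0)), and so on - no junk elements that merely behave number-ishly.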
"It should be another matter if someone seems interested in the process, better yet the math, and has some non-zero grasp of it, and are just coming to different conclusions than the local consensus."
I anticipate that helping such people gain a better grasp of the process may well be the best possible demonstration that you care about the process itself. At minimum, providing rationalist adjustments to people's conclusions helps ME feel as though I have regard for the process, even if I'm currently still struggling to implement the process rigorously when deriving conclusions of my own.
"Remember—boredom is the enemy, not some abstract 'failure.'"
Boredom is the mind-killer. Boredom is the little-death that brings total obliteration. I will face my boredom. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the boredom has gone there will be... all kinds of interesting shit, actually. Which I might never have noticed... had boredom not driven me to look.
"The way to imagine how a truly unsympathetic mind sees a human, is to imagine yourself as a useful machine with levers on it."
Or imagine how you feel about your office computer. Not your own personal computer, which you get to use and towards which you may indeed have some projected affection. Think of the shitty company-bought computer you have to deal with on a daily basis, else you get fired. How much sympathy do you have for THAT machine? That's right: NONE AT ALL. "That damned thing CAUSES more problems than it SOLVES!"
I finally admitted to myself that I exhibit all the signs and underpinning patterns of thought associated with Impostor Syndrome, and asked an actual human being (!!!) for help. My hope is not that I will thus have more things about which to brag, but that I will feel something other than guilt over my successes. Fun times!
@CronoDAS - I used to play a very long time ago, but attempting to keep up with the expansions became too expensive, so I let the hobby lapse. This decision was made when Ice Age came out, so... there's your timeline. However, I did manage to acquire the old 2004 "Shandalar" PC version, which has been delightful both tactically and strategically (the overland game - defeating NPCs and ganking their cards - may be even more enjoyable to me than the card game itself). While I haven't tried the more recent multiplayer video game version, I'd definitely be amenable. So let me know. I can be reached at DarianSentient@gmail.com if you prefer, or if anyone else reading this would like to reach out, as well.
I just finished moving to the Bay Area, from a house right down the street from Focus On The Family's world headquarters. ...Bit of a change.
Somewhere out in mind design space, there's a mind with any possible prior; but that doesn't mean that you'll say, "All priors are created equal."
The corrected phrase may be: "All unentangled priors are created equal."
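A toy illustration of the "created equal" part (my own sketch in Python, not anything from the post): two agents with different non-dogmatic priors over a coin's bias, fed the same evidence, end up in essentially the same place. Any inequality among priors has to come from entanglement with the world, not from the priors considered in isolation:

```python
# Two different Beta priors over a coin's bias, updated on the same flips.
# Neither prior is "entangled" with the data-generating process, and after
# enough shared evidence their posterior means converge.
import random

random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(1000)]
heads = sum(flips)
tails = len(flips) - heads

for a, b in [(1, 1), (10, 2)]:  # uniform prior vs. a heads-favoring prior
    # Conjugate Beta-Bernoulli update: posterior is Beta(a + heads, b + tails).
    post_mean = (a + heads) / (a + b + heads + tails)
    print(f"Beta({a},{b}) prior -> posterior mean {post_mean:.3f}")
```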
"No, I did not go through the traditional apprenticeship. But when I look back, and see what Eliezer18 did wrong, I see plenty of modern scientists making the same mistakes. I cannot detect any sign that they were better warned than myself."
It seems like a viable means of propagating education about such mistakes - or the mistakes of aspiring rationalists in general - would be to set up (relatively) straightforward scientific experiments whose designs purposefully incorporate a given mistake, then let students perform the experiment unsuccessfully. The postmortem for each class/lab would review what went wrong, what "wrong" looked like, why things went wrong, and so forth. Sort of a "no, seriously, learn from the past" symposium.
Do any of you know of any such existing educational structures in the Bay Area?
I find that the realization of consilience can be "as" good as original discovery; for me, the discovery that an idea about the world - even one posited centuries ago - comprehensively makes sense in the context of everything else known about reality is, itself, an original discovery.
It's just one that's unique to you or me.
"... people seem to get a tremendous emotional kick out of not knowing something. " Could be simple schadenfreude: asserting that "no one" knows a thing, even those demonstrably more intelligent than yourself, has the emotional effect of knocking them down into the same mud in which you already believe yourself to be mired. Not productive, but good solace for those unwilling to be productive.
My favorite part, at which there was actual LOLing:
"•[Imaginary Model Alicorn] acquired a certain level of status (respect for her mind-hacking skills and the approval that comes with having an approved-of "sensible" romantic orientation) within a relevant subculture. She got to write this post to claim said status publicly, and accumulate delicious karma. And she got to make this meta bullet point."
"...Although, do please make the check out to 'Cash'."
"Could I regenerate this knowledge if it were somehow deleted from my mind?"
Epistemologically, that's my biggest problem with religion-as-morality, along with using anything else that qualifies as "fiction" as a primary source of philosophy. One of my early heuristic tests to determine whether a given religious individual is within reach of reason is to ask them how they think they'd be able to recreate their religion if they'd never received education/indoctrination in that religion (it makes a nice lead-in to "do people who've never heard of your religion go to hell?" as well). The possibles will at least TRY to imply that gods are directly inferable from reality (though Intelligent Design is not a positive step, it at least shows they think reality is real); the lost causes give a supernatural solution ("Insert-God-Here wouldn't allow that to happen! Or if He did, He'd just make more holy books!").
If such a person's justification for morality is subjective and they just don't care that no part of it is even conceivably objective... what does that say for the relationship of any of their moral conclusions to reality?
"I wish I lived in an era where I could just tell my readers they have to thoroughly research something, without giving insult."
Is that not what this entire site is accomplishing?
Many big-L Libertarians I've met - along with those who consider themselves trench-fighters for Ayn Rand-ian Objectivism - seem to want to conflate "selfishness" with "enlightened self-interest" for the positive connotations of the latter... yet their rationale for various big-L proposals (such as "let's turn over national security to corporations, who will certainly never abuse the power to force decisions upon people") tends to be of the extremely rosy, happy-death-spiral, declare-anything-that-doesn't-fit-an-"externality" variety. That seems somewhat removed from any meaning of "enlightened" that approaches sensibility; and that's coming from a mild, little-l libertarian whose framing is "a free society means you need a reason to make things illegal".
Ultimately, I can understand the "It's So Simple! (tm)" appeal of claiming that selfishness itself is good as an absolute, but that claim only appears to hold true - at either a societal OR individual level - if the scoreboard is measuring relative altruistic effects. A benefit to oneself that derives from (having helped propagate) a mutually self-interested society only qualifies as a benefit relative to 1) a society of self-sacrificial lemmings (which is a bit of a straw man); or 2) no society at all, where there really ARE no externalities and self-interest can be truly self-referential. ...I feel I may not be explaining this clearly, so I'll simply request suggestions and wrap up this comment.
It seems that, instead of trumpeting "selfishness!" as a counterintuitive moral panacea, all that's really needed for altruism to symbiotically cohabitate with "selfishness" is to use the phrase "rational self-regard" instead, since it doesn't require you to engage in Ethical-Egoism-esque displays of unnecessary dickishness towards one's fellow man. ...And I feel I may have to try to write an article on that subject if one does not yet exist.
Ha ha, this comment shows up on the Recent Comments feed at right as:
" Racism and sexism are pretty good
by SeanMCoincon on The uniquely awful example of theism | 0 points "
THAT certainly couldn't be misconstrued against me in any way! I think I'll run for Congress.
"And what would be the analogy to collapsing to form a Bose-Einstein condensate?"
...All of them moving into the same compound and acquiring an arsenal seems about right, particularly when you consider the increased chance of violent explosion.
"I know I can never be perfect, but that's certainly not going to stop me from trying." --Sean Coincon
:D
This immediately brings to mind Mill's old adage about it being better to be a human dissatisfied than a pig satisfied (and better to be Socrates dissatisfied than a fool satisfied). I'd imagine, from the pig's point of view, that the loftiest height of piggy happiness is not terribly dissimilar from the baseline level of piggy contentment, so equating "happiness" to "contentment" would not be an inexcusable breach of piggy logic. Indeed, we humans pretty much have to infer this state of affairs when considering animal wellbeing ("appearance of sociobiological contentment approximates happiness"), as we don't yet possess any means of engaging animals in philosophical conversation on the subject.
Yet it seems that those who would have us believe that "blissful ignorance" is a good thing as an absolute are needlessly confusing contentment with happiness. Happiness registers more as a positive, aspirational value within the context of the human experience range; contentment seems more a negative, absence-of-dissatisfaction value that indicates only that things aren't going poorly. Doublethink and willful ignorance do not seem able to provide qualia that positively contribute to happiness; they can only obscure knowledge of things that are actually going poorly, thus creating a false sense of contentment.
That's my general counterpoint whenever people speak positively of the "happiness" created by things like religion and opiates. Nothing is being added; your knowledge of reality is being obscured. It's difficult to see how that approach could be considered a mature option.
It may be useful to the cause of avoiding one's own potential happy death spirals (HDSs) to actively attempt to subvert the "my ideas are my children" trope. Perceived ownership of an idea or mental tool may be a prime contributor to HDS thinkery, giving rise to the kind of protectiveness we humans tend to provide our offspring whether or not they deserve it. The fact that our child started the fight with another child doesn't prevent us from stepping in on OUR child's side; the fact that our child is demonstrably average doesn't prevent us from telling complete strangers how intelligent, sweet, talented, beautiful, etc. OUR child is, was, and shall always be, forever and ever, amen.
So too it seems to be with the ideas we feel we own, particularly the ones we ourselves have generated. This impulse is entirely understandable within the context of a species whose primary survival trait is intelligence, with opposable thumbs taking a distant second. Yet to feel ownership of an idea to the point that we feel protective of it seems rationally contraindicated: an idea - anyone's - should only be valued insofar as it can stand on its own in the uncaring realm of reality... in a "making beliefs pay rent" kind of way.
So perhaps a good solution to the "How?" of resisting HDSs would be to try to view ideas and mental tools as being both fundamentally borrowed and potentially disposable upon breaking. It's a nice way of avoiding even the temptation to indulge in ad hominem, as well.
Agreed on all points; I've found it interesting in my conversations with anti-evolutionists that even doing the work of dispelling the straw-man arguments - "monkeys turning into humans", "why are there still monkeys", etc. - doesn't seem to change their conception of the evolution argument; they STILL think all the science and reason in the world can be summarized as "monkeys turned into humans". Their degree of investment in opposing that argument may be too great for additional rationality to crack. When/if that becomes apparent, I've found the more-effective-yet-less-satisfying counter to be something along the lines of: "America grew out of England, yet England's still a country." Not the most accurate metaphor, granted... but it seems to back their confidence level down from outright absoluteness.
Plus, it's kinda fun to see their faces turn red. Whoever coined "Sticks and stones can break my bones, but words can never hurt me" must not have been a rationalist amongst children.
Racism and sexism are pretty good candidates as well. Prejudice in general would be even more inclusive; one could even consider religion to be a special case of prejudice against reality.
"What on Earth makes you think monkeys can change into humans?"
It seems - based upon personal experience - that the difference between the rational and the irrational is that the rational at least attempts to present a cogent answer to such questions in a way that actually answers the question; the irrational just gets mad at you for asking.
The most useful skill I've developed has been in meeting immaturity (both in rationale and delivery) with maturity (ditto). I work in a heavily right-wing workplace that refuses to allow anything but Fox News on anything resembling a television. This is my training environment. Even in the presence of highly irrational and emotionally charged convictions, I've found that the ability to maintain an uninvested calm and slowly help my partner make their argument better (through gradual consilience with reality) can result in ACTUALLY CHANGED MINDS. The first step seems, invariably, to be pointing out those counterfactuals that back them away from absolute confidence; when presented as potential improvements ("You'd probably see greater success at decreasing the actual number of abortions if you could find ways to enable people to only purposefully conceive a child.") even a position they once reviled can seem outright palatable. The key appears to be presentation of oneself as a potential ally, so as to avoid the "I must engage on all fronts" mentality that prevents meaningful engagement at all.
My concern is less with the degree to which I wear the rationality mantle relative to others (which is low to the point of insignificance, though often depressing) and more with ensuring that the process I use to approach rationality is the best one available. To that end, I'm finding that lurking on LessWrong is a pretty effective process test, particularly since I tend to come back to articles I've previously read to see what further understanding I can extract in light of later articles. SCORING such a test is a squishier concept, though correlation of my (defeasibly) rational conclusions with the evidence of reality seems an effective measure... but I've now run into a concern that my own self-assessment of confirmation-bias elimination may not be satisfactorily objective. The obvious solution to THAT problem would be to start publishing process/conclusion articles to LessWrong. I think I may have to start doing so.
Oddly, this problem seems (to my philosopher/engineer mind) to have an exceedingly non-complex solution, and it depends not upon the chooser but upon Omega.
Here's the payout schema assumed by the two-boxer, for reference:
1) Both boxes predicted, both boxes picked: +$1,000
2) Both boxes predicted, only B picked: $0
3) Only B predicted, both boxes picked: +$1,001,000
4) Only B predicted, only B picked: +$1,000,000
Omega, being an unknowable superintelligence, qualifies as a force of nature from our current level of human understanding. Since Omega's ways are inscrutable, we can only evaluate Omega based upon what we know of him so far: he's 100 for 100 on predicting the predilections of people. While I'd prefer to have a much larger success base before drawing inference, it seems that we can establish a defeasible Law of Omega: whatever decision Omega has predicted is virtually certain to be correct.
So while the two-boxer would hold that choosing both boxes gives them either $1,000 or $1,001,000, this is clearly IRRATIONAL: the (defeasible) Law of Omega outright eliminates outcomes 2 and 3 above, which means that (until such time as new data forces a revision of the Law of Omega) the two-boxer's anticipated payoff of $1,001,000 DOES NOT EXIST. The only real choice is between outcome 1 (the two-boxer gets $1,000) and outcome 4 (the one-boxer gets $1,000,000). At that point, outcome 4 is simply the better payoff... AND the rational thing to do.
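To make the elimination explicit, here's a minimal expected-value sketch (Python; treating Omega's track record as an accuracy parameter p is my own framing, not part of the original problem):

```python
# Expected payoffs in Newcomb's problem, with Omega's predictive accuracy
# modeled as a probability p. p = 1.0 matches the "100 for 100" record.

def expected_payoff(choice: str, p: float) -> float:
    if choice == "two-box":
        # Probability p: Omega correctly predicted two-boxing, so box B is empty.
        # Probability 1 - p: Omega mispredicted, so box B is full.
        return p * 1_000 + (1 - p) * 1_001_000
    if choice == "one-box":
        # Probability p: Omega correctly predicted one-boxing, so box B is full.
        return p * 1_000_000 + (1 - p) * 0
    raise ValueError(f"unknown choice: {choice}")

for p in (1.0, 0.99, 0.9):
    print(p, expected_payoff("one-box", p), expected_payoff("two-box", p))
```

At p = 1.0, outcomes 2 and 3 vanish exactly; and one-boxing keeps the higher expected payoff until p drops below roughly 0.5005.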
Does that make sense? Or am I placing unfounded faith in Omega?