The Jordan Peterson Mask
post by Jacob Falkovich (Jacobian) · 2018-03-03T19:49:20.813Z · LW · GW · 154 comments
This is a cross-post from Putanumonit.com
It seems that most people haven’t had much trouble making up their minds about Jordan Peterson.
The psycho-philosophizing YouTube prophet rose to prominence for refusing to acquiesce to Bill C-16, a Canadian law mandating the use of preferred pronouns for transgender people. The sort of liberal who thinks that this law is a great idea slapped the alt-right transphobe label on Peterson and has been tweeting about how no one should listen to Peterson about anything. The sort of conservative who thinks that C-16 is the end of Western Civilization hailed Peterson as a heroic anti-PC crusader and has been breathlessly retweeting everything he says, with the implied #BooOutgroup.
As the sort of rationalist who googles laws before reacting to them, I assured myself that Peterson got the legal facts wrong: no one is actually getting dragged to jail for refusing to say zir. I’m going to use people’s preferred pronouns regardless, but I’m happy I get to keep doing it in the name of libertarianism and not-being-a-dick, rather than because of state coercion.
With that, I decided to ignore Peterson and go back to my media diet of rationalist blogs, Sam Harris, and EconTalk.
But Jordan Peterson turned out to be a very difficult man to ignore. He showed up on Sam Harris’ podcast, and on EconTalk, and on Joe Rogan and Art of Manliness and James Altucher. He wrote 12 Rules for Life: An Antidote to Chaos, a self-help listicle book inspired by Jesus, Nietzsche, Jung, and Dostoyevsky. [Let's see if I can tie all 12 rules to this essay] And he got rationalists talking about him, which I’ve done for several hours now. As a community, we haven’t quite figured out what to make of him.
Peterson is a social conservative, a Christian who reads truth in the Bible and claims that atheists don’t exist, and a man who sees existence at every level as a conflict between good and evil. The majority of the rationalist community (present company included) are socially liberal and trans-friendly, confident about our atheism, and mistake theorists who see bad equilibria more often than intentional malevolence.
But the most salient aspect of Peterson isn't his conservatism, or his Christianity, or his Manichaeism. It's his commitment, above all else, to seek the truth and to speak it. [Rule 8: Tell the truth – or, at least, don't lie] Rationalists can forgive a lot of an honest man, and Peterson shoots straighter than a laser gun.
Peterson loves to talk about heroic narratives, and his own life in the past few months reads like a movie plot, albeit more Kung Fu Panda than Passion of the Christ. Peterson spent decades assembling a worldview that integrates everything from neurology to Deuteronomy, one that's complex, self-consistent, and close enough to the truth to withstand collision with reality. It's also light-years and meta-levels away from the sort of simplistic frameworks offered by the mass media on either the right or the left.
When the C-16 controversy broke, said media assumed that Peterson would meekly play out the role of outgroup strawman, and were utterly steamrolled. A lot of the discussion about the linked interview has to do with rhetoric and argument, but to me, it showcased something else. A coherent worldview like that is a powerful and beautiful weapon in the hands of the person who is committed to it.
But it wasn't the charismatic performances that convinced me of Peterson's honesty; it was clips like this one, where he was asked about gay marriage.
Most people are for or against gay marriage based on their object level feeling about gays, and their tribal affiliation. The blue tribe supports gay marriage, opposes first-cousin marriage, and thinks that the government should force a cake shop to bake a gay wedding cake because homophobia is bad. The red tribe merely flips the sign on all of those.
Some people go a meta-level up: I support gay marriage, support cousin marriage, and support bakers getting to decide themselves which cakes they bake for reasons of personal freedom [Rule 11: don’t bother children when they are skateboarding], and the ready availability of both genetic testing clinics and gay-friendly bakeries.
But to Peterson, everything is a super-meta-level tradeoff that has the power to send all of Western Civilization down the path to heaven or hell:
With regards to gay marriage specifically, that’s a really tough one for me. I can imagine… [long pause] I can’t do anything other than speak platitudes about it I suppose, unfortunately.
If the marital vows are taken seriously, then it seems to me it’s a means by which gay people can be integrated more thoroughly into standard society, and that’s probably a good thing. And maybe that would decrease promiscuity which is a public health problem, although obviously that’s not limited to gay people. Gay men tend to be more promiscuous than average, probably because there are no women to bind them with regards to their sexual activity. […]
I’m in favor of extending the bounds of traditional relationships to people who wouldn’t be involved in a traditional long-term relationship otherwise, but I’m concerned about the undermining of traditional modes of being including marriage [which has always been about] raising children in a stable and optimal environment.
Few people besides Peterson himself can even fully understand his argument, let alone endorse it. And yet he can’t help himself from actually trying to figure out what his worldview says about gay marriage, and from saying it with no reservations.
I think that Peterson overgeneralizes about gay men (and what about lesbians?), and he’s wrong about the impact of gay marriage on society on the object level. I’m also quite a fan of promiscuity, and I think it’s stupid to oppose a policy just because “neo-Marxists” support it.
But I don’t doubt Peterson’s integrity, which means that I could learn something from him. [Rule 9: assume that the person you are listening to might know something you don’t].
So, what can Jordan Peterson teach rationalists?
In 12 Rules, Peterson claims that eating a large, low-carb breakfast helps overcome depression and anxiety. Is this claim true?
There’s a technical sort of truth, and here “technical” is itself a synonym for “true”, that’s discoverable using the following hierarchy of methods: opinion -> observation -> case report -> experiment -> RCT -> meta-analysis -> Scott Alexander “much more than you wanted to know” article. If you ask Scott whether a low-carb breakfast reduces anxiety he’ll probably say that there isn’t a significant effect, and that’s the technical truth of the matter.
So why does Peterson believe the opposite? He’s statistically literate… for a psychologist. He references a couple of studies about the connection between insulin and stress, although I’d wager he wouldn’t lose much sleep if one of them failed to replicate. It probably also helps that Gary Taubes is really playing the part of the anti-establishment truth-crusader. Ultimately, Peterson is answering a different question: if a patient comes to your psychiatry clinic complaining about mild anxiety, should you tell them to eat bacon and eggs for breakfast?
My rationalist steelman of Peterson would say something like this: maybe the patient has leaky gut syndrome that contributes to their anxiety, and reducing gluten intake would help. If not, maybe the link between insulin and cortisol will turn out to be real and meaningful. If not, maybe having a morning routine that requires a bit of effort (it’s harder to make eggs than eat a chocolate bar, but not too hard) will bring some needed structure to the patient’s life. If not, maybe getting any advice whatsoever from a serious looking psychologist would make the patient feel that they are being listened to, and that will placebo their anxiety by itself. And if not, then no harm was done and you can try something else.
But, Peterson would add, you can’t tell the patient all of that. You won’t help them by explaining leaky guts and p-values and placebo effects. They need to believe that their lives have fallen into chaos, and making breakfast is akin to slaying the dragon-goddess Tiamat and laying the foundation for stable order that creates heaven on Earth. This is metaphorical truth.
If you’re a rationalist, you probably prefer your truths not to be so… metaphorical. But it’s a silly sort of rationalist who gets sidetracked by arguments about definitions. If you don’t like using the same word to mean different things [Rule 10: be precise in your speech], you can say “useful” or “adaptive” or “meaningful” instead of “true”. It’s important to use words well, but it’s also important to eat a good breakfast. Probably. [Rule 2: treat yourself like you would someone you are responsible for helping]
One of the most underrated recent ideas in rationality is the idea of fake frameworks. I understand it thus: if you want to understand how lasers work, you should really use quantum physics as your framework. But if you want to understand how a cocktail party works, looking at quarks won’t get you far. You can use the Hansonian framework of signaling, or the sociological framework of class and status, or the psychometric framework of introverts and extroverts, etc.
All of those frameworks are fake in the sense that introvert isn’t a basic physical entity the same way an up quark is. Those frameworks are layers of interpretation that you impose on what you directly experience, which is human-shaped figures walking around, making noises with their mouths and sipping gin & tonics. You can’t avoid imposing interpretations, so you should gather a diverse toolbox of frameworks and use them consciously even when you know they’re not 100% true.
Here’s a visual example:
Q: Which map is more true to the territory?
A: Neither. But if your goal is to meet Einstein on his way to work you use the one on the right, and if your goal is to count the trees on the golf course you use the one on the left.
By the way, there’s a decent chance that “fake frameworks” is what the post-rationalists have been trying to explain to me all along, except they were kind of rude about it. If it’s true that they had the same message, it took Valentine to get it through my skull because he’s an excellent teacher, and also someone I personally like. Liking shouldn’t matter to rationalists, but somehow it always seems to matter to humans. [Rule 5: do not let your children do anything that makes you dislike them]
That’s what Jordan Peterson is: a fake framework. He’s a mask you can put on, a mask that changes how you see the world and how you see yourself in the mirror. Putting on the Jordan Peterson mask adds two crucial elements that rationalists often struggle with: motivation and meaning.
The Secular Solstice is a celebration designed by rationalists to sing songs together and talk about meaning. [Rule 3: make friends with people who want the best for you] The first time I attended, the core theme was the story of Stonehenge. Once upon a time, humans lived in terror of the shortening of the days each autumn. But we built Stonehenge to mark the winter solstice and predict when spring would come – a first step towards conquering the cold and dark.
But how did Stonehenge get built?
First, the tribe had a Scott Alexander. Neolithic Scott listened to the shamans speak of the Sun God, and demanded to see their p-values. He patiently counted the days between the solstices of each year and drew arrows pointing to the exact direction the sun rose each day.
Finally, Scott spoke up:
Hey guys, I don’t think that the sun is a god who cares about dancing and goat sacrifice. I think it just moves around in a 365-day period, and when it rises from the direction of that tree that’s when the days start getting longer again.
And the tribe told him that it’s all much more than they wanted to know about the sun.
But Scott only gets us halfway to Stonehenge. The monument itself was built over several centuries, using 25-ton rocks that were brought to the site from 140 miles away. The people who hauled the first rock had to realize (unless subject to extreme planning fallacy) that not a single person they know, nor their children or grandchildren, would see the monument completed. Yet these people hauled the rocks anyway, and that required neolithic Peterson to inspire them.
Peterson is very popular with the sort of young people who have been told all their lives to be happy and proud of just who they are. But when you’re 19, short on money, shorter on status, and you start to realize you won’t be a billionaire rock star, you don’t see a lot to be satisfied with. Lacking anything to be proud of individually, these young people are tempted to substitute a broader group identity for their individual selves. What the identity groups mostly do is complain that the world is unfair to them; this keeps the movement going but doesn’t do much to alleviate the frustration of its members.
And then Peterson tells them to lift the heaviest rock they can and carry it. Will it ease their suffering? No. Everyone is suffering, but at least they can carve meaning out of that. And if enough people listen to that message century after century, we get Stonehenge. [Rule 7: pursue what is meaningful, not what is expedient]
A new expansion just came out for the Civilization 6 video game, and instead of playing it I’m nine hours into writing this post and barely halfway done. I hope I’m not the only one getting some meaning out of this thing.
It’s not easy to tell a story that inspires a whole tribe to move 25-ton rocks. Peterson noticed that the Bible is one story that has been doing that for a good while. Eliezer noticed it too, and he was not happy about it, so he wrote his own tribe-inspiring work of fiction. I’ve read both, cover to cover. And although I found HPMoR more fun to read, I end up quoting from the Old Testament a lot more often when I have a point to make.
“Back in the old days, saying that the local religion is a work of fiction would have gotten you burned at the stake”, Eliezer replies. Well, today quoting research on psychology gets you fired from Google, and quoting research on climate change gets you fired from the EPA. Eppur si muove.
Jews wrote down commentaries suggesting that the story of Jonah is metaphorical a millennium before Galileo was born, and yet they considered themselves the People of the Book. The Peterson mask reminds us that people don’t have to take a story literally to take it seriously.
Peterson loves to tell the story of Cain and Abel. Humans discovered sacrifice: you can give away something today to get something better tomorrow. “Tomorrow” needs a face, so we call it “God” and act out a literal sacrifice to God to hammer the point home for the kids.
But sometimes, the sacrifice doesn’t work out. You give, but you don’t get, and you are driven to resentment and rage against the system. That’s what Cain does, and the story tells us that it’s the wrong move – you should ponder instead how to make a better sacrifice next time.
When I was younger, I went to the gym twice a week for a whole year. After a year I didn’t look any sexier, I didn’t get much stronger, and I was sore a lot. So I said fuck it and stopped. Now I started going to the gym twice a week again, but I also started reading about food and exercise to finally get my sacrifice to do something. I still don’t look like someone who goes to the gym twice a week, but I can bench 20 pounds more than I could last year and I rarely get sore or injured working out. [Rule 4: compare yourself with who you were yesterday, not with who someone else is today]
Knowing that the story of Cain and Abel is made up hasn’t prevented it from inspiring me to exercise smarter.
There’s a problem: many stories that sound inspirational are full of shit. After listening to a few hours of Peterson talking about archetypes and dragons and Jesus, I wasn’t convinced that he’s not full of it either. You should only wear a mask if it leaves you wiser when you take it off and go back to facing your mundane problems.
What convinced me about Peterson is this snippet from his conversation with James Altucher (24 minutes in):
If you’re trying to help someone who’s in a rough situation, let’s say with their relationship, you ask them to start watching themselves so that you can gather some information. Let’s take a look at your relationship for a week and all you have to do is figure out when it’s working and when it’s not working. Or, when it’s working horribly and when it’s working not too bad. Just keep track of that.
“Well, my wife ignores me at the dinner table,” or “My wife ignores me when I come home.” Then we start small. How would you like your wife to greet you when you come home?
“I’d like her to stop what she’s doing and come to the door.” Well, ask her under what conditions she would be willing to do that. And let her do it badly. Do it for a week, just agree that when either of you comes home you shut off the TV and ask “how was your day?” and listen for 10 seconds, and see how that goes.
Carl Jung said “modern people can’t see God because they won’t look low enough”. It means that people underestimate the importance of small things. They’re not small. How your wife says hi to you when you come home – that’s not small, because you come home all the time. You come home three times a day, so we can do the arithmetic.
Let’s say you spend 15 minutes a day coming home, something like that. And then it’s every day, so that’s 7 days a week, so that’s 105 minutes. Let’s call it 90 minutes a week. So that’s 6 hours a month, 72 hours a year. So you basically spend two full workweeks coming home, that’s about 3% of your life.
You spend about 3% of your life coming home. Fix it! Then, fix 30 more things.
Aside from the Jung quote, that’s the most Putanumonit piece of life advice I have ever heard on a podcast, complete with unnecessary arithmetic. If Peterson can put on a Putanumonit hat and come up with something that makes deep sense to me, perhaps I could do the same with a Peterson mask.
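Since this is Putanumonit, here's a minimal sketch of that arithmetic in Python. One assumption of mine that isn't spelled out in the quote: the "3% of your life" figure only comes out right if you measure against working hours (roughly 2,000 a year), which is also what makes 72 hours into "two full workweeks".

```python
# Peterson's back-of-the-envelope "coming home" arithmetic, step by step.
minutes_per_day = 15                            # time spent "coming home" each day
exact_minutes_per_week = minutes_per_day * 7    # = 105; Peterson rounds this down
minutes_per_week = 90                           # "let's call it 90 minutes a week"

hours_per_month = minutes_per_week * 4 / 60     # = 6.0, "6 hours a month"
hours_per_year = hours_per_month * 12           # = 72.0, "72 hours a year"

workweeks = hours_per_year / 40                 # = 1.8, roughly "two full workweeks"
share_of_work_year = hours_per_year / 2000      # = 0.036, ~3.6% of ~2,000 work hours/year

print(f"{exact_minutes_per_week} min/week, rounded to {minutes_per_week}")
print(f"{hours_per_year:.0f} hours/year ≈ {workweeks:.1f} workweeks "
      f"≈ {share_of_work_year:.1%} of a work year")
```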
The rationalist project is about finding true answers to difficult questions. We have a formula that does that, and we've tracked many ways in which our minds can veer off the right path. But there are profound ways in which a person can be unready to seek the truth, ways that are hard to measure in a behavioral econ lab and assign a catchy moniker to.
I have written a lot about romance and dating in the last two years, including some mildly controversial takes. I could not have written any of them before I met my wife. Not because I didn't know the facts or the game theory, but because I wasn't emotionally ready. When I read private drafts I wrote about women from years ago, they are colored by frustration, guilt, exuberance or fear, all depending on the outcome of the last date I'd been on. Those emotions aren't exactly conducive to clarity of thought.
I think this was also the reason why Scott Aaronson wrote The Comment that led to Untitled only when he was married and had a child. Then, he could weather the resulting storm without backing down from his truth. It is hard to see something true about relationships when your own aren’t in order, let alone write something true. [Rule 6: set your house in perfect order before you criticize the world]
The flipside is: when you wear the Peterson mask, you are compelled to spread the word when you’ve found a path that leads somewhere true. There is no higher calling in Peterson’s worldview. The Kolmogorov Option becomes the Kolmogorov Mandate (and the Scott Alexander mask mostly agrees).
Let’s go back to the beginning: Peterson made noise by refusing to comply with a law that doesn’t actually do what he claims. How is that contributing to the truth?
For starters, I would have bet that Peterson was going to lose his job when the letters calling for his dismissal started rolling in, letters signed by hundreds of Peterson’s own colleagues at the University of Toronto. I would have bet wrong: the only thing that happened is that Peterson now makes several times his academic salary from his Patreon account (if you want me to start saying crazy things you can try clicking here).
This is critical: it created common public knowledge that if free speech is ever actually threatened by the government to the extent that Peterson claims, the support for free speech will be overwhelming even at universities. Speaking an unpopular truth is a coordination problem: you have to know that others will stand with you if you stand up first. [Rule 1: stand up straight with your shoulders back]
Now, more people know that there's appetite in the West for people who stand up for truth. This isn't a partisan thing; I hope that Peterson inspires people with inconvenient leftist opinions to speak up in red tribe-dominated spaces (e.g. the NFL protests).
Peterson was technically wrong, as he is on many things. But he sees the pursuit of truth as a heroic quest and he’s willing to toss some rocks around, and I think this helps the cause of truth even if one gets some technical details wrong.
Being wrong about the details is not good, but I think that rationalists are pretty decent at getting technicalities right. By using the Peterson Mask judiciously, we can achieve even more than that.
[Rule 12: pet a cat when you encounter one on the street], but don’t touch the hedgehog, they don’t like it.
154 comments
Comments sorted by top scores.
comment by moridinamael · 2018-03-04T04:28:33.800Z · LW(p) · GW(p)
I have a generally positive opinion of Peterson, but I wouldn't be adding anything to the conversation by talking about why; you already covered his good points. Instead I'll talk about why I'm leery of dubbing him a Rationalist hero.
Peterson's entire first podcast with Sam Harris was an argument over Peterson's (ab)use of the word "truth". I'm not sure if anyone walked away from that experience entirely sure what Peterson means when he says "truth".
One can assume that he means something like "metaphorical truth", that some stories contain a kind of truth that is more like "usefulness as a map" than "accurate reflection of objective reality". Sam Harris' rejoinder was along the lines that using stories as ways of discovering meaning is all well and good, but believing those stories are true leads to horrifying failure modes.
For example, if you believe some particular bit of metaphysical narrative is true, you feel compelled to act on contingent details of the story that are unrelated to the intended moral. Insert your own favorite minor bit of religious dogma that led to hundreds of years of death and suffering.
At a civilizational level, the norm of "believing that false stories are actually true is good for society" selects for simple, stupid stories and/or interpretations of stories that effectively control and dominate the population.
I've seen hints that he's written his bottom line with respect to Christianity. He's trying to prove how it's true, by constructing an elaborate framework that redefines "truth", not figure out if it's true. If it weren't for this, and the general sense that he's treading on really crucial concepts in order to make himself seem more cosmically right, then I would be more fully on board.
Replies from: Evan_Gaensbauer, alkjash, TurnTrout, dsatan
↑ comment by Evan_Gaensbauer · 2018-03-05T03:35:32.373Z · LW(p) · GW(p)
Not to bias anyone, but as anecdata a couple of my Christian friends have told me they too find it difficult to understand Peterson's framing of his (relationship with) Christianity. So the fact that, from the perspective on the other end as well, Peterson looks like he's attempting some sketchy epistemology could be telling. Maybe he's committing some kind of [golden mean fallacy](https://en.wikipedia.org/wiki/Argument_to_moderation) and advocating for widespread [cultural Christianity](https://en.wikipedia.org/wiki/Cultural_Christian).
↑ comment by alkjash · 2018-03-04T16:21:04.928Z · LW(p) · GW(p)
His concept of truth seems to be the main beef rationalists have with Peterson, and something I've also struggled with for a while.
I think this is partly solved with a healthy application of Rationalist Taboo - Peterson is a pragmatist, and AFAICT the word truth de-references as "that which it is useful to believe" for him. In practice although he adds a bunch of "metaphorical truths" under this umbrella, I have not seen him espouse any literal falsehoods as "useful to believe," so his definition is just a strict generalization of our usual notion of truth.
Of course I'm not entirely happy with his use of the word, but if you assume as I do "it is useful to believe what is literally true" (i.e. usefulness = accuracy for maps) then his definition agrees on the literal level with your usual notion of truth.
The question then is what he means by metaphorical truth, and in what sense this is (as he claims) a higher truth than literal truth. The answer is something like "metaphorical truth is extracted meta-stories from real human behavior that are more essential to the human experience than any given actual story they are extracted from." E.g. the story of Cain and Abel is more true to the human experience than the story of how you woke up, brushed your teeth, and went to work this morning. This is where the taboo needs to come in: what he means by more true is "it is more useful to learn from Cain and Abel as a story about the human experience than it is to learn from your morning routine."
I claim that this is a useful way to think about truth. For any given mythological story, I know it didn't actually happen, but I also know that whether or not it actually happened is irrelevant to my life. So the regular definition of truth "did it actually happen" is not the right generalization to this setting. Why should I believe useless things? What I really want to know is (a) what does this story imply about reality and how I should act in the world and (b) if I do act that way, does my life get better? The claim is that if believing a story predictably makes your life better, then you should overload your definition of truth and treat the story as "true," and there is no better definition of the word.
Regarding the Christian stories, his lecture series is called "The Psychological Significance of the Bible" and I don't think he endorses anything other than a metaphorical interpretation thereof. He has for example made the same type of claims about the truth of Christianity as about Daoism, old Mesopotamian myths, Rowling's Harry Potter, and various Disney movies. People probably get confused when he says things like "Christianity is more true than literal truth" without realizing he says the same things about Pinocchio and The Little Mermaid.
tl;dr: Taboo the word "truth".
Replies from: moridinamael, habryka4, cousin_it
↑ comment by moridinamael · 2018-03-05T14:50:42.806Z · LW(p) · GW(p)
I think it's pretty risky to play Rationalist taboo with what other people are saying. It's supposed to be a technique for clarifying an argument by removing a word from the discussion, preventing it from being solely an argument about definitions. I would like it if Peterson would taboo the word "truth", yeah.
I also don't think that dereferencing the pointer actually helps. I object to how he uses "truth", and I also object to the idea that Harry Potter is (dereferenced pointer)->[more psychologically useful to believe and to use as a map than discoveries about reality arrived at via empiricism]. It's uh ... it's just not. Very much not. Dangerous to believe that it is, even. Equally if not more dangerous to believe that Christianity is [more psychologically useful to believe and to use as a map than discoveries about reality arrived at via empiricism]. I might sign on to something like, certain stories from Christianity are [a productive narrative lens to try on in an effort to understand general principles of psychology, maybe, sometimes].
The claim is that if believing a story predictably makes your life better, then you should overload your definition of truth and treat the story as "true," and there is no better definition of the word.
This is indeed a hazardous application of Dark Arts to be applied judiciously and hopefully very very rarely. As a rule of thumb, if you feel like calling Harry Potter "true", you've probably gone too far, IMO.
↑ comment by habryka (habryka4) · 2018-03-04T18:15:49.557Z · LW(p) · GW(p)
I do wonder whether you would change your mind after checking the links by Gaius Leviathan IX in a comment below. A lot of those did strike me as “literal falsehoods”, and seem to go against the things you outlined here.
Replies from: alkjash
↑ comment by alkjash · 2018-03-04T19:22:41.970Z · LW(p) · GW(p)
I have previously noticed (having watched a good hundred hours of Peterson's lectures) all of these things and these seem to me to be either straight-up misinterpretation on the part of the listener (taboo your words!) or the tiny number of inevitable false positives that comes out of Peterson operating his own nonstandard cognitive strategy, which is basically UNSONG Kabbalah.
This overall argument reminds me of the kind of student who protests that "i isn't actually a number" or "a step function doesn't actually have a derivative."
↑ comment by cousin_it · 2018-03-06T10:19:14.611Z · LW(p) · GW(p)
For any given mythological story, I know it didn’t actually happen, but I also know that whether or not it actually happened is irrelevant to my life. So the regular definition of truth “did it actually happen” is not the right generalization to this setting. Why should I believe useless things? What I really want to know is (a) what does this story imply about reality and how I should act in the world and (b) if I do act that way, does my life get better?
Yeah, the benefits of literal truth are more altruistic and long-term.
↑ comment by TurnTrout · 2018-03-04T06:25:23.461Z · LW(p) · GW(p)
This is what worries me. I frankly haven't looked into Peterson too closely, but what I've heard hasn't impressed me. I found some of his quotes in OP's piece to be quite insightful, but I don't understand why he's spoken of so glowingly when he apparently espouses theist beliefs regularly. Warning signs of halo effect?
Replies from: Viliam, Evan_Gaensbauer, dsatan
↑ comment by Viliam · 2018-03-09T21:17:32.237Z · LW(p) · GW(p)
Speaking for myself, I do not agree with some of Peterson's opinions, but I like him as a person. Probably the best way to explain it is that he is the kind of "politically controversial" person I wouldn't be scared to hypothetically find out is actually living next door to me.
I find the way he uses the word "truth" really annoying. Yet, if I told him, I don't expect him to... yell some abuse at me, hit me with a bike lock, or try to get me fired from my job... just to give a few examples of recent instruments of political discourse. He would probably smile in a good mood, and then we could change the topic.
Peterson is definitely not a rationalist, but there is something very... psychologically healthy... about him. It's like when you are in a room full of farts, already getting more or less used to it, and then suddenly someone opens the window and lets the fresh air in. He can have strong opinions without being nasty as a person. What a welcome change, just when it seemed to me that the political discourse online is dominated by, uhm, a combination of assholes and insane people (and I am not channeling Korzybski now; I am using the word in its old-fashioned sense).
I'd like to somehow combine the rationality of LessWrong with the personality of Peterson. To become a rational lobster, kind of. Smart, strong, and a nice neighbor.
EDIT:
I guess I wanted to say that I am not concerned with Peterson's lack of x-rationality -- though neither do I deny it -- because I do not intend to use him as an example of x-rationality. In many aspects he talks nonsense (although probably not more than an average person). But he has other strengths I want to copy.
I see Peterson as a valid and valuable member of the "niceness and civilization" tribe, if there is such a thing. As opposed to e.g. people who happen to share my disrespect of religion and mysticism, but personality-wise are just despicable little Nazis, and I definitely wouldn't want them as neighbors.
↑ comment by Evan_Gaensbauer · 2018-03-05T05:05:11.837Z · LW(p) · GW(p)
I think:
- compartmentalization by theists makes it so they're apparently as rational or competent in thought on a lot of topics as anyone else, despite disagreements regarding religion and theism;
- bias in all its forms is so ubiquitous outside of a domain of beliefs related to skepticism or religion that non-theists often don't make for more rational conversation partners than theists;
- (this might be more unique to me, but) theists often have a better map of abstract parts of the territory than non-theists.
An example of (3) is how theists seek conflict resolution through peaceful and truth-seeking deliberation rather than through tribalism and force. I've observed that Christians I know are likelier to stay politically moderate as politics has become more polarized over the last couple of years. Something about loving your neighbour and the universality of human souls being redeemable or whatever results in Christians opting for mistake theory over conflict theory more often than the non-religious folk I know. In a roundabout way, some theists have reached the same conclusions regarding how to have a rational dialogue as LessWrongers.
All this combined has made it so myself and a few friends in the rationality community have become less worried about theism among someone's beliefs than in the past. This is only true of a small number of religious people I tend to hang out with, which is a small sample, and my social exposure has pretty much always been set up to be to moderates, as opposed to a predominantly left-wing or right-wing crowd. If other rationalists share this attitude, this could be the reason for increased tolerance for prominent theism in rationalist discourse besides the halo effect.
Admittedly, even if being bullish about theists contributing to social epistemology isn't due to the halo effect, ultimately it's something that looks like a matter of social convenience, rather than a strategy optimized for truth-seeking. (Caveat: please nobody abruptly start optimizing social circles for truth-seeking and epistemic hygiene, i.e., cutting people out of your life who aren't exemplary ultra-rationalists. This probably won't increase your well-being or your ability to seek truth, long-term.)
Replies from: TurnTrout
↑ comment by TurnTrout · 2018-03-05T05:18:41.408Z · LW(p) · GW(p)
I’d like to make clear that the claim I am making is more with respect to the assertions that Peterson is someone who has exemplary rationality, when that is clearly not the case. Rejecting religion is a sign that one is able to pass other epistemic hurdles. I used to be religious; I seriously thought about it because of the Sequences, and then I deconverted - that was that. I looked at it as the preschool entrance exam for tougher problems, so I took it quite seriously.
Also, I would never claim that theists are worse people in a moral sense. What is important to me, however, is that epistemic rigour in our community not be replaced by comforting rationalizations. I don’t know if that’s what’s happening here, but I have my suspicions.
Replies from: Evan_Gaensbauer, ChristianKl, vedrfolnir
↑ comment by Evan_Gaensbauer · 2018-03-12T23:29:47.894Z · LW(p) · GW(p)
Upvoted. Thanks for the clarifications. It seems you're not talking about the mere presence of theists in the rationality community at all, but rather about JBP being seen as an exemplar of rationality in spite of his theism, which is at best poorly articulated, and everything else specious in his views; I agree that's indeed alarming. It's my impression it is still a minority of community members who agree JBP is impressive. I've currently no more thoughts on how significant that minority is, or what it portends for the rest of the rationality community.
↑ comment by ChristianKl · 2018-03-05T05:53:40.942Z · LW(p) · GW(p)
When it comes to epistemic rigour, you show in your post that you clearly have a strong personal motivation for believing that rejecting religion is a good sign of being able to pass other epistemic hurdles, but at the same time you don't provide any good evidence for the claim.
The priors for taking a particular single characteristic that's tribal in nature, like religious belief, as highly informative about whether or not a person is rational aren't good.
Replies from: TurnTrout
↑ comment by vedrfolnir · 2018-03-05T07:22:20.302Z · LW(p) · GW(p)
I wouldn't use rejection of religion as a signal -- my guess is that most people who become atheists do so for social reasons. Church is boring, or upper-middle-class circles don't take too kindly to religiosity, or whatever.
And is our community about epistemic rigor, or is it about instrumental rationality? If, as they say, rationality is about winning, the real test of rationality is whether you can, after rejecting Christianity, unreject it.
Replies from: TurnTrout
↑ comment by TurnTrout · 2018-03-05T10:04:26.949Z · LW(p) · GW(p)
Have you read the sequences? I don’t mean this disrespectfully, but this issue is covered extremely thoroughly early on. If you want to win, your map has to be right. If you want to be able to make meaningful scientific discoveries, your map has to be right. If you hold on to beliefs that aren’t true, your map won’t be right in many areas.
Replies from: vedrfolnir, TAG
↑ comment by vedrfolnir · 2018-03-05T13:50:07.302Z · LW(p) · GW(p)
Have you been following the arguments about the Sequences? This issue has been covered fairly thoroughly over the last few years.
The problem, of course, is that the Sequences have been compiled in one place and heavily advertised as The Core of Rationality, whereas the arguments people have been having about the contents of the Sequences, the developments on top of their contents, the additions to the conceptual splinter canons that spun off of LW in the diaspora period, and so on aren't terribly legible. So the null hypothesis is the contents of the Sequences, and until the contents of the years of argumentation that have gone on since the Sequences were posted are written up into new sequences, it's necessary to continually try to come up with ad-hoc restatements of them -- which is not a terribly heartening prospect.
Of course, the interpretations of the sacred texts will change over the years, even as the texts themselves remain the same. So: why does it matter if the map isn't right in many areas? Is there a general factor of correctness, such that a map that's wrong in one area can't be trusted anywhere? Will benefits gained from errors in the map be more than balanced out by losses caused by the same errors? Or is it impossible to benefit from errors in the map at all?
Replies from: TurnTrout
↑ comment by TurnTrout · 2018-03-05T14:33:09.825Z · LW(p) · GW(p)
No, I’m fairly new. Thanks for the background.
What would the benefits be of "unrejecting" Christianity, and what would that entail? I’d like to understand your last point a little better.
Replies from: vedrfolnir
↑ comment by vedrfolnir · 2018-03-05T19:08:44.435Z · LW(p) · GW(p)
A correct epistemological process is likely to assign very low likelihood to the proposition of Christianity being true at some point. Even if Christianity is true, most Christians don't have good epistemics behind their Christianity; so if there exists an epistemically justifiable argument for 'being a Christian', our hypothetical cradle-Christian rationalist is likely to reach the necessary epistemic skill level to see through the Christian apologetics he's inherited before he discovers it.
At which point he starts sleeping in on Sundays; loses the social capital he's accumulated through church; has a much harder time fitting in with Christian social groups; and cascades updates in ways that are, given the social realities of the United States and similar countries, likely to draw him toward other movements and behavior patterns, some of which are even more harmful than most denominations of Christianity, and away from the anthropological accumulations that correlate with Christianity, some of which may be harmful but some of which may be protecting against harms that aren't obvious even to those with good epistemics. Oops! Is our rationalist winning?
To illustrate the general class of problem, let's say you're a space businessman, and your company is making a hundred space shekels every metric tick, and spending eighty space shekels every metric tick. You decide you want to make your company more profitable, and figure out that a good lower-order goal would be to increase its cash incoming. You implement a new plan, and within a few megaticks, your company is making four hundred space shekels every metric tick, and spending a thousand. Oops! You've increased your business's cash incoming, but you've optimized for too low-order a goal, and now your business isn't profitable anymore.
Now, as you've correctly pointed out, epistemic rationality is important because it's important for instrumental rationality. But the thing we're interested in is instrumental rationality, not epistemic rationality. If the instrumental benefits of being a Christian outweigh the instrumental harms of being a Christian, it's instrumentally rational to be a Christian. If Christianity is false and it's instrumentally rational to be a Christian, epistemic rationality conflicts with instrumental rationality.
This is the easy-to-summarize scaffolding of what I'll call the conflict argument. It isn't the argument itself -- the proper form of the argument would require convincing examples of such a conflict, which of course this margin is too small to contain. In a sentence, it seems that there are a lot of complaints common in these parts -- especially depression and lack of social ties -- that are the precise opposites of instrumental benefits commonly attributed to religious participation. In more than a sentence, lambdaphagy's Tumblr is probably the best place to start reading.
(I don't mean to position this as the last word on the subject, of course -- it's just a summary of a post-Sequences development in parts of the rationalist world. It's possible to either take this one step further and develop a new counterargument to the conflict argument or come up with an orthodox Sequencist response to it.)
Replies from: Jacobian, TurnTrout, habryka4
↑ comment by Jacob Falkovich (Jacobian) · 2018-03-05T19:54:32.654Z · LW(p) · GW(p)
But the thing we're interested in is instrumental rationality, not epistemic rationality.
Ironically, this sentence is epistemically true but instrumentally very dangerous.
See, to accurately assess which parts of epistemic rationality one should sacrifice for instrumental improvements requires a whole lot of epistemic rationality. And once you've made that sacrifice and lost some epistemic rationality, your capacity to make such trade-offs wisely in the future is severely impaired. But if you just focus on epistemic rationality, you can get quite a lot of winning as a side effect.
To bring it back to our example: it's very dangerous to convince yourself that Jesus died for your sins just because you notice Christians have more friends. To do so you need to understand why believing in Jesus correlates with having friends. If you have a strong enough understanding of friendship and social structures for that, you can easily make friends and build a community without Jesus.
But if you install Jesus on your system you're now left vulnerable to a lot of instrumentally bad things, with no guarantee that you'll actually get the friends and community you wanted.
Replies from: vedrfolnir
↑ comment by vedrfolnir · 2018-03-07T00:46:56.680Z · LW(p) · GW(p)
Assuming that the instrumental utility of religion can be separated from the religious parts is an old misconception. If all you need is a bit of sociological knowledge, shouldn't it be possible to just engineer a cult of reason? Well, as it turns out, people have been trying for centuries, and it's never really stuck. For one thing, there are, in startup terms, network effects. I'm not saying you should think of St. Paul as the Zuckerberg of Rome, but I've been to one of those churches where they dropped all the wacky supernatural stuff and I'd rather go to a meetup for GNU Social power users.
For another thing, it's interesting that Eliezer Yudkowsky, who seems to be primarily interested in intellectual matters that relate to entities that are, while constrained by the rules of the universe, effectively all-knowing and all-powerful, and who cultivated interest in the mundane stuff out of the desire to get more people interested in said intellectual matters, seems to have gotten unusually far with the cult-of-reason project, at least so far.
Of course, if we think of LW as the seed of what could become a new religion (or at least a new philosophical scene, as world-spanning empires sometimes generate when they're coming off a golden age -- and didn't Socrates have a thing or two to say about raising the sanity waterline?), this discussion would have to look a lot different, and ideally would be carried out in a smoke-filled room somewhere. You don't want everyone in your society believing whatever nonsense will help them out with their social climbing, for reasons which I hope are obvious. (On the other hand, if we think of LW as the seed of what could become a new religion, its unusual antipathy to other religions -- I haven't seen anyone deploy the murder-Gandhi argument to explain why people shouldn't do drugs or make tulpas -- is an indisputable adaptive necessity. So there's that.)
If, on the other hand, we think of LW as some people who are interested in instrumental rationality, the case has to be made that there's at least fruit we can reach without becoming giraffes in grinding epistemic rationality. But most of us are shut-ins who read textbooks for fun, so how likely should we think it is that our keys are under the streetlight?
Replies from: ozymandias, Jacobian
↑ comment by ozymandias · 2018-03-11T22:02:53.935Z · LW(p) · GW(p)
its unusual antipathy to other religions -- I haven't seen anyone deploy the murder-Gandhi argument to explain why people shouldn't do drugs or make tulpas
The murder-Gandhi argument against drugs is so common it has a name, "addiction." Rationalists appear to me to have a perfectly rational level of concern about addiction (which means being less concerned about certain drugs, such as MDMA, and more concerned about other drugs, such as alcohol).
I am puzzled about how making tulpas could interfere with one's ability to decide not to make any more tulpas.
Replies from: vedrfolnir
↑ comment by vedrfolnir · 2018-03-22T01:00:57.000Z · LW(p) · GW(p)
The only explanation I caught wind of for the parking lot incident was that it had something to do with tulpamancy gone wrong. And I recall SSC attributing irreversible mental effects to hallucinogens and noting that a lot of the early proponents of hallucinogens ended up somewhat wacky.
But maybe it really does all work out such that the sorts of things that are popular in upper-middle-class urban twenty-something circles just aren't anything to worry about, and the sorts of things that are unpopular in them (or worse, popular elsewhere) just are. What a coincidence!
↑ comment by Jacob Falkovich (Jacobian) · 2018-03-07T16:42:50.364Z · LW(p) · GW(p)
Is your goal to have a small community of friends or to take over the world? The tightest-knit religions are the smaller and weirder ones, so if you want stronger social bonds you should join Scientology and not the Catholic church.
Or, you know, you can just go to a LessWrong meetup. I went to one yesterday: we had cake, and wine, and we did a double crux discussion about rationality and self-improvement. I dare say that we're getting at least half as much community benefit as the average church-goer, all for a modest investment of effort and without sacrificing our sanity.
If someone doesn't have a social life because they don't leave their house, they should leave their house. The religious shut-ins who read the Bible for fun aren't getting much social benefit either.
Rationality is a bad religion, but if you understand religions well enough you probably don't need one.
Replies from: Viliam
↑ comment by Viliam · 2018-03-09T20:50:32.694Z · LW(p) · GW(p)
One day I will have to write a longer text about this, but shortly: it is a false dilemma to see "small and tight-knit community" and "taking over the world" as mutually exclusive. The Catholic church is not a small community, but it contains many small communities. It is an "eukaryotic" community, containing both the tight-knit subgroups and the masses of lukewarm believers, which together contribute to its long-term survival.
I would like to see the rationalist community become "eukaryotic" in a similar way. In certain ways it already happens: we have people who work at MIRI and CFAR, we have people who participate at local meetups, we have people who debate online. This diversity is a strength, not a weakness: if you only have one mode of participation, then people who are unable to participate in that one specific way are lost to the community.
The tricky part is keeping it all together. Preventing the tight-knit groups from excommunicating everyone else as "not real members", but also preventing the lukewarm members from making it all about social interaction and abandoning the original purpose, because both of those are natural human tendencies.
Replies from: vedrfolnir
↑ comment by vedrfolnir · 2018-03-09T22:19:10.692Z · LW(p) · GW(p)
One thing I'd like to see is more research into the effects of... if not secret societies, then at least societies of some sort.
For example, is it just a coincidence that Thiel and Musk, arguably the two most interesting public figures in the tech scene, are both Paypal Mafia?
Another good example is the Junto.
Replies from: Viliam
↑ comment by Viliam · 2018-03-10T09:50:04.938Z · LW(p) · GW(p)
I imagine this could be tricky to research even if people didn't try to obfuscate the reality (which they of course will). It would be difficult to distinguish "these two people conspired together" from "they are two extremely smart people, living in the same city, of course they are likely to have met each other".
For example, in a small country with maybe five elite high schools, elite people of the same age have a high probability of having been high-school classmates. If they later take over the world together, it would make a good story to claim that they already conspired to do that during high school. Even if the real idea only came 20 years later, no one would believe it after some journalist finds out that they are actually former classmates.
So the information is likely to be skewed in both ways: not seeing connections where they are, and seeing meaningful connections in mere coincidences.
Replies from: vedrfolnir
↑ comment by vedrfolnir · 2018-03-10T14:56:27.372Z · LW(p) · GW(p)
Small groups have a bigger problem: they won't be very well documented. As far as I know, the only major source on the Junto is Ben Franklin's autobiography, which I've already read.
Large groups, of course, have an entirely different problem: if they get an appreciable amount of power, conspiracy theorists will probably find out, and put out reams of garbage on them. I haven't started trying to look into the history of the Freemasons yet because I'm not sure about the difficulty of telling garbage from useful history.
↑ comment by TurnTrout · 2018-03-05T22:15:28.016Z · LW(p) · GW(p)
That makes more sense. Broadly, I agree with Jacobian here, but there are a few points I'd like to add.
First, it seems to me that there aren't many situations in which this is actually the case. If you treat people decently (regardless of their religion or lack thereof), you are unlikely to lose friends for being atheist (especially if you don't talk about it). Sure, don't be a jerk and inappropriately impose your views on others, and don't break it to your fundamentalist parents that you think religion is a sham. But situations where it would be instrumentally rational to believe falsely important things, the situations in which there really would be an expected net benefit even after factoring in the knock-on effects of making your epistemological slope just that bit more slippery, these situations seem constrained to "there's an ASI who will torture me forever if I don't consistently system-2 convince myself that god exists". At worst, if you really can't find other ways of socializing, keep going to church while internally keeping an accurate epistemology.
Second, I think you're underestimating how quickly beliefs can grow their roots. For example, after reading Nate's Dark Arts of Rationality, I made a carefully-weighed decision to adopt certain beliefs on a local level, even though I don't believe them globally: "I can understand literally anything if I put my mind to it for enough time", "I work twice as well while wearing shoes", "I work twice as well while not wearing shoes" (the internal dialogue for adopting this one was pretty amusing), etc. After creating the local "shoe" belief and intensely locally-believing it, I zoomed out and focused on labelling it as globally-false. I was met with harsh resistance from thoughts already springing up to rationalize why my shoes actually could make me work harder. I had only believed this ridiculous thing for a few seconds, and my subconscious was already rushing to its defense. For this reason, I decided against globally-believing anything I know to be false, even though it may be "instrumentally rational" for me to always study as if I believe AGI is a mere two decades away. I am not yet strong enough to do this safely.
Third, I think this point of view underestimates the knock-on effects I mentioned earlier. Once you've crossed that bright line, once "instrumental rationality let me be Christian" is established, what else is left? Where is the Schelling fence for beliefs? I don't know, but I think it's better to be safe than sorry - especially in light of 1) and 2).
↑ comment by habryka (habryka4) · 2018-03-05T19:12:29.409Z · LW(p) · GW(p)
It should be noted that there are practically-secular Jewish communities that seem to get a lot of the benefit of religion, without actually believing in supernatural things. I haven't visited one of those myself, but friends who looked into it seemed to think they were doing pretty well on the epistemics front. So for people interested in religion, but not interested in the supernatural-believing stuff: Maybe joining a secular Jewish community would be a good idea?
Replies from: vedrfolnir
↑ comment by vedrfolnir · 2018-03-05T19:23:42.328Z · LW(p) · GW(p)
That does seem to be a popular option for people around here who have the right matrilineage for it.
↑ comment by TAG · 2018-03-05T17:17:34.179Z · LW(p) · GW(p)
If you want to win, your map has to be right.
It has to be correct and useful, and correctness only matters for winning inasmuch as it entails usefulness. Having a lot of correct information about golf is no good if you want to be a great chef.
Replies from: TurnTrout
↑ comment by TurnTrout · 2018-03-05T17:46:48.818Z · LW(p) · GW(p)
Having correct object-level information and having a correct epistemological process and belief system are two different things. An incorrect epistemological process is likely to reject information it doesn’t like.
Replies from: TAG, vedrfolnir
↑ comment by vedrfolnir · 2018-03-05T19:20:14.382Z · LW(p) · GW(p)
Right, that's a possible response: the sacrifice of epistemic rationality for instrumental rationality can't be isolated. If your epistemic process leads to beneficial incorrect conclusions in one area, your epistemic process is broadly incorrect, and will necessarily lead to harmful incorrect conclusions elsewhere.
But people seem to be pretty good at compartmentalizing. Robert Aumann is an Orthodox Jew. (Which is the shoal that some early statements of the general-factor-of-correctness position broke on, IIRC.) And there are plenty of very instrumentally rational Christians in the world.
On the other hand, maybe people who've been exposed to all this epistemic talk won't be so willing to compartmentalize -- or at least to compartmentalize the sorts of things early LW used as examples of flaws in reasoning.
Replies from: TAG↑ comment by TAG · 2018-03-08T11:52:56.174Z · LW(p) · GW(p)
your epistemic process is broadly incorrect, and will necessarily lead to harmful incorrect conclusions elsewhere.
But people seem to be pretty good at compartmentalizing...
Which is why you shouldn't have written "necessarily".
Replies from: vedrfolnir↑ comment by vedrfolnir · 2018-03-10T15:29:17.325Z · LW(p) · GW(p)
I'm not sure how to square "rejecting religion is the preschool entrance exam of rationality" with "people are pretty good at compartmentalizing". Certainly there are parts of the Sequences that imply the insignificance of compartmentalization.
I personally recall, maybe seven years ago, having to break the news to someone that Aumann is an Orthodox Jew. This was a big deal at the time! We tend to forget how different the rationalist consensus is from the contents of the Sequences.
Every once in a while someone asks me or someone I know what "postrationality" is, and they're never happy with the answer -- "isn't that just rationality?" Sure, to an extent; but to the extent that it is, it's because "postrationality" won. And to tie this into the discussion elsewhere in the thread: postrationality mostly came out of a now-defunct secret society.
Replies from: TurnTrout↑ comment by TurnTrout · 2018-03-10T19:20:29.353Z · LW(p) · GW(p)
Your line of reasoning re: Aumann feels akin to "X billionaire dropped out of high school / college, ergo you can drop out, too". Sure, perhaps some can get by with shoddy beliefs, but why disadvantage yourself?
I wouldn't use rejection of religion as a signal
Point of clarification: are you claiming that rejecting religion provides no information about someone's rationality, or that it provides insignificant information?
If postrationality really did win, I don't know that it should have. I haven't been convinced that knowingly holding false beliefs is instrumentally rational in any realistic world, as I outlined below.
Replies from: vedrfolnir↑ comment by vedrfolnir · 2018-03-10T23:31:42.672Z · LW(p) · GW(p)
Your line of reasoning re: Aumann feels akin to "X billionaire dropped out of high school / college, ergo you can drop out, too". Sure, perhaps some can get by with shoddy beliefs, but why disadvantage yourself?
If people are pretty good at compartmentalization, it's at least not immediately clear that there's a disadvantage here.
It's also not immediately clear that there's a general factor of correctness, or, if there is, what the correctness distribution looks like.
It's at least a defensible position that there is a general factor of correctness, but that it isn't useful, because it's just an artifact of most people being pretty dumb, and there's no general factor within the set of people who aren't just pretty dumb. I do think there's a general factor of not being pretty dumb, but I'm not sure about a general factor of correctness beyond that.
It seems probable that "ignore the people who are obviously pretty dumb" is a novel and worthwhile message for some people, but not for others. I grew up in a bubble where everyone already knew to do that, so it's not for me, but maybe there are people who draw utility from being informed that they don't have to take seriously genuine believers in astrology or homeopathy or whatever.
Point of clarification: are you claiming that rejecting religion provides no information about someone's rationality, or that it provides insignificant information?
In a purely statistical sense, rejecting religion almost certainly provides information about someone's rationality, because things tend to provide information about other things. Technically, demographics provide information about someone's rationality. But not information that's useful for updating about specific people.
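As a minimal sketch of that point (with entirely made-up numbers, purely for illustration), a single Bayes update shows how an observation can carry some information in the statistical sense while barely moving the posterior for any specific person:

```python
# Toy Bayesian update: how much does "rejects religion" tell you about
# "is unusually rational"? All probabilities below are invented for
# illustration only, not empirical estimates.

prior_rational = 0.05            # P(unusually rational)
p_reject_given_rational = 0.90   # P(rejects religion | unusually rational)
p_reject_given_not = 0.60        # P(rejects religion | not unusually rational)

# Bayes' rule: P(rational | rejects) = P(rejects | rational) * P(rational) / P(rejects)
p_reject = (p_reject_given_rational * prior_rational
            + p_reject_given_not * (1 - prior_rational))
posterior = p_reject_given_rational * prior_rational / p_reject

print(f"prior:     {prior_rational:.3f}")   # 0.050
print(f"posterior: {posterior:.3f}")        # ~0.073: nonzero information, small update
```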
Religious affiliation is a useful source of information about domain-specific rationality in areas that don't lend themselves well to compartmentalization. There was a time when it made sense to discount the opinions of Mormon archaeologists about the New World, although now that they've had some time of watching their religiously-motivated claims totally fail to pan out, that area probably lends itself to compartmentalization alright.
On the other hand, I wouldn't discount the opinions of Mormon historical linguists about Proto-Uralic. But I would discount the opinions of astrologers about Proto-Uralic, unless they have other historical-linguistic work in areas that I can judge the quality of and that work seems reasonable.
If postrationality really did win, I don't know that it should have. I haven't been convinced that knowingly holding false beliefs is instrumentally rational in any realistic world, as I outlined below.
Postrationality isn't about knowingly holding false beliefs. Insofar as postrationality has a consensus that can be distilled to one sentence, it's "you can't kick everything upstairs to the slow system, so you should train the fast system." But that's a simplification.
Replies from: Jacobian↑ comment by Jacob Falkovich (Jacobian) · 2018-03-11T03:50:02.929Z · LW(p) · GW(p)
"you can't kick everything upstairs to the slow system, so you should train the fast system."
I know that postrationality can't be distilled to a single sentence and I'm picking on it a bit unfairly, but "post"-rationality can't differentiate itself from rationality on that. Eliezer wrote about System 1 and System 2 in 2006:
When people think of "emotion" and "rationality" as opposed, I suspect that they are really thinking of System 1 and System 2—fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren't always true, and perceptual judgments aren't always false; so it is very important to distinguish that dichotomy from "rationality". Both systems can serve the goal of truth, or defeat it, according to how they are used.
And it's not like this statement was ever controversial on LW.
You can't get any more "core LW rationality" than the fricking Sequences. If someone thinks that rationality is about forcing everything into System 2 then, well, they should reread the fricking Sequences.
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2020-07-15T10:07:33.256Z · LW(p) · GW(p)
Minor: but I appreciate you using the word “fricking”, instead of the obvious alternative. For me, it feels like it gets the emphaticness across just as well, without the crudeness.
↑ comment by dsatan · 2018-03-05T03:00:56.108Z · LW(p) · GW(p)
While Peterson is a bit sloppy when he talks about truth, the notion of truth that he is working with is not simply his own construction to write some bottom line. There is a lot of literature on pragmatist analyses of truth and belief that roughly aligns with what he is saying, and that I would consider closer to the nature of truth (truer about truth) than the correspondence theory of truth presented in the Sequences.
I recommend Peirce's Making our Ideas Clear, Putnam's Corresponding with Reality, and James's The Will to Believe. Peirce and James can easily be found free online by searching and I can PM you Putnam if you want it.
comment by Qiaochu_Yuan · 2018-03-06T22:03:59.661Z · LW(p) · GW(p)
Putting on the Jordan Peterson mask adds two crucial elements that rationalists often struggle with: motivation and meaning.
Holy shit, yes, thank you, this is exactly what has been motivating all of my contributions to LW 2.0. What is even the point of strengthening your epistemics if you aren't going to then use those strong epistemics to actually do something?
I first read the Sequences 6 years ago, and since then what little world-saving-relevant effort I've put in has been entirely other people asking me to join in on their projects. The time I spent doing that (at SPARC, at CFAR workshops, at MIRI workshops) was great; an embarrassing amount of the time I spent not doing that (really, truly embarrassing; amounts of time only a math grad student could afford to spend) was wasted on various forms of escapism (random internet browsing, TV, video games, anime, manga), because I was sad and lonely and put a lot of effort into avoiding having to deal with that (including avoiding debugging it at CFAR workshops because it was too painful to think about). At almost no point did I have the motivation to start a project on my own, and I didn't.
I've been working intently on debugging this for the last year or so and it finally more or less got solved in the beginning of February (I've been avoiding saying too much about this directly for the last month because I wanted to check that it's sticking - so far so good, one month in). I did not solve this problem by getting better at epistemics, at least not in any way that would have been legible to a skeptical rationalist. I solved it by following my instincts towards Circling, Tantra workshops, the Authentic Man Program - all interventions in a cluster that rationalists are only beginning to talk about publicly. (And improving my diet, that mattered too.)
I think there is an entire class of interventions that can radically improve your motivation and your sense that your life has meaning, which actually matters even if all I'm optimizing for is x-risk reduction (which it's not), and I really worry that 1) some rationalists are cutting themselves off from investigating these interventions because they're, from my perspective, way too worried about epistemic risks, and 2) this position will become the LW 2.0 default. If that happens I'm going to leave.
Rationalists who are epistemically strong are very lucky: you can use that strength in a place where it will actually help you, like investigating mysticism, by defending you from making the common epistemic mistakes there. It should be exciting for rationalists to learn about powerful tools that carry epistemic risks, because those are precisely the tools that rationalists should be best equipped to use compared to other people! (Separately, I also think that these tools have actually improved my epistemics, by improving my ability to model myself and other people.)
Replies from: Wei_Dai, SaidAchmiz↑ comment by Wei Dai (Wei_Dai) · 2018-03-07T01:54:07.259Z · LW(p) · GW(p)
Rationalists who are epistemically strong are very lucky: you can use that strength in a place where it will actually help you, like investigating mysticism, by defending you from making the common epistemic mistakes there.
This is an interesting idea, but how does someone tell whether they're strong enough to avoid making the common epistemic mistakes when investigating mysticism? For example, if I practice meditation I might eventually start experiencing what Buddhists call vipassana ("insight into the true nature of reality"). I don't know if I'd be able to avoid treating those experiences as some sort of direct metaphysical knowledge as most people apparently do, as opposed to just qualia generated by my brain while it's operating differently from normal (e.g., while in a state of transient hypofrontality).
There's probably a number of distinct epistemic risks surrounding mysticism. Bad social dynamics in the face of asymmetric information might be another one. (Access to mystical experiences is hard to verify by third parties but tempting to claim as a marker of social status.) I don't know how someone or some community could be confident that they wouldn't fall prey to one of these risks.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2018-03-07T04:27:14.135Z · LW(p) · GW(p)
Good question. You can test your ability to avoid mistaking strong emotions for strong beliefs in general. For example, when you get very angry at someone, do you reflexively believe that they're a terrible person? When you get very sad, do you reflexively believe that everything is terrible? When you fall in love with someone, do you reflexively believe that they have only good qualities and no bad qualities? Etc.
I know I keep saying this, but it keeps being true: for me a lot of my ability to do this, and/or my trust in my ability to do this, came from circling, and specifically repeatedly practicing the skill of distinguishing my strong emotional reactions to what was happening in a circle from my best hypotheses about what was happening.
I don't know how someone or some community could be confident that they wouldn't fall prey to one of these risks.
I can't tell people what level of confidence they should want before trying this sort of thing, but I decided based on my instincts that the risk for me personally was low enough relative to the possible benefits that I was going to go for it, and things have been fine as far as my outside view is concerned so far, e.g. my belief in physics as standardly understood has not decreased at all.
Also, to some extent I feel like this argument proves too much. There are epistemic risks associated to e.g. watching well-made TV shows or movies, or reading persuasive writing, and rationalists take on these epistemic risks all the time without worrying about them.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2018-03-07T12:34:16.378Z · LW(p) · GW(p)
You can test your ability to avoid mistaking strong emotions for strong beliefs in general.
How much of this ability is needed in order to avoid taking strong mystical experiences at face value?
I can’t tell people what level of confidence they should want before trying this sort of thing, but I decided based on my instincts that the risk for me personally was low enough relative to the possible benefits that I was going to go for it,
In the comment I was replying to, you were saying that some rationalists are being too risk-averse. It seems like you're now backing off a bit and just talking about yourself?
and things have been fine so far, e.g. my belief in physics as standardly understood has not decreased at all.
I'm worried that the epistemic risks get stronger the further you go down this path. Have you had any mystical experiences similar to vipassana yet? If not, your continuing belief in physics as standardly understood does not seem to address my worry.
Also, to some extent I feel like this argument proves too much. There are epistemic risks associated to e.g. watching well-made TV shows or movies, or reading persuasive writing, and rationalists take on these epistemic risks all the time without worrying about them.
We do have empirical evidence about how strong these risks are though, and the epistemic risks associated with "investigating mysticism" seem much stronger than those associated with watching well-made TV shows and movies, or reading persuasive writing. There are other activities, for example joining a church (let's say for the social benefits), that I think have epistemic risks more comparable with investigating mysticism, and rationalists do worry about them.
P.S., After writing the above I viewed some interviews of Jeffery Martin (author of the PNSE paper previously discussed here), and he comes across as not being obviously irrational or epistemically corrupted by his investigations into mystical experiences. For example he claims to have gone through PNSE locations 1 through 4, but unlike the research subjects that he described in the paper, does not seem to have (or managed to overcome) a "tremendous sense of certainty that participants were experiencing a ‘deeper’ or ‘more true’ reality". However he does seem to consistently oversell the benefits of PNSE relative to his own descriptions in the paper (which I infer from his interviews was largely if not completely written before he "transitioned" to PNSE himself), and this makes me think he still got corrupted in a more subtle way.
Replies from: Jacobian, Qiaochu_Yuan↑ comment by Jacob Falkovich (Jacobian) · 2018-03-07T17:02:47.426Z · LW(p) · GW(p)
I've only taken a few steps down the path that Qiaochu is following, but I have a few thoughts regarding epistemic risk-management:
- If you're ever going to investigate any altered-consciousness experiences at all, you're going to have to take a risk. You can never be 100% sure that something is "epistemically safe": certainty is impossible and time is limited.
- There is clearly an efficient frontier of risk/reward tradeoffs. I'm also a fan of circling, which doesn't ask you to accept any supernatural claims or dogmas and is incredibly useful for understanding the landscape of human minds. A few circling sessions with seriously strange people can do a lot to cure one of typical mind fallacy. On the other hand, joining Scientology the same week you start experimenting with ayahuasca is probably unwise.
- As a community, we can reduce risk by diversifying. Some of us will do LSD, some will do vipassana, some will circle, some will listen to 100 hours of Peterson... We should be able to notice if any particular subgroup is losing its mind. The real danger would occur if all of us suddenly started doing the same thing with no precautions.
↑ comment by gjm · 2018-03-07T17:39:25.019Z · LW(p) · GW(p)
What would it look like, if we noticed that a particular subgroup was beginning to lose its mind? I think it might look like a few unusually-rude people calling into question the alleged experiences of that particular subgroup and asking pointed questions about exactly what had happened to them and exactly why they thought themselves better off for it; and like the members of that particular subgroup responding with a combination of indignation and obfuscation: "we've definitely been changed for the better, but of course we can't expect you to understand what it's like if it hasn't happened to you, so why do you keep pushing us for details you know we won't be able to give?" / "I find it very discouraging to get this sort of response, and if it keeps happening I'm going to leave"; and like some of the more community-minded folks objecting to the rudeness of the questioners, observing acerbically that it always seems to be the same people asking those rude questions and wondering whether the emperor really has any clothes, and maybe even threatening to hand out bans.
All of which sounds kinda familiar.
I don't actually think that ... let's call it the Berkeley School of rationality, though I'm not sure what fraction of its members are actually in Berkeley ... is really losing its mind. (I'm not 100% sure it isn't, though.) And, for the avoidance of doubt, I think it would be a damn shame if LW lost the people who have made noises about possibly leaving if the local community is too rude to them. -- But if it were losing its mind, I think that might well look roughly the way things currently look.
Which, I think, means: maybe we can't safely assume that if a particular subgroup was losing its mind then we'd notice and take the actions needed for epistemic safety. Because we're (almost certainly correctly) not rising up and throwing out the Berkeleyans right now, nor would we (probably correctly) even if they got a couple of notches weirder than they are now ... but by that point, if they were losing their minds, they would surely be posing a genuine epistemic threat to at least some of the rest of us.
Replies from: Jacobian, Qiaochu_Yuan↑ comment by Jacob Falkovich (Jacobian) · 2018-03-07T20:30:24.832Z · LW(p) · GW(p)
gjm, point well taken. I wonder whether it would be easier for people inside or outside Berkeley to spot anyone there going seriously off the rails and to say something about it.
Anyway, I do want to elaborate a little bit on my "Efficient Frontier" idea. If anyone can build a map of which "mystical experiences" are safe/dangerous/worthwhile/useless and for whom, it should be people like us. I think it's a worthwhile project and it has to be done communally, given how different each person's experience may be and how hard it is to generalize.
The main example here is Sam Harris, a hardcore epistemic rationalist who has also spent a lot of time exploring "altered states of consciousness". He wrote a book about meditation, endorses psychedelics with caveats, is extremely hostile to any and all religions, and probably thinks that Peterson is kinda crazy after arguing with him for four hours. Those are good data points, but we need 20 more Sam Harrises. I'm hoping that LW can be the platform for them.
Perhaps we need to establish some norms for talking about "mystical experiences", fake frameworks, altered consciousness etc. so that people feel safe both talking and listening.
Replies from: TAG↑ comment by TAG · 2018-03-08T12:35:15.810Z · LW(p) · GW(p)
There's Daniel Ingram, Vincent Horn, Kenneth Folk and the other Buddhist geeks.
↑ comment by Qiaochu_Yuan · 2018-03-07T19:53:08.422Z · LW(p) · GW(p)
I was triggered by this initially, but I reread it and you're making a completely reasonable point. I notice I'm still concerned about the possibility that your reasonable point / motte will be distorted into a less reasonable point / bailey.
"I find it very discouraging to get this sort of response, and if it keeps happening I'm going to leave"
That is not what I said. What I said is that if the pushback I've been getting becomes the default on LW 2.0, then I'm going to leave. This is a matter of people deciding what kind of place they want LW 2.0 to be. If they decide that LW 2.0 does not want to be the place for the things I want to talk about, then I'm going to respect that and talk about those things somewhere else. Staying would be unpleasant for everyone involved.
But if it were losing its mind, I think that might well look roughly the way things currently look.
I concede the point. We can try asking what kinds of externally verifiable evidence would distinguish this world from a world in which people like Val and I have been talking about real things which we lack the skill to explain (in a way satisfying to skeptical rationalists) via text. One prediction I'm willing to make is that I'm now more capable of debugging a certain class of thorny emotional bugs, so e.g. I'm willing to predict that over the next few years I'll help people debug such bugs at CFAR workshops and wherever else, and that those people will at least in expectation be happier, more productive, more willing to work on x-risk or whatever they actually want to do instead, less likely to burn out, etc.
(But, in the interest of trying to be even-handed about possible hypotheses that explain the current state of public evidence, it's hard to distinguish the above world from a world in which people like Val and I are losing our minds and also becoming more charismatic / better at manipulation.)
Replies from: evan-clark, gjm↑ comment by Evan Clark (evan-clark) · 2018-03-08T00:14:35.335Z · LW(p) · GW(p)
I think that perhaps what bothers a lot of rationalists about your (or Valentine's) assertions comes down to three factors:
- You don't tend to make specific claims or predictions. I think you would come off better - certainly to me and I suspect to others - if you were to preregister hypotheses more, like you did in the above comment. I believe that you could and should be more specific, perhaps stating that over a six month period you expect to work n more hours without burning out or that a consensus of reports from outsiders about your mental well-being will show a marked positive change during a particular time period that the evaluators did not know was special. While these would obviously not constitute strong evidence, a willingness to informally test your ideas would at least signal honest belief.
- You seem to make little to no attempt to actually communicate your ideas in words, or even define your concepts in words. Frankly, it continues to strike me as suspicious that you claim difficulty in even analogizing or approximating your ideas verbally. Even something as weak as the rubber-sheet analogy for General Relativity would - once again - signal an honest attempt.
- There doesn't seem to be consistency on the strength of claims surrounding frameworks. As mentioned elsewhere in the thread, Valentine seems to claim that mythic mode generated favorable coincidences like he was bribing the DM. Yet other times Valentine seems to acknowledge that the narrative description of reality is at best of metaphorical use.
I think that given recent rationalist interest in meditation, fake frameworks, etc., and in light of what seems to be a case of miscommunication and/or under-communication, there should be some attempt to establish a common basis of understanding, so that if someone asks, "Are you saying x?" they can be instantly redirected to a page that gives the relevant definitions and claims. If you view this as impossible, do you think that that is a fact of your map or of the relevant territory?
Anyway, I really hope everyone can reach a point of mutual intelligibility, if nothing else.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2018-03-08T01:22:45.992Z · LW(p) · GW(p)
You don't tend to make specific claims or predictions. I think you would come off better - certainly to me and I suspect to others - if you were to preregister hypotheses more, like you did in the above comment. I believe that you could and should be more specific, perhaps stating that over a six month period you expect to work n more hours without burning out or that a consensus of reports from outsiders about your mental well-being will show a marked positive change during a particular time period that the evaluators did not know was special.
I have several different responses to this which I guess I'll also number.
- Sure, fine, I'm willing to claim this. Everyone who has interacted with me both in the last month and, say, a year ago will tell you that I am visibly happier and doing more of what I actually want ("productive" can be a loaded term). People can ask Anna, Duncan, Lauren, etc. if they really want. I can also self-report that I've engaged in much less escapism (TV, movies, video games, etc.) this month than in most months of the last 5 years, and what little I have engaged in was mostly social.
- I would love to be having this conversation; if the responses I've been getting had been of the form "hey, you seem to be making these interesting and non-obvious claims, what evidence do you have / what are your models / what are predictions you're willing to make?" then I would've been happy to answer, but instead the responses were of the form "hey, have you considered the possibility that you're evil?" I have a limited budget of time and attention I'm willing to spend on LW and my subjective experience is that I've been spending it putting out fires that other people have been starting. Please, I would love to have the nice conversation where we share evidence and models and predictions while maintaining principle of charity, but right now I mostly don't have enough trust that I won't be defected on to do this.
- You'll notice I haven't written a top-level post about any of these topics. That's precisely because I'm not yet willing to put in the time and effort necessary to get it up to epistemic snuff. I didn't want to start this conversation yet; it was started by others and I felt a duty to participate in order to prevent people from idea inoculating against this whole circle of ideas.
You seem to make little to no attempt to actually communicate your ideas in words, or even define your concepts in words.
This seems like an unfair conflation of what happened in the Kensho post and everything else. The Circling post was entirely an attempt to communicate in words! All of these comments are attempts to communicate in words!
Frankly, it continues to strike me as suspicious that you claim difficulty in even analogizing or approximating your ideas verbally. Even something as weak as the rubber-sheet analogy for General Relativity would - once again - signal an honest attempt.
This is exactly what the cell phone analogy in the Kensho post was for, although I also don't want people to bucket me and Val too closely here; I'm willing to make weaker claims that I think I can explain more clearly, but haven't done so yet for the reasons described above.
There doesn't seem to be consistency on the strength of claims surrounding frameworks. As mentioned elsewhere in the thread, Valentine seems to claim that mythic mode generated favorable coincidences like he was bribing the DM. Yet other times Valentine seems to acknowledge that the narrative description of reality is at best of metaphorical use.
I warned Val that people would be unhappy about this. Here is one story I currently find plausible for explaining at least one form of synchronicity: operating in mythic mode is visible to other humans on some level, and causes them to want to participate in the myth that it looks like you're in. So humans can sometimes collaboratively generate coincidences as if they were playing out an improv scene, or something. (Weak belief weakly held.)
As for consistency, it's partly a matter of what level of claim I or anyone else is willing to defend in a given conversation. It may be that my true belief is strong belief A but that I expect it will be too difficult to produce a satisfying case for why I believe A (and/or that I believe that attempting to state A in words will cause it to be misinterpreted badly, or other things like that), so in the interest of signaling willingness to cooperate in the LW epistemic game, I mostly talk about weaker belief A', which I can defend more easily, but maybe in another comment I instead talk about slightly weaker or slightly stronger belief A'' because that's what I feel like I can defend that day. Do you really want to punish me for not consistently sticking to a particular level of weakening of my true belief?
If you view this as impossible, do you think that that is a fact of your map or of the relevant territory?
I think it's very difficult because of long experiential distances. This is to some extent a fact about my lack of skill and to some extent what I see as a fact about how far away some parts of the territory are from the experience of many rationalists.
Overall, from my point of view there's a thing that's happening here roughly analogous to the Hero Licensing dialogue; if I spend all my time defending myself on LW like this instead of just using what I believe my skills to be to do cool stuff, then I won't ever get around to doing the cool stuff. So at some point I am just going to stop engaging in this conversation, especially if people continue to assume bad faith on the part of people like me and Val, in order to focus my energy and attention on doing the cool stuff.
Replies from: evan-clark↑ comment by Evan Clark (evan-clark) · 2018-03-08T02:31:56.482Z · LW(p) · GW(p)
(This is my second comment on this site, so it is probable that the formatting will come out gross. I am operating on the assumption that it is similar to Reddit, given Markdown)
- To be as succinct as possible, fair enough.
- I want to have this conversation too! I was trying to express what I believe to be the origins of people's frustrations with you, not to try to discourage you. Although I can understand how I failed to communicate that.
- I am going to wrap this up with the part of your reply that concerns experiential distance and respond to both. I suspect that a lot of fear of epistemic contamination comes from the emphasis on personal experience. Personal (meatspace) experiences, especially in groups, can trigger floods of emotions and feelings of insights without those first being fed through rational processing. Therefore it seems reasonable to be suspicious of anyone who claims to teach through personal experience. That being said, the experimental spirit suggests the following course of action: get a small group and try to close their experiential gap gradually, while having them extensively document anything they encounter on the way, then publish that for peer analysis and digestion. Of course that relies on more energy and time than you might have.
This seems like an unfair conflation of what happened in the Kensho post and everything else. The Circling post was entirely an attempt to communicate in words! All of these comments are attempts to communicate in words!
On a general level, I totally concede that I am operating from relatively weak ground. It has been a while - or at least felt like a while - since I read any of the posts I mentioned (tacitly or otherwise) with the exception of Kensho, so that is definitely coloring my vision.
If I spend all my time defending myself on LW like this instead of just using what I believe my skills to be to do cool stuff, then I won't ever get around to doing the cool stuff. So at some point I am just going to stop engaging in this conversation, especially if people continue to assume bad faith on the part of people like me and Val, in order to focus my energy and attention on doing the cool stuff.
I acknowledge that many people are responding to your ideas with unwarranted hostility and forcing you onto the defensive in a way that I know must be draining. So I apologize for essentially doing that in my original reply to you. I think that I, personally, am unacceptably biased against a lot of ideas due to their "flavor" so to speak, rather than their actual strength.
Do you really want to punish me for not consistently sticking to a particular level of weakening of my true belief?
As to consistency, I actually do want to hold you to some standard of strength with respect to beliefs, because otherwise you could very easily make your beliefs unassuming enough to pass through arbitrary filters. I find ideas interesting; I want to know A, not any of its more easily defensible variants. But I don't want to punish you or do anything that could even be construed as such.
In summary, I am sorry that I came off as harsh.
EDIT: Fixed terrible (and accidental) bolding.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2018-03-08T03:28:43.220Z · LW(p) · GW(p)
I suspect that a lot of fear of epistemic contamination comes from the emphasis on personal experience. Personal (meatspace) experiences, especially in groups, can trigger floods of emotions and feelings of insights without those first being fed through rational processing.
I recognize the concern here, but you can just have the System 1 experience and then do the System 2 processing afterwards (which could be seconds afterwards). It's really not that hard. I believe that most rationalists can handle it, and I certainly believe that I can handle it. I'm also willing to respect the boundaries of people who don't think they can handle it. What I don't want is for those people to typical mind themselves into assuming that because they can't handle it, no one else can either, and so the only people willing to try must be being epistemically reckless.
Therefore it seems reasonable to be suspicious of anyone who claims to teach through personal experience.
There are plenty of completely mundane skills that can basically only be taught in this way. Imagine trying to teach someone how to play basketball using only text, etc. There's no substitute for personal experience in many skills, especially those involving the body, and in fact I think this should be your prior. It may not feel like this is the prior but I think this is straight up a mistake; I'd guess that people's experiences with learning skills here are skewed by 1) school, which heavily skews towards skills that can be learned through text, and 2) the selection effect of being LWers, liking the Sequences, etc. There's a reason CFAR focuses on in-person workshops instead of e.g. blog posts or online videos.
I acknowledge that many people are responding to your ideas with unwarranted hostility and forcing you onto the defensive in a way that I know must be draining. So I apologize for essentially doing that in my original reply to you. I think that I, personally, am unacceptably biased against a lot of ideas due to their "flavor" so to speak, rather than their actual strength.
Thank you.
As to consistency, I actually do want to hold you to some standard of strength with respect to beliefs, because otherwise you could very easily make your beliefs unassuming enough to pass through arbitrary filters. I find ideas interesting; I want to know A, not any of its more easily defensible variants. But I don't want to punish you or do anything that could even be construed as such.
Unfortunately my sense is strongly that other people will absolutely punish me for expressing A instead of any of its weaker variants - this is basically my story about what happened to Val in the Kensho post, where Val could have made a weaker and more defensible point (for example, by not using the word "enlightenment") and chose not to - precisely because my inability to provide a satisfying case for believing A signals a lack of willingness to play the LW epistemic game, which is what you were talking about earlier.
(Umeshism: if you only have beliefs that you can provide a satisfying case for believing on LW, then your beliefs are optimized too strongly for defensibility-on-LW as opposed to truth.)
So I'm just not going to talk about A at all, in the interest of maintaining my cooperation signals. And given that, the least painful way for me to maintain consistency is to not talk about any of the weaker variants either.
Replies from: evan-clark↑ comment by Evan Clark (evan-clark) · 2018-03-08T03:59:08.651Z · LW(p) · GW(p)
you can just have the System 1 experience and then do the System 2 processing afterwards (which could be seconds afterwards). It's really not that hard. I believe that most rationalists can handle it, and I certainly believe that I can handle it.
It is probably true that most rationalists could handle it. It is also probably true, however, that people who can't handle it could end up profoundly worse for the experience. I am not sure we should endorse potential epistemic hazards with so little certainty about both costs and benefits. I also grant that anything is a potential epistemic hazard and that reasoning under uncertainty is kind of why we bother with this site in the first place. This is all to say that I would like to see more evidence of this calculation being done at all, and that if I was not so geographically separated from the LWsphere, I would like to try these experiences myself.
There's no substitute for personal experience in many skills, especially those involving the body, and in fact I think this should be your prior. It may not feel like this is the prior but I think this is straight up a mistake; I'd guess that people's experiences with learning skills here are skewed by 1) school, which heavily skews towards skills that can be learned through text, and 2) the selection effect of being LWers, liking the Sequences, etc.
I am not sure that it should be the prior for mental skills however. As you pointed out, scholastic skills are almost exclusively (and almost definitionally) attainable through text. I know that I can and have learned math, history, languages, etc., through reading, and it seems like that is the correct category for Looking, etc., as well (unless I am mistaken about the basic nature of Looking, which is certainly possible).
So I'm just not going to talk about A. And given that, the least painful way for me to maintain consistency is to not talk about any of the weaker variants either.
This is a sad circumstance; I wish it were otherwise, and I understand why you have made the choice you have, considering the (rather ironically) immediate and visceral response you are used to receiving.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2018-03-08T04:17:45.193Z · LW(p) · GW(p)
I am not sure we should endorse potential epistemic hazards with so little certainty about both costs and benefits.
I'm not sure what "endorse" means here. My position is certainly not "everyone should definitely do [circling, meditation, etc.]"; mostly what I have been arguing for is "we should not punish people who try or say good things about [circling, meditation, etc.] for being epistemically reckless, or allege that they're evil and manipulative solely on that basis, because I think there are important potential benefits worth the potential risks for some people."
I am not sure that it should be the prior for mental skills however. As you pointed out, scholastic skills are almost exclusively (and almost definitionally) attainable through text. I know that I can and have learned math, history, languages, etc., through reading, and it seems like that is the correct category for Looking, etc., as well (unless I am mistaken about the basic nature of Looking, which is certainly possible).
I still think you're over-updating on school. For example, why do graduate students have advisors? At least in fields like pure mathematics that don't involve lab work, it's plausibly because being a researcher in these fields requires important mental skills that can't just be learned through reading, but need to be absorbed through periodic contact with the advisor. Great advisors often have great students; clearly something important is being transmitted even if it's hard to write down what.
My understanding of CFAR's position is also that whatever mental skills it tries to teach, those skills are much harder to teach via text or even video than via an in-person workshop, and that this is why we focus so heavily on workshops instead of methods of teaching that scale better.
and I understand why you have made the choice you have, considering the (rather ironically) immediate and visceral response you are used to receiving.
I know, right? Also ironically, learning how to not be subject to my triggers (at least, not as much as I was before) is another skill I got from circling.
↑ comment by gjm · 2018-03-07T22:48:17.255Z · LW(p) · GW(p)
I'm glad you got over the initial triggered-ness. I did wonder about being even more explicit that I don't in fact think you guys are losing your minds, but worried about the "lady doth protest too much" effect.
I wasn't (in case it isn't obvious) by any means referring specifically to you, and in particular the "if it keeps happening I'm going to leave" wasn't intended to be anything like a quotation from you or any specific other person. It was intended to reflect the fact that a number of people (I think at least three) of what I called the Berkeley School have made comments along those general lines -- though I think all have taken the line you do here, that the problem is a norm of uncharitable pushback rather than being personally offended. I confess that the uncharitably-pushing-back part of my brain automatically translates that to "I am personally offended but don't want to admit it", in the same way as it's proverbially always correct to translate "it's not about the money" to "it's about the money" :-).
(For the avoidance of doubt, I don't in fact think that auto-translation is fair; I'm explaining how I came to make the error I did, rather than claiming it wasn't an error.)
[EDITED to replace "explicitly" in the second paragraph with "specifically", which is what I had actually meant to write; I think my brain was befuddled by the "explicit" in the previous paragraph. Apologies for any confusion.]
↑ comment by Qiaochu_Yuan · 2018-03-07T19:33:51.035Z · LW(p) · GW(p)
How much of this ability is needed in order to avoid taking strong mystical experiences at face value?
Not sure how to quantify this. I also haven't had a mystical experience myself, although I have experienced mildly altered states of consciousness without the use of drugs. (Which is not at all unique to dabbling in mysticism; you can also get them from concerts, sporting events, etc.) I imagine it's comparable to the amount of ability needed to avoid taking a strong drug experience at face value while having it (esp. since psychoactive drugs can induce mystical experiences).
In the comment I was replying to, you were saying that some rationalists are being too risk-averse. It seems like you're now backing off a bit and just talking about yourself?
I want to make a distinction between telling people what trade-offs I think they should be making (which I mostly can't do accurately, because they have way more information than I do about that) and telling people I think the trade-offs they're making are too extreme (based on my limited information about them + priors). E.g. I can't tell you how much your time is worth in terms of money, but if I see you taking on jobs that pay a dollar an hour I do feel justified in claiming that probably you can get a better deal than that.
I'm worried that the epistemic risks get stronger the further you go down this path.
Yes, this is probably true. I don't think you need to go very far in the mystical direction per se to get the benefits I want rationalists to get. Again, it's more that I think there are some important skills that it's worth it for rationalists to learn, and as far as I can tell the current experts in those skills are people who sometimes use vaguely mystical language (as distinct from full-blown mystics; these people are e.g. life coaches or therapists, professionally). So I want there to not be a meme in the rationality community along the lines of "people who use mystical language are crazy and we have nothing to learn from them," because I think people would be seriously missing out if they thought that.
We do have empirical evidence about how strong these risks are though
That's not clear to me because of blindspots. Consider the Sequences, for example: I think we can agree that they're in some sense psychoactive, in that people really do change after reading them. What kind of epistemic risks did we take on by doing that? It's unclear whether we can accurately answer that question because we've all been selected for thinking that the Sequences are great, so we might have shared blindspots as a result. I can tell a plausible story where reading the Sequences makes your life worse in expectation, in exchange for slightly increasing your chances of saving the world.
Similarly we all grow up in a stew of culture informed by various kinds of TV, movies, etc. and whatever epistemic risks are contained in those might be hidden behind blindspots we all share too. This is one of the things I interpret The Last Psychiatrist to have been saying.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-03-06T22:24:05.181Z · LW(p) · GW(p)
Why do you (or I, or anyone else) need mysticism (either of the sort you’ve talked about, or whatever Jordan Peterson talks about) in order to have motivation and meaning? In my experience, it is completely unnecessary to deviate even one micrometer from the path of epistemic rectitude in order to have meaning and motivation aplenty. (I, if anything, find myself with far too little time to engage in all the important, exciting projects that I’ve taken on—and there is a long queue of things I’d love to be doing, that I just can’t spare time for.)
(Perhaps partly to blame here is the view—sadly all too common in rationalist circles—that nothing is meaningful or worth doing unless it somehow “saves the world”. But that is its own problem, and said view quite deserves to be excised. We ought not compound that wrong by indulging in woo—two wrongs don’t make a right.)
Rationalists who are epistemically strong are very lucky: you can use that strength in a place where it will actually help you, like investigating mysticism, by defending you from making the common epistemic mistakes there. It should be exciting for rationalists to learn about powerful tools that carry epistemic risks, because those are precisely the tools that rationalists should be best equipped to use compared to other people! (Separately, I also think that these tools have actually improved my epistemics, by improving my ability to model myself and other people.)
You do a disservice to that last point by treating it as a mere parenthetical; it is, in fact, crucial. If the tools in question are epistemically beneficial—if they are truth-tracking—then we ought to master them and use them. If they are not, then we shouldn’t. Whether the tools in question can be used “safely” (that is, if one can use them without worsening one’s epistemics, i.e. without making one’s worldview more crazy and less correct); and, conditional on that, whether said tools meaningfully improve our grasp on reality and our ability to discover truth—is, in fact, the whole question. (To me, the answer very much seems to be a resounding “no”. What’s more, every time I see anyone—“rationalist” or otherwise—treat the question as somehow peripheral or unimportant, that “no” becomes ever more clear.)
Replies from: Qiaochu_Yuan, TAG↑ comment by Qiaochu_Yuan · 2018-03-06T23:03:40.290Z · LW(p) · GW(p)
Why do you (or I, or anyone else)
I have said this to you twice now and I am going to keep saying it: are we talking about whether mysticism would be useful for Said, or useful for people in general? It seems to me that you keep making claims about what is useful for people in general, but your evidence continues to be about whether it would be useful for you.
I consider myself to be making a weak claim, not "X is great and everyone should do it" but "X is a possible tool and I want people to feel free to explore it if they want." I consider you to be making a strong claim, namely "X is bad for people in general," based on weak evidence that is mostly about your experiences, not the experiences of people other than you. In other words, from my perspective, you've consistently been typical minding every time we talk about this sort of thing.
I'm glad that you've been able to find plenty of meaning and motivation in your life as it stands, but other people, like me, aren't so lucky, and I'm frustrated at you for refusing to acknowledge this.
You do a disservice to that last point by treating it as a mere parenthetical; it is, in fact, crucial. If the tools in question are epistemically beneficial—if they are truth-tracking—then we ought to master them and use them. If they are not, then we shouldn’t.
The parenthetical was not meant to imply that the point was unimportant, just that it wasn't the main thrust of what I was trying to say.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2018-03-06T23:27:40.917Z · LW(p) · GW(p)
I’m glad that you’ve been able to find plenty of meaning and motivation in your life as it stands, but other people, like me, aren’t so lucky, and I’m frustrated at you for refusing to acknowledge this.
Why do you say it’s luck? I didn’t just happen to find these things. It took hard work and a good long time. (And how else could it be? —except by luck, of course.)
I’m not refusing to acknowledge anything. I do not for a moment deny that you’re advocating a solution to a real problem. I am saying that your solution is a bad one, for most (or possibly even “all”) people—especially “rationalist”-type folks like you and I are. And I am saying that your implication—that this is the best solution, or maybe even the only solution—is erroneous. (And how else to take the comment that I have been lucky not to have to resort to the sort of thing you advocate, and other comments in a similar vein?)
So, to answer your question:
I have said this to you twice now and I am going to keep saying it: are we talking about whether mysticism would be useful for Said, or useful for people in general? It seems to me that you keep making claims about what is useful for people in general, but your evidence continues to be about whether it would be useful for you.
I, at least, am saying this: of course these things would not be useful for me; they would be detrimental to me, and to everyone, and especially to the sorts of people who post on, and read, Less Wrong.
Is this a strong claim? Am I very certain of it? It’s not my most strongly held belief, that’s for sure. I can imagine many things that could change my mind on this (indeed, given my background[1], I start from a place of being much more sympathetic to this sort of thing than many “skeptic” types). But what seems to me quite obvious is that in this case, firm skepticism makes a sensible, solid default. Starting from that default, I have seen a great deal of evidence in favor of sticking with it, and very little evidence (and that, of rather low quality) in favor of abandoning it and moving to something like your view.
So this is (among other reasons) why I push for specifics when people talk about these sorts of things, and why I don’t simply dismiss it as woo and move on with my life (as I would if, say, someone from the Flat Earth Society were to post on Less Wrong about the elephants which support the world on their backs). It’s an important thing to be right about. The wrong view seems plausible to many people. It’s not so obviously wrong that we can simply dismiss it without giving it serious attention. But (it seems to me) it is still wrong—not only for me, but in general.
[1] No, it’s not religion.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2018-03-07T00:22:15.859Z · LW(p) · GW(p)
I am going to make one more response (namely this one) and then stop, because the experience of talking to you is painful and unpleasant and I'd rather do something else.
And I am saying that your implication—that this is the best solution, or maybe even the only solution—is erroneous.
I don't think I've said anything like that here. I've said something like that elsewhere, but I certainly don't mean anything like "mysticism is the only solution to the problem of feeling unmotivated" since that's easy to disprove with plenty of counterexamples. My position is more like:
"There's a cluster of things which look vaguely like mysticism which I think is important for getting in touch with large and neglected parts of human value, as well as for the epistemic problem of how to deal with metacognitive blind spots. People who say vaguely mystical things are currently the experts on doing this although this need not be the case in principle, and I suspect whatever's of value that the mystics know could in principle be separated from the mysticism and distilled out in a form most rationalists would be happy with, but as far as I know that work mostly hasn't been done yet. Feeling more motivated is a side effect of getting in touch with these large parts of human value, although that can be done in many other ways."
↑ comment by TAG · 2018-03-08T12:18:26.159Z · LW(p) · GW(p)
(Perhaps partly to blame here is the view—sadly all too common in rationalist circles—that nothing is meaningful or worth doing unless it somehow “saves the world”.
It seems tautologous to me that if thing A is objectively more important than thing B, then, all other things being equal, you should be doing thing A. Mysticism isn't a good fit for the standard rationalist framing of "everything is ultimately about efficiently achieving arbitrary goals", but a lot of other things aren't either, and the framing itself needs justification.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2018-03-08T15:57:51.867Z · LW(p) · GW(p)
It seems tautologous to me that if thing A is objectively more important than thing B, then, all other things being equal, you should be doing thing A.
This certainly sounds true, except that a) there’s no such thing as “objectively more important”, and b) even if there were, who says that “saving the world” is “objectively more important” than everything else?
Mysticism isn’t a good fit for the standard rationalist framing of “everything is ultimately about efficiently achieving arbitrary goals”, but a lot of other things aren’t either, and the framing itself needs justification.
Well, I certainly agree with you there—I am not a big fan of that framing myself—but I don't really understand whether you mean to be disagreeing with me, here, or what. Please clarify.
Replies from: dxu↑ comment by dxu · 2018-03-08T16:22:04.663Z · LW(p) · GW(p)
Saving the world certainly does seem to be an instrumentally convergent strategy for many human terminal values. Whatever you value, it's hard to get more of it if the world doesn't exist. This point should be fairly obvious, and I find myself puzzled as to why you seem to be ignoring it entirely.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2018-03-08T17:57:11.388Z · LW(p) · GW(p)
Please note that you’ve removed the scare quotes from “saving the world”, and thus changed the meaning. This suggests several possible responses to your comment, all of which I endorse:
- It seems likely, indeed, that saving the world would be the most important thing. What’s not clear is whether ‘“saving the world”’ (as it’s used in these sorts of contexts) is the same thing as ‘saving the world’. It seems to me that it’s not.
- It’s not clear to me that the framework of “the world faces concrete threats X, Y, and Z; if we don’t ‘save the world’ from these threats, the world will be destroyed” is even sensible in every case where it’s applied. It seems to me that it’s often misapplied.
- If the world needs saving, is it necessary that all of everyone’s activity boil down to saving it? Is that actually the best way to save the world? It seems to me that it is not.
comment by Gaius Leviathan XV · 2018-03-04T12:00:14.266Z · LW(p) · GW(p)
If you really think Jordan Peterson is worth inducting into the rationalist hall of fame, you might as well give up the entire rationalist project altogether. The problem is not merely that Peterson is religious and a social conservative, but that he is a full-blown mystic and a crackpot, and his pursuit of "metaphorical truth" necessarily entails bad methodology and a lack of rigor that leads to outright falsehoods.
Take, for example, his stated belief that the ancient Egyptians and Chinese depicted the double helix structure of DNA in their art. (In another lecture he makes the same claim in the context of Hindu art.)
Or his statement suggesting that belief in God is necessary for mathematical proof.
Or his "consciousness creates reality" quantum mysticism.
Or his use of Jung, including Jung's crackpot paranormal concept of "synchronicity".
As a trained PhD psychologist, Peterson almost certainly knows he's teaching things that are unsupported, but keeps doing it anyway. Indeed, when someone confronted him about his DNA pseudo-archaeology, he started backpedaling about how strongly he believed it, though he also went on to speculate about whether ESP could explain it.
So does Peterson sincerely pursue what he sees as the truth? I don't pretend to know, but one must still consider that other mystics, religious seekers, and pseudoscientists presumably genuinely pursue the truth too, and end up misled. Merely pursuing the truth does not a rationalist make.
Also, it takes some Olympic-level mental gymnastics to claim that Peterson's statements about bill C-16 were correct on some metaphorical level when they were simply false on the literal level (i.e. the level that he was understood as intending at the time). It's only in hindsight, after the bill was passed and it became clear that Peterson was wrong, that his apologists started defending his claim as being "metaphorical" rather than literal. This is, of course, the same rhetorical strategy used by religious fundamentalists when confronted with evidence that their beliefs are wrong, and it is disappointing to see the same strategy being used on a rationalist blog.
edit: I should also clarify that Bill C-16 did not actually mandate the use of preferred pronouns - that was a misrepresentation of the bill on Peterson's part that many people took at face value. As mentioned earlier, the law passed last June and has not led to mandated pronoun use.
Replies from: alkjash, Erfeyah↑ comment by alkjash · 2018-03-04T18:55:59.259Z · LW(p) · GW(p)
To respond to this without diving into the culture wars demon thread:
(1) The DNA claim I agree is absurd, though not nearly as absurd as you make it out to be. Certainly Democritus proposed the existence of atoms long before we had anything like microscopes. It's not inconceivable that ancient people could have deduced mathematical efficiencies of the double helix structure empirically and woven that into mythological stories, and some of these mathematical efficiencies are relevant reasons for DNA being actually the way it is. I think the DNA claim is basically a rare false positive for an otherwise useful general cognitive strategy, see (4) below.
As for the backpedaling and the ESP: what you call backpedaling looks to me like "giving a more accurate statement of his credence on request," which is fine. The ESP thing is actually a statement about the brilliant and unexpected insights from psychedelics. I'm personally somewhat skeptical about this but many many rationalists have told me that LSD causes them to be life-changingly insightful and is exactly what I need in life.
(2) Belief in God is something that needs to be disentangled about Peterson; he always hesitates to state he "believes in God" for exactly the reason of being misinterpreted this way. The closest thing to what he means by "faith in God" that I can express is "having a terminal value," and that statement translates to "human beings cannot be productive (including create mathematics) without a terminal value," i.e. you cannot derive Ought from Is.
(3) Peterson is not confusing the Copenhagen Interpretation with Wheeler's interpretation, but saying he believes Wheeler's interpretation is the most metaphorically true one. Independently of the quantum mechanics, which I don't think he has a strong side in, he's saying something like "conscious attention is so powerful as a tool for thinking that it might as well literally transform reality." Then the quantum mechanics shenanigans are basically him saying "oh look, it would be pretty funny and not altogether surprising if this were literally true."
(4) The synchronicity point is exactly (I claim) what drove Scott to write UNSONG and focus so much on puns and kabbalistic interpretations. The basic claim is that "the world is self-similar at every level of organization" is a really useful lens (see e.g. my post The Solitaire Principle) and to find these self-similarities you need to pay way way more attention to aesthetic coincidences than they seem to deserve. Again, as far as Peterson is concerned "being a useful lens" is the correct definition of "true" for metaphorical statements.
Let me frame the general point this way. Eliezer says that beliefs should pay rent in anticipated experiences. Peterson says that beliefs should pay rent in guidelines for action. That is, to determine if something is true, you should update disproportionately on evidence of the form "I tried to live as if this was true and measured what happened."
Replies from: ozymandias, MakoYass, TurnTrout, dsatan↑ comment by ozymandias · 2018-03-04T19:43:18.706Z · LW(p) · GW(p)
Death of the Author, but iirc Scott mentioned the point of the Kabbalah in Unsong is the exact opposite-- you can connect anything to anything if you try hard enough, so the fact that you can is meaningless.
Of course, this shows the exact problem with using fiction as evidence.
Replies from: alkjash, habryka4, vedrfolnir↑ comment by alkjash · 2018-03-04T21:50:57.209Z · LW(p) · GW(p)
Sorry, I didn't mean to imply that Scott believed the thing. What I think is that he has particularly strong subtle-pattern-noticing ability and this explains both the contents of UNSONG and the fact that he's such a great and lucid writer.
you can connect anything to anything if you try hard enough, so the fact that you can is meaningless.
This is a sort of Fallacy of Gray. Some connections are much stronger than others, and connections that jump out between core mythological structures that have lasted across thousands of years deserve attention.
Replies from: habryka4↑ comment by habryka (habryka4) · 2018-03-05T01:27:47.988Z · LW(p) · GW(p)
Yes, but I think Scott is very wary of exactly this ability (his and other people's) to draw connections between mostly unrelated things, and if he thinks that it's still an important part of rationality, my model of Scott still thinks that skill should be used with utmost care, and that its misapplication is the reason for a large part of the weird false things people come to believe.
↑ comment by habryka (habryka4) · 2018-03-04T19:48:06.639Z · LW(p) · GW(p)
Yeah, that was also my interpretation.
↑ comment by vedrfolnir · 2018-03-04T19:50:08.355Z · LW(p) · GW(p)
Oh, crypto-Discordianism. I haven't read Unsong, but does the Law of Fives show up anywhere?
↑ comment by mako yass (MakoYass) · 2018-03-05T02:05:38.897Z · LW(p) · GW(p)
1) What mathematics are you referring to? Does Peterson know it? I'd always just assumed that DNA is helical because... it is connected by two strands, and those strands happen to rotate a bit when they connect to each base pair, due to some quirk of chemistry that definitely isn't something you'd ever want to discuss in art unless you knew what DNA was. It's conceivable that some ancient somewhere did somehow anticipate that life would contain strands of codings, but why would they anticipate that every strand would be paired with a mirror?
2) But telos has nothing to do with deities, and belief/intuition that it does is a really pernicious delusion. What is this supposed to explain or excuse? It's just another insane thing that a person would not think if they'd started from sound premises.
I never really doubted that there would be some very understandable, human story behind how Jordan Peterson synthesised his delusions. I am not moved by hearing them.
↑ comment by TurnTrout · 2018-03-04T20:57:51.007Z · LW(p) · GW(p)
The DNA claim I agree is absurd, though not nearly as absurd as you make it out to be. Certainly Democritus proposed the existence of atoms long before we had anything like microscopes. It's not inconceivable that ancient people could have deduced mathematical efficiencies of the double helix structure empirically and woven that into mythological stories, and some of these mathematical efficiencies are relevant reasons for DNA being actually the way it is. I think the DNA claim is basically a rare false positive for an otherwise useful general cognitive strategy, see (4) below.
It's also not literally inconceivable that someone in Egypt formulated and technically solved the alignment problem, but I wouldn't put odds on that of more than 1×10−7. Yes, I am prepared to make a million statements with that confidence and not expect to lose money to the gods of probability.
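To make the arithmetic behind that calibration claim explicit, here is a minimal sketch of my own (assuming a million independent claims, each held at the stated error probability):

```python
# Expected number of wrong claims when making 1,000,000 statements,
# each assigned an error probability of 1e-7 (independence assumed).
n_statements = 1_000_000
p_error = 1e-7
expected_errors = n_statements * p_error
print(expected_errors)  # 0.1 -- i.e. most likely zero misses across the whole million
```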
Belief in God is something that needs to be disentangled about Peterson, he always hesitates to state he "believes in God" for exactly the reason of being misinterpreted this way. The closest thing to what he means by "faith in God" that I can express is "having a terminal value," and that statement translates to "human beings cannot be productive (including create mathematics) without a terminal value," i.e. you cannot derive Ought from Is.
This seems motte-and-bailey. If that's what he means, shouldn't he just advance "terminal values are necessary to solve Moore's open question"?
I feel like throughout the comments defending Peterson, the bottom line has been written first and everything else is being justified post facto.
Replies from: alkjash↑ comment by alkjash · 2018-03-04T21:41:10.247Z · LW(p) · GW(p)
It's also not literally inconceivable that someone in Egypt formulated and technically solved the alignment problem, but I wouldn't put odds on that of more than 1×10−7. Yes, I am prepared to make a million statements with that confidence and not expect to lose money to the gods of probability.
When I say inconceivable I don't mean literally inconceivable. People have done some pretty absurd things in the past. What is your subjective probability that the most prolific mathematician of all time did half of his most productive work after going blind in both eyes?
I feel like throughout the comments defending Peterson, the bottom line has been written first and everything else is being justified post facto.
I can't speak for others, but I have spent hundreds of hours thinking about Peterson's ideas and formulating which parts I agree with and why, including almost every argument that has been put forth so far. Me going back retrospectively and extracting the reasons I made each decision to believe what I believe will look from the outside just like "writing down the bottom line and justifying things post facto." In general I don't think the bottom line fallacy is one that you're supposed to use on other people's reasoning.
Replies from: ozymandias, TurnTrout↑ comment by ozymandias · 2018-03-05T01:16:28.898Z · LW(p) · GW(p)
What is your subjective probability that the most prolific mathematician of all time did half of his most productive work after going blind in both eyes?
That's surprising but not that surprising: Milton wrote much of his best poetry while blind, and Beethoven was famously deaf. Conversely, I cannot think of a single unambiguous example of a mythological motif encoding a non-obvious scientific truth (such as that nothing can go faster than light, or that all species evolved from a single-celled organism, or that the stars are trillions and trillions of miles away), so I think this is very very unlikely.
↑ comment by TurnTrout · 2018-03-04T22:22:36.485Z · LW(p) · GW(p)
I wasn’t saying that unusual things can’t happen. I should have made myself clearer - what I was getting at was with respect to claims that ancient societies managed to spontaneously derive properties of things they were, in fact, literally incapable of observing. That smells like a second law of thermodynamics-violating information gain to me.
The assertion I’m making is not that Peterson is bad, or that he never has amazing insights, etc. My point is purely with respect to putting him on Eliezer’s level of truth-seeking and technical rationality. Having been wrong about things does not forever disbar you from being a beisutsukai master. However, if one is wrong about important things, becomes aware of the methods of rationality (as I imagine he has), thinks carefully, and still retains their implausible beliefs - that should be enough to indicate they aren’t yet on the level required.
On the other hand, I notice I am confused and that I am disagreeing with people whom I respect very much. I’m happy to update on any new information, but I have a hard time seeing how I could update very far on this particular claim, given that he is indeed quite religious.
Replies from: alkjash↑ comment by alkjash · 2018-03-04T22:56:50.730Z · LW(p) · GW(p)
Thank you for being charitable. =)
Regarding the DNA claim: I think what I'm saying is much weaker than what you think I'm saying.
For example: ancient people discovered how to store information sequentially in a book. DNA stores information sequentially. This is not surprising. Why would it be inconceivable that the double helix structure, likewise, isn't something uniquely weird about DNA?
My steelman of Peterson's claim about DNA is not that ancient people knew what DNA was or were making any attempt to map it, but that there might be some underlying mathematical reason (such as high compressibility) that the double helix structure is amenable to information storage, and also simultaneously makes it a good mythological motif. This seems to be only 1 in 100 or 1 in 1000 surprising to me.
Here's a bit of what it means to be "real" in Peterson's pragmatic sense, expanding on another comment:
Atoms are real. Numbers are real. You might call numbers a "useful metaphor," but numbers are more real than atoms. Part of what I mean by this is: I would be more surprised if the universe didn't obey simple mathematical laws than if it were not made out of atoms. Another part of what I mean is: if I had to choose between knowing about numbers and knowing about atoms, knowing about numbers would be more powerful in guiding me through life. And this is the pragmatic definition of truth.
At some point in the distant past people believed the imaginary unit i was not a "real" number. At first, it was introduced as a "useful shorthand" for a calculation made purely in the reals. People noticed, for example, that the easiest way to solve certain cubic equations was to go through these imaginary numbers, even if the answer you end up with is real.
Eventually, the concept of i became so essential and simplified so many other things (e.g. every polynomial has a root) that its existence graduated from "useful metaphor" to "true." It led to ridiculous things like taking complex exponents, but somehow phenomena like Euler's identity e^(iπ) = −1 made so much internal sense that the best explanation was that i is as real as anything can be. Or if it isn't, we might as well treat it as if it is. There is some underlying metaphorical reality higher than technical truth. I could explain exactly what set of physical patterns i is a shorthand for, but that would be putting the cart before the horse.
Metaphorical truth is the idea that the patterns in human behavior recorded in our mythological stories are more true than literal truth, in the same way that e^(iπ) = −1 is more true than "the world is made out of atoms." This is the right way to overload the concept of truth for stories.
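To illustrate the sense in which i pays rent, here is a sketch of my own, using the classic Bombelli example x³ = 15x + 4 as a stand-in (my choice of cubic, not necessarily the one alkjash had in mind): Cardano's formula only reaches the real root 4 by detouring through complex numbers, and the same complex exponentials give Euler's identity.

```python
import cmath

# Cardano's formula for the depressed cubic x^3 = p*x + q with p = 15, q = 4
# (Bombelli's classic example). The discriminant term is negative, so the
# formula passes through complex numbers even though the root it returns is real.
p, q = 15, 4
disc = cmath.sqrt((q / 2) ** 2 - (p / 3) ** 3)   # sqrt(4 - 125) = 11j
u = (q / 2 + disc) ** (1 / 3)                    # principal cube root of 2 + 11i, i.e. 2 + i
v = (q / 2 - disc) ** (1 / 3)                    # principal cube root of 2 - 11i, i.e. 2 - i
x = u + v
print(x)                  # ~ (4+0j): a real answer reached via imaginary numbers
print(x**3 - p*x - q)     # ~ 0, so x really does solve x^3 = 15x + 4

# The "internal sense" of complex exponents: Euler's identity.
print(cmath.exp(1j * cmath.pi) + 1)   # ~ 0, up to floating-point error
```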
Replies from: TurnTrout↑ comment by TurnTrout · 2018-03-04T23:13:06.991Z · LW(p) · GW(p)
I still don’t quite grasp the DNA point, even after multiple reads - how would compressibility make it show up in mythos? I can’t find any non-reddit / youtube source on his statements (Freedom is keeping a patient eye on my browsing habits, as always).
I don’t disagree that mathematical truth is, in a certain sense, "higher" than other truths.
I’d just like to point out that if I could consistently steelman Eliezer’s posts, I’d probably be smarter and more rational than he (and no, I cannot do this).
Replies from: alkjash↑ comment by alkjash · 2018-03-04T23:35:50.283Z · LW(p) · GW(p)
For the DNA point, I'm drawing on some mathematical intuition. Here are two examples:
What if I told you that ancient Egyptian civilizations had depictions of the hyperbolic cosine even though they never came close to discovering the constant e? Well, the hyperbolic cosine is also called the catenary, which is the not-quite-parabola shape that all uniformly-weighted chains make if held from their two ends. So of course this shape was everywhere!
What if I told you that a physicist who had never studied prime number theory recognized the distribution of the zeros of the Riemann zeta function (which had escaped the attention of number theorists)? That is basically the story of how Dyson and Montgomery connected random matrix theory to the zeta zeros.
The point is that mathematically interesting structures show up in not-obviously-connected ways. Now if I could tell you what exactly the structural property of DNA was, then I would actually believe Peterson's claim about it, which I don't. But at least a start to this question is: suppose a thousand genetic life forms evolved independently on a thousand planets. How many mathematically different information storage structures like the double helix would appear? Probably not more than 10, right? Most likely there's something canonically robust and efficient about the way information is packed into DNA molecules.
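A quick numerical check of the catenary point, as a minimal sketch of my own (assuming the standard unit-parameter catenary equation y'' = √(1 + (y')²) for a uniform hanging chain):

```python
import numpy as np

# A uniform hanging chain satisfies y'' = sqrt(1 + y'^2) (with unit parameter).
# Check numerically that y = cosh(x) satisfies this ODE -- which is why the
# hyperbolic cosine shows up wherever chains hang, with or without the constant e.
x = np.linspace(-2.0, 2.0, 2001)
y = np.cosh(x)

dy = np.gradient(y, x)      # numerical first derivative, ~ sinh(x)
d2y = np.gradient(dy, x)    # numerical second derivative, ~ cosh(x)
residual = d2y - np.sqrt(1.0 + dy**2)

# Interior points only; finite differences are sloppy at the endpoints.
print(np.max(np.abs(residual[5:-5])))   # small (~1e-5 or less): cosh is the catenary
```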
Re: steelmanning. Really what I'm doing is translating Peterson into language more palatable to rationalists. Perhaps you could call this steelmanning.
Replies from: TurnTrout↑ comment by TurnTrout · 2018-03-04T23:49:32.398Z · LW(p) · GW(p)
How many mathematically different information storage structures like the double helix would appear? Probably not more than 10, right? Most likely there's something canonically robust and efficient about the way information is packed into DNA molecules.
I'd agree with this claim, but it feels pretty anthropically-true to me. If it weren't the case, we wouldn't be able to exist.
Really what I'm doing is translating Peterson into language more palatable to rationalists. Perhaps you could call this steelmanning.
Once understood, chains of reasoning should (ideally) be accepted or rejected regardless of their window dressing. I may be turned off by what he says due to his mannerisms / vocabulary, but once I take the time to really understand what he's claiming... If I still find his argumentation lacking, then rephrasing it in an actually-more-defensible way is steelmanning. I haven't taken that time (and can't really, at the moment), but I suspect if I did, I'd still conclude that there is no steel strong enough to construct a beisutsukai out of someone who believes in god.
Replies from: alkjash↑ comment by alkjash · 2018-03-05T00:03:01.681Z · LW(p) · GW(p)
The crux of the matter is that he believes in God, then? I'll also let him speak for himself, but as far as I can tell he doesn't, by your definition, believe in God. Furthermore, I've always been an atheist and not changed any object-level beliefs on that front since I can remember, but I think that with respect to Peterson's definitions I also believe in God.
↑ comment by dsatan · 2018-03-05T03:28:48.446Z · LW(p) · GW(p)
The parent commenter is doing some pretty serious cherry-picking. 2) and 3) can basically be ignored. 2) comes from a 2013 deleted tweet which the parent commenter has pulled off an archive, and 3) from a 2011 debate which is in any case misrepresented by the parent commenter. He never lays out something that can unambiguously be taken to be quantum mysticism, even though he starts out talking about Copenhagen. "Consciousness creates reality" does actually correspond to a reasonable position which can be found by being a little charitable and spending some time trying to interpret what he says. 1) and 4) depend on his rather complex epistemology; "I really do believe this though it is complicated to explain," he prefaces the DNA comment with.
I would be much more concerned if something like 2) were something he repeated all the time rather than promptly deleted, and was central to some of his main theses.
↑ comment by Erfeyah · 2018-03-04T21:48:59.689Z · LW(p) · GW(p)
I would like to focus on a minor point in your comment. You say:
So does Peterson sincerely pursue what he sees as the truth? I don't pretend to know, but one must still consider that other mystics, religious seekers, and pseudoscientists presumably genuinely pursue the truth too, and end up misled. Merely pursuing the truth does not a rationalist make.
The structuring of your sentence implies a worldview in which mystics and religious seekers are the same as pseudoscientists and are obviously 'misled'. Before that you are putting the word 'mystic' next to 'crackpot' as if they are the same thing. This is particularly interesting to me because an in-depth rational examination of mystical material, in conjunction with some personal empirical evidence, indicates that mystical experiences exist and have a powerful transformative effect on the human psyche. So when I hear Peterson taking mysticism seriously I know that I am dealing with a balanced thinker who hasn't rejected this area before taking the necessary time to understand it. There are scientists and pseudo-scientists, religious seekers and pseudo-religious seekers and, maybe, even mystics and pseudo-mystics. I know this is hard to even consider, but how can you rationally assess something without taking the hypothesis seriously?
Replies from: TurnTrout↑ comment by TurnTrout · 2018-03-05T02:18:53.433Z · LW(p) · GW(p)
mystical experiences exist
That's a pretty strong claim. Why would your priors favor "the laws of physics allow for mystical experiences" over "I misinterpreted sensory input / that's what my algorithm feels like from the inside, I guess"?
Replies from: Vaniver, dsatan, Erfeyah↑ comment by Vaniver · 2018-03-05T04:02:01.945Z · LW(p) · GW(p)
Why would your priors favor "the laws of physics allow for mystical experiences" over "I misinterpreted sensory input / that’s what my algorithm feels like from the inside, I guess"?
Why are you contrasting "mystical experiences" and "that's what my algorithm feels like from the inside"? It's like claiming consciousness has to be non-material.
Replies from: TurnTrout↑ comment by TurnTrout · 2018-03-05T04:19:01.764Z · LW(p) · GW(p)
I don’t follow. Mystical experience implies ontologically basic elements outside the laws of physics as currently agreed upon. I’m asserting that mystical experiences are best explained as features of our algorithms.
Replies from: Qiaochu_Yuan, TAG↑ comment by Qiaochu_Yuan · 2018-03-06T21:12:01.843Z · LW(p) · GW(p)
Mystical experience implies ontologically basic elements outside the laws of physics as currently agreed upon.
Why? I don't need to have any particular interpretation of a mystical experience to have a mystical experience. Map-territory errors are common here but they certainly aren't inevitable.
Replies from: TurnTrout↑ comment by TurnTrout · 2018-03-06T21:23:37.097Z · LW(p) · GW(p)
I suspect I have a different understanding of "mystical experience" than you do - how would you define it?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2018-03-06T22:16:24.936Z · LW(p) · GW(p)
There's a cluster of experiences humans have had throughout history, which they've talked about using words like "seeing God" or "becoming one with the universe" (but again, let's carefully separate the words from a particular interpretation of the words), and that have been traditionally associated with religions, especially with people who start religions. They can be induced in many ways, including but not limited to meditation, drugs, and sex. Fuller description here.
Replies from: TurnTrout↑ comment by TAG · 2018-03-08T11:45:34.589Z · LW(p) · GW(p)
Mystical experience implies ontologically basic elements outside the laws of physics as currently agreed upon.
I don't see why. "Oneness with the universe" is a fact implied by physicalism -- we are not outside observers. Conscious awareness of OWTU is not implied by physicalism, but that's because nothing about consciousness is implied by physicalism.
↑ comment by dsatan · 2018-03-05T03:27:58.474Z · LW(p) · GW(p)
When is something a misinterpretation of sensory input? When the interpretation is not rendered in terms of the laws of physics (which is what your alternative implies), or...?
A better hypothesis is "in a metaphysics which takes Being as primary, which is not in any way contrary to science (since science does not imply a metaphysics like scientific realism or reductive and eliminative materialism), mystical experience is permissible and not contrary to anything we know".
Replies from: TurnTrout↑ comment by TurnTrout · 2018-03-05T04:03:03.156Z · LW(p) · GW(p)
That’s a long way of saying "theory with a strictly greater complexity and exponentially smaller prior probability than reductionism"
Replies from: dsatan↑ comment by dsatan · 2018-03-06T07:26:04.489Z · LW(p) · GW(p)
Crushing what I say into some theory of bayesian epistemology is a great way of destroying the meaning of what I say.
But to try to fit it into your theory without losing as much information as your attempt: humans, by the evolved structure of our brains, especially by the nature of human perception and decision making, have a built in ontology - the way we cut out things in our perception as things, and the way we see them as being things which are relevant to our involvements in the world. You can't get rid of it, you can only build on top of it. Mistakenly taking reductionistic materialism as ontology (which is not an action you can take short of completely changing the fundamental structure of your brain) only adds its complexity on top of the ontology that is already there. It's like using a windows emulator to do everything instead of using the OS the emulator is running in.
If you tried to turn your statement into an actual mathematical statement, and tried to prove it, you would see that there is a large gap between the mathematics and the actual psychology of humans, such as yourself.
Replies from: TurnTrout↑ comment by TurnTrout · 2018-03-06T15:41:58.187Z · LW(p) · GW(p)
I wasn’t trying to be rude, I just thought you were claiming something else entirely. My apologies.
I still don't understand the point you're making with respect to mystical experiences, and I'd like to be sure I understand before giving a response.
↑ comment by Erfeyah · 2018-03-05T23:28:50.656Z · LW(p) · GW(p)
I can offer a couple of points on why I consider it a subject of great significance.
[1] On a personal level, which you are of course free to disregard as anecdotal, I had such an experience myself. Twice, to be precise. So I know that the source is indeed experiential ("mystical experiences exist"), though I would not yet claim that they necessarily point to an underlying reality. What I would claim is that they certainly need to be explored and not disregarded as a 'misinterpretation of sensory input'. My personal observation is that (when naturally occurring, not chemically induced!) they accompany a psychological breakthrough through an increase in experiential (in contrast to rational) knowledge.
[2] Ancient foundational texts of major civilizations have a mystical basis. Good examples are the Upanishads and the Tao Te Ching, but the same experiences can be found in Hebrew, Christian and Sufi mystics, the Buddha, etc. A look at the evidence will immediately reveal that the experience is common among all these traditions and also seems to have been reached independently. We can then observe that this experience is present in the most ancient layers of our mythological structures. The attempt to abstract the experience into an image can be seen, for example, in symbols such as the Uroboros which point to the underlying archetype. The Uroboros, Brahman and the Tao are all different formulations of the same underlying concept. If we then take seriously Peterson's hypothesis about the basis of morality in stories, things get really interesting; but I am not going to expand on that point here.
These are by no means the only reasons. Indeed the above points seem quite minor when viewed through a deeper familiarity with mystical traditions. But we have to start somewhere I guess.
comment by cousin_it · 2018-03-03T23:39:04.735Z · LW(p) · GW(p)
Jordan Peterson certainly has a strong and appealing idea about what went wrong. But I think Eric Hoffer (a similar character half a century ago) already answered that question pretty much. And when I try to find examples that put their views into contrast, Hoffer wins.
For example, in this video Peterson gives one of his most powerful phrases: "Don't use language instrumentally!" The idea is that, if you allow yourself to twist your words away from what's perfectly truthful, you gradually begin to think like that too, leading straight to the horrors of fascism and communism. It all sounds very convincing.
But then I remember every product manager I've had as a programmer. They were all happy, well-adjusted people who had this incredible skill at using language instrumentally - convincing various decision makers to go along, sometimes not in perfectly honest ways. They all seemed to have learned that skill with their mother's milk, and it hasn't hurt them at all!
Hoffer wouldn't be surprised by that. His only message is that you should have self-esteem, and not join any mass movements to compensate for lack of self-esteem. If you can manage that, it's okay to be a liar or cynic or product manager - things still won't go as wrong as they went in the 20th century.
I read Hoffer's The True Believer in my early twenties, along with Solzhenitsyn, Orwell and other writers who tried to figure out what went wrong. Many of them had great ideas, but became kind of maximalist about them. Hoffer's book stands out because it's measured. It gives a diagnosis and cure that's neither too little nor too much. You don't have to be perfect as Peterson says (or as Solzhenitsyn says...) Just don't join your local mob of fanatics, and other than that, do what you like.
Replies from: Jacobian↑ comment by Jacob Falkovich (Jacobian) · 2018-03-04T01:12:43.349Z · LW(p) · GW(p)
I decided to squeeze my discussion of Hoffer's frustration into a single paragraph that includes a link to an essay about True Believer, so it wouldn't take over a post that was already getting long. If you've read Lou Keep's review on Samzdat, do you think it's worth spending the time to read True Believer itself?
As a big proponent of Horseshoe Theory, I actually find Peterson very disappointing in this regard. He treats the Red mass movement (the alt-right) as misguided souls who just need a little nudge and a discount to selfauthoring.com to become great citizens. But similar young people who joined the Blue mass movement because of a contingency like skin color are, to JBP, evil fanatics in the service of a murderous ideology. Of course, as Hoffer notes, creating a scary boogeyman is a great way to fuel the fire of the worst kinds of mass movements. I think Peterson is making the alt-right worse, not defusing them.
I find self-help Peterson to be useful, Bible study Peterson to be interesting, but culture war Peterson to be net harmful.
Replies from: cousin_it, Chris_Leong↑ comment by cousin_it · 2018-03-04T01:33:16.430Z · LW(p) · GW(p)
If you’ve read Lou Keep’s review on Samzdat, do you think it’s worth spending the time to read True Believer itself?
Yes! Hoffer's book is as clear as humanly possible, while Lou Keep's review is more impressionistic.
I think reading second-hand impressions of Hoffer is like reading second-hand impressions of Machiavelli. There's no way they can come close to the real thing.
Replies from: Jacobian, Chris_Leong↑ comment by Jacob Falkovich (Jacobian) · 2018-03-04T02:36:48.980Z · LW(p) · GW(p)
Oh man, I guess this means I have to actually read The Prince now too. I should have known better than to ask!
↑ comment by Chris_Leong · 2018-03-04T05:39:28.179Z · LW(p) · GW(p)
Do you have any opinions on what comes closest if I just want a quick summary? Unfortunately, I found Samzdat's article hard to follow due to the style.
Replies from: cousin_it↑ comment by cousin_it · 2018-03-04T10:53:48.418Z · LW(p) · GW(p)
The summary on Wikipedia is good.
↑ comment by Chris_Leong · 2018-03-04T05:43:11.871Z · LW(p) · GW(p)
I suppose that many people are less worried about the alt-right because they are very much a fringe movement, even on the right and even with the Trump presidency. But beyond that, his opinion has probably been shaped by how he has had more success in turning people away from the alt-right than away from ideological forms of social justice (he has talked about how he has received many letters from people who said that they were drawn towards the alt-right until they started reading his content).
comment by Teja Prabhu (0xpr) · 2018-03-03T21:45:13.897Z · LW(p) · GW(p)
I've had trouble making up my mind about Jordan Peterson, and this post was enormously helpful in clarifying my thinking about him. Also:
A new expansion just came out for the Civilization 6 video game, and instead of playing it I’m nine hours into writing this post and barely halfway done. I hope I’m not the only one getting some meaning out of this thing.
This resulted in me updating heavily on the amount of effort involved in writing great content.
Replies from: Jacobian↑ comment by Jacob Falkovich (Jacobian) · 2018-03-04T00:45:19.327Z · LW(p) · GW(p)
This post took about 13 hours, and I didn't even edit the first draft much. Just imagine how long great content would take!
On the other hand, from a couple of conversations I've had with Scott he seems to write much faster and with almost no editing needed. Something like this might take him 3-4 hours in a single sitting. I've only been writing seriously for a couple of years - maybe writers get faster with time, and maybe Scott is just in a different class in terms of talent.
Replies from: arundelo↑ comment by arundelo · 2018-03-04T08:44:25.590Z · LW(p) · GW(p)
George H. Smith said something once, maybe in an email discussion group or something. I can't find it now but it was something along the lines of:
When he first started writing he did the standard thing of writing a first draft then rewriting it. But after spending years writing a large quantity of (short) complete pieces, many of them on a deadline, he got so he could usually just write it right the first time through—the second editing pass was only needed to fix typos.
Replies from: arundelo↑ comment by arundelo · 2018-03-06T04:10:37.885Z · LW(p) · GW(p)
I think I found what I was thinking of! It wasn’t George H. Smith, it was Jeff Riggenbach. Smith published it in a “short-lived online zine” of his and reposted it here. (It’s a review of Ayn Rand’s The Art of Nonfiction. Be warned that the formatting isn’t quite right—block quotes from the book are not formatted differently from the text of the review.)
A couple excerpts:
“Do not make time a constant pressure,” she cautions. “Do not judge your progress by each day; since the production of any written material is irregular, nobody but a hack can be sure how much he will produce in a given day”.
Apparently, then, every newspaperman or –woman, every columnist, every reviewer, every editorial writer who ever had to meet a daily deadline, is a hack, writing only what comes easily. Well, as one of their number, I’ll testify that, yes, hacks they assuredly are, but they do not write only what comes easily. From the late 1970s to the mid-1990s, I earned some portion of my income (anywhere from around ten percent to around eighty percent, depending on the year) by writing for newspapers. I wrote a variety of things, but probably ninety percent of my output was editorials, book reviews, and Op Ed articles (opinion articles that appear on the page Opposite the Editorial page).
---
Writers who face tight deadlines on a regular basis have no time for extensive revision and editing. They have to get it right the first time.
comment by Jacob Falkovich (Jacobian) · 2018-03-03T19:57:01.860Z · LW(p) · GW(p)
Commenting note: this post is subject to LesserWrong frontpage moderation rules, but I want to offer my own guidelines, in line with Putanumonit policy.
I'm all up for Crocker's Rules - if you want to call me a moron please don't waste space sugarcoating it. Jordan Peterson is probably also beyond caring if you insult him. However, this doesn't extend to anyone else mentioned in the post (like Scott A and like Scott A), or to any other commenters.
With that said - don't be a fool. Make some effort not to confuse my own opinions with those of Peterson's. Think twice before starting a discussion of some of the object level issues (e.g. pronouns), then think a third time. If you really feel like saying something that may get you in trouble on LW, I encourage you to comment on Putanumonit, where I keep a very light touch on moderation.
comment by alkjash · 2018-03-03T21:02:12.819Z · LW(p) · GW(p)
Meta-comment for authors: take some time after each post to update on how actually contrarian your positions are. As far as I can tell the response to Jordan Peterson on LessWrong has been uniformly positive.
I sense that there are a lot of reasonable people with good ideas like yourself who feel reluctant to share "controversial" views (on e.g. fuzzy System 1 stuff) because they feel like embattled contrarians. Of course, this is probably correct in whatever other social sphere you get your training data from. However, the whole "please be reasonable and charitable to these views" disclaimer gets old fast if people have been receiving similar views well in the past.
tl;dr: on LessWrong, you are probably less contrarian than you think.
Replies from: vedrfolnir, Charlie Steiner↑ comment by vedrfolnir · 2018-03-04T13:31:29.767Z · LW(p) · GW(p)
I've gotten a much more negative reception to fuzzy System 1 stuff at IRL LW meetups than online -- that could be what's going on there.
And it's possible for negative reception to be more psychologically impactful and less visible to outsiders than positive reception. This seems especially likely for culture war-adjacent topics like Jordan Peterson. Even if the reception is broadly positive, there might still be a few people who have very negative reactions.
(This is why I'm reluctant to participate in the public-facing community nowadays -- there were a few people in the rationalist community who had very negative reactions to things I said, and did things like track me down on Facebook and leave me profanity-laden messages, or try to hound me out of all the circles they had access to. With a year or two of hindsight, I can see that those people were a small minority and this wasn't a generally negative reaction. But it sure felt like one at the time.)
Replies from: habryka4↑ comment by habryka (habryka4) · 2018-03-04T18:11:37.639Z · LW(p) · GW(p)
I just want to make it clear that sending insulting or threatening messages to other users on the page is not cool, and that if anyone else ever experiences that, please reach out to me. I will be able to give you a bit of perspective and potentially take action against the person sending the messages (if they've done that repeatedly).
↑ comment by Charlie Steiner · 2018-03-04T18:45:15.596Z · LW(p) · GW(p)
I don't have a positive reaction to Jordan Peterson - I wouldn't call liking him contrarian, but it's at least controversial. To me, he just seems like a self-help media personality shaped by slightly different selection pressure.
Replies from: vedrfolnir↑ comment by vedrfolnir · 2018-03-04T20:04:22.244Z · LW(p) · GW(p)
Jordan Peterson is controversial, but "controversial" is an interesting word. Is Paul Krugman controversial?
comment by dsatan · 2018-03-05T01:03:08.911Z · LW(p) · GW(p)
All of those frameworks are fake in the sense that introvert isn’t a basic physical entity the same way an up quark is.
The reductive materialism implicit in this is as fake as introverts - possibly even more fake because unless you have a particle accelerator on hand, "everything is made of quarks" translates 100% to hypotheticals rather than anything you can actually do or see in the world; and in the presence of a particle accelerator, that 100% is reduced by epsilon.
Replies from: dsatan↑ comment by dsatan · 2018-03-05T03:41:15.107Z · LW(p) · GW(p)
I think one of the biggest things Peterson has to offer is a way out of many of the fake frameworks that rationalists hold, by offering a fake framework which takes Being as primary, and actually being able to deal with Being directly (which becomes possible with a fake framework which permits the concept of being) is a pathway to Looking.
Replies from: vedrfolnir↑ comment by vedrfolnir · 2018-03-07T13:42:13.416Z · LW(p) · GW(p)
What does "deal with Being directly" mean?
comment by John_Maxwell (John_Maxwell_IV) · 2018-03-04T01:48:24.972Z · LW(p) · GW(p)
I think that Peterson overgeneralizes about gay men (and what about lesbians?), and he’s wrong about the impact of gay marriage on society on the object level. I’m also quite a fan of promiscuity, and I think it’s stupid to oppose a policy just because “neo-Marxists” support it.
Any political issue can be analyzed on the policy level or on the coalition level. Gay marriage seems like an example of an issue that has less to do with policy and more to do with coalitions. If gay marriage was about policy, people would not draw a meaningful distinction between marriage and a legally equivalent civil union. But in practice people draw a huge distinction.
That's not to say Peterson's analysis is correct. Gay marriage was first championed by gay conservative Andrew Sullivan, and in some ways it could be seen as a compromise position between the left and the right. As Sullivan put it in his 1989 essay:
Much of the gay leadership clings to notions of gay life as essentially outsider, anti-bourgeois, radical. Marriage, for them, is co-optation into straight society.
Note: In Sullivan's essay, he makes policy-related points. I agree that as a policy wonk, Sullivan sees gay marriage as a policy issue. I'd argue most others don't see it that way.
On a coalition level, I think there's an alternate history where gay marriage catches on among conservatives first, and those on the left resist it on that basis. And this might even be the right strategic move for the left, from a coalition power perspective.
Replies from: gjm↑ comment by gjm · 2018-03-04T19:11:26.745Z · LW(p) · GW(p)
In what sense was gay marriage "first championed by [...] Andrew Sullivan"?
Replies from: John_Maxwell_IV, Vaniver↑ comment by John_Maxwell (John_Maxwell_IV) · 2018-03-05T04:14:22.271Z · LW(p) · GW(p)
Wikipedia describes Sullivan's article as "the first major article in the United States advocating for gay people to be given the right to marry".
[Andrew Sullivan] argued for a principle that few found compelling at the time but that most now endorse.
[Andrew Sullivan] was making conservative arguments for gay marriage recognition while much of the gay political establishment was focused in other areas like fighting workplace discrimination. He was writing about it all the way back in 1989.
Then a breakthrough in Hawaii, where the state supreme court ruled for marriage equality on gender equality grounds. No gay group had agreed to support the case, which was regarded at best as hopeless and at worst, a recipe for a massive backlash. A local straight attorney from the ACLU, Dan Foley, took it up instead, one of many straight men and women who helped make this happen. And when we won, and got our first fact on the ground, we indeed faced exactly that backlash and all the major gay rights groups refused to spend a dime on protecting the breakthrough … and we lost.
In fact, we lost and lost and lost again. Much of the gay left was deeply suspicious of this conservative-sounding reform; two thirds of the country were opposed; the religious right saw in the issue a unique opportunity for political leverage – and over time, they put state constitutional amendments against marriage equality on the ballot in countless states, and won every time. Our allies deserted us. The Clintons embraced the Defense of Marriage Act, and their Justice Department declared that DOMA was in no way unconstitutional the morning some of us were testifying against it on Capitol Hill. For his part, president George W. Bush subsequently went even further and embraced the Federal Marriage Amendment to permanently ensure second-class citizenship for gay people in America. Those were dark, dark days.
Replies from: gjm
↑ comment by gjm · 2018-03-05T10:13:52.996Z · LW(p) · GW(p)
The things you quote don't claim he was first, they just say he was early (which, indeed, he was; I wasn't disputing that).
It does indeed appear that Johann Hari says that in 1989 he wrote the "first major article" in the US arguing for same-sex marriage. But, for instance, in the Wikipedia article about the history of same-sex marriage in the US we find:
In August 1953, officials of the U.S Post Office delayed delivery of that month's issue of ONE magazine, with the cover story "Homosexual Marriage?", for three weeks while they tried to determine whether its contents were obscene.
and
In June 1971, members of the Gay Activists Alliance demanded marriage rights for same-sex couples at New York City’s Marriage License Bureau.
and
The next year [sc. 1973], the National Coalition of Gay Organizations called for the repeal of all statutes limiting marriage to different-sex couples and for extending the legal benefits of marriage to all cohabiting couples.
On Wikipedia's timeline of same-sex marriage (which incidentally doesn't mention Sullivan's article) we find that in 1975 some same-sex marriage licences were actually issued in Colorado! (But they got blocked.)
Perhaps Sullivan was the first major conservative pundit to argue for same-sex marriage in the US, or something like that. Good for him! But he wasn't the first person to champion it, not by a long way.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2018-03-06T04:58:58.785Z · LW(p) · GW(p)
I acknowledge others were talking about it earlier, but I think "first major conservative pundit" is an understatement. Tyler Cowen called Sullivan the most influential public intellectual of the past 20 years, largely due to the influence he had on gay marriage.
↑ comment by Vaniver · 2018-03-05T04:06:57.307Z · LW(p) · GW(p)
Here's Freedom to Marry's history of the subject. Wolfson in 1983 is definitely earlier.
comment by gjm · 2018-03-03T23:53:45.934Z · LW(p) · GW(p)
Something's gone screwy with the formatting somewhere around the "Untitled" link, as a result of which the entire end of the post and all the comments are in italics. Jacob, perhaps you can fix whatever's broken in the post? LW2 mods, perhaps you can fix whatever's wrong in the rendering code that enables formatting screwups' effects to persist like that?
Replies from: gjm↑ comment by gjm · 2018-03-03T23:54:53.518Z · LW(p) · GW(p)
Huh, weird. My comment isn't all italicized, even though others before it and after it are. Perhaps because there are actual italics in it? Or maybe it all looks this way only to me and no one else is seeing the spurious italics?
[EDITED to add:] I reloaded the page and then my comments were italicized like everyone else's. I seem to recall that there's some quirk of the way formatting is done on LW2 that means that just-posted comments are processed slightly differently from older ones, or something; perhaps that's responsible.
[EDITED again to add:] All seems to be fixed now. Thanks to Jacob and/or the site admins.
Replies from: habryka4, gjm, philh↑ comment by habryka (habryka4) · 2018-03-04T01:03:48.681Z · LW(p) · GW(p)
Sorry for that! I fixed it for Jacob.
One of the draft-js plugins we use appears to have some bugs related to links that are formatted, and sometimes doesn't properly close the HTML-styling tag for stuff like italics, in the middle of a link. For other authors: If you see some weird formatting in your post, it's probably because you applied some styling in the middle of a link. Removing it for now will fix it, I should really get around to submitting a PR that fixes this systematically.
comment by jollybard · 2019-12-03T06:16:15.609Z · LW(p) · GW(p)
Metaphysical truth here describes self-fulfilling truths as described by Abram Demski [? · GW], whose existence is guaranteed by e.g. Löb's theorem. In other words, metaphysical truths are truths, and rationalists should be aware of them.
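For reference, a minimal statement of Löb's theorem, the formal result being invoked (this is just the standard textbook form, not a claim about how Demski applies it):

```latex
% Löb's theorem, for a theory T (e.g. Peano Arithmetic) with a standard
% provability predicate written \Box:
%
%   If T proves (\Box P -> P), then T proves P.
\[
  T \vdash \Box P \rightarrow P
  \quad\Longrightarrow\quad
  T \vdash P
\]
% Internalized form: T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P.
% This is the "self-fulfilling" flavor: a proof that provability of P
% would imply P already gets you P.
```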
comment by SquirrelInHell · 2018-03-04T01:56:15.401Z · LW(p) · GW(p)
[Note: somewhat taking you up on the Crocker's rules]
Peterson's truth-seeking and data-processing juice is in the super-heavyweight class, comparable to Eliezer etc. Please don't make the mistake of lightly saying he's "wrong on many things".
At the level of analysis in your post and the linked Medium article, I don't think you can safely say Peterson is "technically wrong" about anything; it's overwhelmingly more likely you just didn't understand what he means. [it's possible to make more case-specific arguments here but I think the outside view meta-rationality should be enough...]
Replies from: Jacobian, Gaius Leviathan XV↑ comment by Jacob Falkovich (Jacobian) · 2018-03-04T03:15:35.372Z · LW(p) · GW(p)
If you want me to accept JBP as an authority on technical truth (like Eliezer or Scott are), then I would like to actually see some case-specific arguments. Since I found the case-specific arguments to go against Peterson on the issues where I disagree, I'm not really going to change my mind on the basis of just your own authority backing Peterson's authority.
For example: the main proof Peterson cites to show he was right about C-16 being the end of free speech is the Lindsay Shepherd fiasco. Except her case wasn't even in the relevant jurisdiction, which the university itself admitted! The Shepherd case was about C-16, but no one thinks (anymore) that she was in any way in violation of C-16 or could be punished under it. I'll admit JBP was right when Shepherd is dragged to jail by the secret police.
Where I think Peterson goes wrong most often is when he overgeneralizes from the small and biased sample of his own experience. Eating nothing but chicken and greens helped cure Peterson's own rare autoimmune disease, so now everyone should stop eating carbs forever. He almost never qualifies his opinions or the advice he gives, or specifies that it only applies to a specific group. Here's a good explanation by Scott why this approach is a problem.
This leads me to the main issue where I'd really like to know if Peterson is technically right or wrong: how much of a threat are the "postmodernist neo-Marxists" to our civilization? Peterson's answer is "100%, gulags on the way", but he's also a professor at a liberal university. That's where the postmodernists are, but it's not really representative of where civilization is. I think it would be very hard for anyone to extrapolate carefully about society at large from such an extreme situation, and I haven't seen evidence that Peterson can be trusted to do so.
Replies from: SquirrelInHell↑ comment by SquirrelInHell · 2018-03-04T12:03:25.261Z · LW(p) · GW(p)
[Please delete this thread if you think this is getting out of hand. Because it might :)]
I'm not really going to change my mind on the basis of just your own authority backing Peterson's authority.
See right here, you haven't listened. What I'm saying is that there is some fairly objective quality which I called "truth-seeking juice" about people like Peterson, Eliezer and Scott which you can evaluate by yourself. But you have just dug yourself into the same trap a little bit more. From what you write, your heuristics for evaluating sources seem to be a combination of authority and fact-checking isolated pieces (regardless of how much you understand the whole picture). Those are really bad heuristics!
The only reason why Eliezer and Scott seem trustworthy to you is that their big picture is similar to your default, so what they say is automatically parsed as true/sensible. They make tons of mistakes and might fairly be called "technically wrong on many things". And yet you don't care, because when you feel their big picture is right, those mistakes feel to you like not-really-mistakes.
Here's an example of someone who doesn't automatically get Eliezer's big picture, and thinks very sensibly from their own perspective:
On a charitable interpretation of pop Bayesianism, its message is:
Everyone needs to understand basic probability theory!
That is a sentiment I agree with violently. I think most people could understand probability, and it should be taught in high school. It’s not really difficult, and it’s incredibly valuable. For instance, many public policy issues can’t properly be understood without probability theory.
Unfortunately, if this is the pop Bayesians’ agenda, they aren’t going at it right. They preach almost exclusively a formula called Bayes’ Rule. (The start of Julia Galef’s video features it in neon.) That is not a good way to teach probability.
What if you go read that and try to mentally swap places? The degree to which Chapman doesn't get Eliezer's big picture is probably similar to the degree to which you don't get Peterson's big picture, with similar results.
Replies from: Jacobian↑ comment by Jacob Falkovich (Jacobian) · 2018-03-04T14:42:01.925Z · LW(p) · GW(p)
I'm worried we may be falling into an argument about definitions, which seems to happen a lot around JBP. Let me try to sharpen some distinctions.
In your quote, Chapman disagrees with Eliezer about his general approach, or perhaps about what Eliezer finds meaningful, but not about matters of fact. I disagree with JBP about matters of fact.
My best guess at what "truth-seeking juice" means comes in two parts: a desire to find the truth, and a methodology for doing so. All three of Eliezer/Scott/JBP have the first part down, but their methodologies are very different. Eliezer's strength is overcoming bias, and Scott's strength is integrating scientific evidence, and I believe they're very good at it because I've seen them do it a lot and be wrong about facts very very rarely. In this post I actually disagree with Eliezer about a matter of fact (how many people before modernity were Biblical literalists), and I do so with some trepidation.
JBP's methodology is optimized for finding his own truth, the metaphorical kind. Like Scott has a track record of being right on science debates, JBP has a track record of all his ideas fitting into a coherent and inspirational worldview - his big picture. When I say he's wrong I don't mean his big picture is bad. I mean he's wrong about facts, and that the Peterson mask is dangerous when one needs to get the facts right.
Replies from: Raemon↑ comment by Raemon · 2018-03-04T16:19:27.748Z · LW(p) · GW(p)
I notice that my default sense is that Jacob is making a reasonable ask here, but also that Squirrel seems to be trying to do something similar to what I just felt compelled to do on a different thread so I feel obliged to lean into it a bit.
I'm not sure...
a) how to handle this sort of disagreeing on vantage points, where it's hard to disentangle 'person has an important frame that you're not seeing that is worth at least having the ability to step inside' vs 'person is just wrong' and 'person is trying to help you step inside a frame' vs 'person is making an opaque-and-wrong appeal to authority' (or various shades of similar issues).
or, on the meta level:
b) what reasonable norms/expectations on LessWrong are for handling that sort of thing. Err on one side and a lot of people miss important things; err on another side and people waste a lot of time on views that maybe have interesting frames but... just aren't very good. (I like that Jacob set pretty good discussion norms on this thread, but this is a thing I'm thinking a lot about right now in the general case.)
As of now I have not read anything about Peterson besides this post and one friend's facebook review of his book, so I don't have a horse in the object level discussion.
Jacob: I think what Squirrel is saying is that your focus on the object level claims from within your current frame is causing you to miss important insights you could grok if you were trying harder to step inside Jordan's frame (as opposed to what you are currently doing, which looks more like "explaining his frame from inside your frame.")
[To be clear, your frame, which I share, seems like a really super great way to see the world and possibly literally the best one, but I think the mental skill of deeply inhabiting other worldviews is important, albeit for reasons I'd probably need to spend 10 hours thinking about in order to fully justify]
[[Also, insofar as chains of authority are worth listening to and insofar as I get any authority cred, I think Squirrel is pretty worth listening to as a filter for directing your attention at things that might be nonsense or might be weirdly important]]
Squirrel: I'd tentatively guess that you'd make better headway trying to describe Jordan's frame and what value you got out of it than the hard-to-tell-from-argument-by-authority thing you're currently doing, although also I think it may have been correct to do the first two comments you did before getting to that point anyway, dunno.
Meta: I think it's a reasonable norm on LW to expect people to acquire the "absorb weird frames you don't understand" skill, but also reasonable to have the default frame be "the sort of approach outlined in the sequences", and to try as best you can to make foreign frames legible within that paradigm.
Replies from: Jacobian↑ comment by Jacob Falkovich (Jacobian) · 2018-03-04T18:11:40.305Z · LW(p) · GW(p)
Ray, are you 100% sure that's what is actually going on?
Let's introduce some notation, following the OP: there are (at least) two relevant frameworks of truth, the technical, which we'll denote T, and the metaphorical, M. In this community we should be able to agree what T is, and I may or may not be confused about what M is and how it relates to T. I wrote this post specifically to talk about M, but I don't think that's where Squirrel and I are in disagreement.
My post explicitly said that I think that Peterson is M.right even though he's T.wrong-on-many-things. Squirrel didn't say they (he? she? ze?) "got some value" out of Peterson in the M-framework. They explicitly said that he's not wrong-on-many-things in the T framework, the same way Eliezer is T.correct. Well, Eliezer told me how to assess whether someone is T.correct - I look at the evidence in the object-level claims.
If someone thinks I'm doing T wrong and misapplying rationality, I'm going to need specifics. Ditto if someone thinks that Eliezer is also T.wrong-on-many-things and I don't notice because I'm deluding myself. So far, I'm the only one who has come up with an example of where I think that Eliezer is T.wrong.
My point when talking about Squirrel's authority isn't to belittle them, but to say that changing my mind would require a bit more effort, if anyone feels up to it. It should be obvious that my own framework is such that saying "truth juice" is unlikely to move me. I want to be moved! I've been spelling out the details not because I want to fight over C-16 or low carb breakfasts, but to make it easier for people who want to convince me or change my framework to see where the handles are. And I've tried to introduce specific language so we don't talk past each other (Rule 10: be precise in your speech).
Of course, that doesn't make me entitled to people's efforts. If you have something more fun to do on a Sunday, no hard feelings :)
Replies from: Raemon, SquirrelInHell↑ comment by Raemon · 2018-03-04T20:29:55.172Z · LW(p) · GW(p)
Ray, are you 100% sure that's what is actually going on?
Nope! (It was my best guess, which is why I used some words like "seems" and "I think that Squirrel is saying")
But it sounds, from the other comment, like I got it about right.
I agree that persuading someone to step harder into a frame requires a fair bit more effort than what Squirrel has done so far. (I've never seen anyone convince someone of this sort of thing in one sitting; it always seems to require direct chains of trust, often over years. But I think the art of talking about this usefully has a lot of room for progress.)
↑ comment by SquirrelInHell · 2018-03-04T19:11:16.483Z · LW(p) · GW(p)
They explicitly said that he's not wrong-on-many-things in the T framework, the same way Eliezer is T.correct.
Frustratingly, that's not what I said! Rule 10: be precise in your speech; Rule 10b: be precise in your reading and listening :P My wording was quite purposeful:
I don't think you can safely say Peterson is "technically wrong" about anything
I think Raemon read my comments the way I intended them. I hoped to push on a frame that people seem to be (according to my private, unjustified, wanton opinion) obviously too stuck in. See also my reply below.
I'm sorry if my phrasing seemed conflict-y to you. I think the fact that Eliezer has high status in the community and Peterson has low status is making people stupid about this issue, and this makes me write in a certain style in which I sort of intend to push on status because that's what I think is actually stopping people from thinking here.
Replies from: Jacobian↑ comment by Jacob Falkovich (Jacobian) · 2018-03-04T19:48:29.170Z · LW(p) · GW(p)
Your reply below says:
Yeah, these are issues outside of his cognitive expertise and it's quite clear that he's getting them wrong... you are mostly accusing him of getting things wrong about which he never cared in the first place.
What exactly did you think I meant when I said he's "technically wrong about many things" and you told me to be careful? I meant something very close to what your quote says; I don't even know if we're disagreeing about anything.
And by the way, there is plenty of room for disagreement. alkjash just wrote what I thought you were going to, a detailed point-by-point argument for why Peterson isn't, in fact, wrong. There's a big difference between alkjash's "Peterson doesn't say what you think he says" and "Peterson says what you think and he's wrong, but it's not important to the big picture". If Peterson really says "humans can't do math without terminal values" that's a very interesting statement, certainly not one that I can judge as obviously wrong.
Replies from: SquirrelInHell↑ comment by SquirrelInHell · 2018-03-04T21:45:04.981Z · LW(p) · GW(p)
I did in fact have something between those two in mind, and was even ready to defend it, but then I basically remembered that LW is status-crazy and gave up on fighting that uphill battle. Kudos to alkjash for the fighting spirit.
Replies from: gjm, Jacobian↑ comment by gjm · 2018-03-06T16:07:18.964Z · LW(p) · GW(p)
I think you should consider the possibility that the not-very-positive reaction your comments about Peterson here have received may have a cause other than status-fighting.
(LW is one of the less status-crazy places I'm familiar with. The complaints about Peterson in this discussion do not look to me as if they are primarily motivated by status concerns. Some of your comments about him seem needlessly status-defensive, though.)
↑ comment by Jacob Falkovich (Jacobian) · 2018-03-07T17:17:13.922Z · LW(p) · GW(p)
Not to sound glib, but what good is LW status if you don't use it to freely express your opinions and engage in discussion on LW?
The same is true of other things: blog/Twitter followers, Facebook likes etc. are important inasmuch as they give me the ability to spread my message to more people. If I never said anything controversial for fear of losing measurable status, I would be foregoing all the benefits of acquiring it in the first place.
Replies from: vedrfolnir↑ comment by vedrfolnir · 2018-03-08T18:22:14.699Z · LW(p) · GW(p)
Not to sound glib, but what good is LW status if you don't use it to freely express your opinions and engage in discussion on LW?
Getting laid, for one thing.
And, you know, LW is a social group. Status is its own reward. High-status people probably feel better about themselves than low-status people do, and an increase in status will probably make people feel better about themselves than they used to.
Eric Hoffer was a longshoreman who just happened to write wildly popular philosophy books, but I think he'd agree that that's not terribly usual.
Replies from: Jacobian↑ comment by Jacob Falkovich (Jacobian) · 2018-03-09T17:40:14.301Z · LW(p) · GW(p)
Getting laid, for one thing.
Yeah, I thought it could be something like that. I don't live in Berkeley, and no woman who has ever slept with me cared one jot about my LW karma.
With that said, the kind of status that can be gained or lost by debating the technical correctness of claims JBP makes with someone you don't know personally seems too far removed from anyone's actual social life to have an impact on getting laid one way or another.
↑ comment by Gaius Leviathan XV · 2018-03-04T17:45:00.879Z · LW(p) · GW(p)
Perhaps you can explain what Peterson really means when he says that he really believes the double helix structure of DNA is depicted in ancient Egyptian and Chinese art.
What does he really mean when he says, "Proof itself, of any sort, is impossible, without an axiom (as Gödel proved). Thus faith in God is a prerequisite for all proof."?
Why does he seem to believe in Jung's paranormal concept of "synchronicity"?
Why does he think quantum mechanics means consciousness creates reality, and confuse the Copenhagen interpretation with Wheeler's participatory anthropic principle?
Peterson gets many things wrong - not just technically wrong, but deeply wrong, wrong on the level of "ancient aliens built the pyramids". He's far too willing to indulge in mysticism, and has a fundamental lack of skepticism or anything approaching appropriate rigor when it comes to certain pet ideas.
He isn't an intellectual super-heavyweight; he's Deepak Chopra for people who know how to code. We can do better.
Replies from: John_Maxwell_IV, SquirrelInHell↑ comment by John_Maxwell (John_Maxwell_IV) · 2018-03-07T01:43:49.726Z · LW(p) · GW(p)
Rationalists have also been known to talk about some kooky sounding stuff. Here's Val from CFAR describing something that sounds a lot like Peterson's "synchronicity":
After a sequence of mythic exploration and omens, it seemed clear to me that I needed to visit New York City. I was actually ready to hop on a plane the day after we’d finished with a CFAR workshop… but a bunch of projects showed up as important for me to deal with over the following week. So I booked plane tickets for a week later.
When I arrived, it turned out that the Shaolin monk who teaches there was arriving back from a weeks-long trip from Argentina that day.
This is a kind of thing I’ve come to expect from mythic mode. I could have used murphyjitsu to hopefully notice that maybe the monk wouldn’t be there and then called to check, and then carefully timed my trip to coincide with when he’s there. But from inside mythic mode, that wouldn’t have mattered: either it would just work out (like it did); or it was fated within the script that it wouldn’t work out, in which case some problem I didn’t anticipate would appear anyway (e.g., I might have just failed to think of the monk possibly traveling). My landing the same day he returned, as a result of my just happening to need to wait a week… is the kind of coincidence one just gets used to after a while of operating mythically.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2018-03-07T03:21:29.580Z · LW(p) · GW(p)
I would guess that the same people who objected to those paragraphs, also object to similar paragraphs by Peterson (at least I object to both on similar grounds).
↑ comment by SquirrelInHell · 2018-03-04T19:03:01.284Z · LW(p) · GW(p)
Cool examples, thanks! Yeah, these are issues outside of his cognitive expertise and it's quite clear that he's getting them wrong.
Note that I never said that Peterson isn't making mistakes (I'm quite careful with my wording!). I said that his truth-seeking power is in the same weight class, but obviously he has a different kind of power than the LW style. E.g., he's less able to deal with cognitive bias.
But if you are doing "fact-checking" in LW style, you are mostly accusing him of getting things wrong about which he never cared in the first place.
It's like when Eliezer uses phlogiston as an example in the Sequences and gets the historical facts wrong. But that doesn't make Eliezer wrong in any meaningful sense, because that's not what he was talking about.
There's some basic courtesy in listening to someone's message, not their words.
Replies from: Gaius Leviathan XV, MakoYass↑ comment by Gaius Leviathan XV · 2018-03-04T22:00:36.681Z · LW(p) · GW(p)
Sorry, but I think that is a lame response. It really, really isn't just a lack of expertise - it's a matter of Peterson's abandonment of skepticism and scholarly integrity. You don't need to be a historian to tell that the ancient Egyptians didn't know about the structure of DNA. You don't need to be a statistician to know that coincidences don't disprove scientific materialism. Peterson is a PhD who knows from experience the level of due diligence needed to publish in peer-reviewed journals. He knows better but did it anyway.
But if you are doing "fact-checking" in LW style, you are mostly accusing him of getting things wrong about which he never cared in the first place.
He cares enough to tell his students, explicitly, that he "really does believe" that ancient art depicts DNA - repeatedly! - and to put it in public YouTube videos with his real name and face.
Like when Eliezer is using phlogiston as an example in the Sequences and gets the historical facts wrong.
It's more like if Eliezer used the "ancient aliens built the pyramids" theory as an example in one of the sequences in a way that made it clear that he really does believe aliens built the pyramids. It's stupid to believe it in the first place, and it's stupid to use it as an example.
There's some basic courtesy in listening to someone's message, not words.
Then what makes Peterson so special? Why should I pay more attention to him than, say, Deepak Chopra? Or an Islamist cleric? Or a postmodernist gender studies professor who thinks Western science is just a tool of patriarchal oppression? Might they also have messages that are "metaphorically true" even though their words are actually bunk? If Peterson gets the benefit of the doubt when he says stupid things, why shouldn't everybody else? If one uses enough mental gymnastics, almost anything can be made "metaphorically true".
Peterson's fans are too emotionally invested in him to consider what he's saying rationally - akin to religious believers. Yes, he gives his audience motivation and meaning - much in the same way religion does for other demographics - but that can be a very powerful emotional blinder. If you really think that something gives your life meaning and motivation, you'll overlook its flaws, even when it means weakening your epistemology.
It's not surprising when religious believers retreat to the claim that their holy texts are "metaphorically true" after they're confronted with evidence that the texts are literally false - but it's embarrassing to see a supposed rationalist do the same when someone criticizes their favorite guru. We're supposed to know better.
Replies from: SquirrelInHell↑ comment by SquirrelInHell · 2018-03-05T01:58:33.222Z · LW(p) · GW(p)
Then what makes Peterson so special?
This is what the whole discussion is about. You are setting boundaries that are convenient for you and refusing to think further. But some people in the reference class you are now denigrating as a whole are different from others. Some actually know their stuff and are not charlatans. Throwing a tantrum about it doesn't change that.
↑ comment by mako yass (MakoYass) · 2018-03-05T02:18:23.804Z · LW(p) · GW(p)
he's less able to deal with cognitive bias
Then what the heck do you mean by "equal in truth-seeking ability"?
Replies from: gjm↑ comment by gjm · 2018-03-07T17:49:11.338Z · LW(p) · GW(p)
(I upvoted that comment, but:) Truth-seeking is more than avoiding bias, just as typing is more than not hitting the wrong keys and drawing is more than not making your lines crooked when you want them straight.
Someone might have deep insight into human nature; or outstanding skill in finding mathematical proofs; or a mind exceptionally fertile in generating new ideas, some of which turn out to be right; or an encyclopaedic knowledge of certain fields. Any of those would enhance their truth-seeking ability considerably. If they happen not to be particularly good at avoiding bias, that will worsen their truth-seeking ability. But they might still be better overall than someone with exceptional ability to avoid bias but without their particular skills.