Rule Thinkers In, Not Out
post by Scott Alexander (Yvain) · 2019-02-27T02:40:05.133Z · LW · GW · 67 comments
Imagine a black box which, when you pressed a button, would generate a scientific hypothesis. 50% of its hypotheses are false; 50% are true hypotheses as game-changing and elegant as relativity. Even despite the error rate, it’s easy to see this box would quickly surpass space capsules, da Vinci paintings, and printer ink cartridges to become the most valuable object in the world. Scientific progress on demand, and all you have to do is test some stuff to see if it’s true? I don’t want to devalue experimentalists. They do great work. But it’s appropriate that Einstein is more famous than Eddington. If you took away Eddington, someone else would have tested relativity; the bottleneck is in Einsteins. Einstein-in-a-box at the cost of requiring two Eddingtons per insight is a heck of a deal.
What if the box had only a 10% success rate? A 1% success rate? My guess is: still most valuable object in the world. Even an 0.1% success rate seems pretty good, considering (what if we ask the box for cancer cures, then test them all on lab rats and volunteers?) You have to go pretty low before the box stops being great.
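To make the arithmetic concrete, here is a minimal Python sketch of the trade-off, with made-up numbers standing in for the value of a relativity-class insight and the cost of one "Eddington" test:

```python
# Toy model: a hypothesis generator with a given hit rate, where every
# hypothesis must be tested before you know whether it is true.
# All numbers are invented for illustration.

def net_value_per_breakthrough(hit_rate, breakthrough_value, cost_per_test):
    expected_tests = 1 / hit_rate               # tests needed per true hypothesis, on average
    return breakthrough_value - expected_tests * cost_per_test

for hit_rate in [0.5, 0.1, 0.01, 0.001]:
    value = net_value_per_breakthrough(
        hit_rate,
        breakthrough_value=1e9,   # assumed value of one game-changing insight
        cost_per_test=1e5,        # assumed cost of one round of testing
    )
    print(f"hit rate {hit_rate:>6}: net value per breakthrough ≈ {value:,.0f}")
```

Under these assumed numbers, even a 0.1% hit rate leaves the box enormously in the black: a thousand wasted tests cost far less than the one insight is worth.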
I thought about this after reading this list of geniuses with terrible ideas. Linus Pauling thought Vitamin C cured everything. Isaac Newton spent half his time working on weird Bible codes. Nikola Tesla pursued mad energy beams that couldn’t work. Lynn Margulis revolutionized cell biology by discovering mitochondrial endosymbiosis, but was also a 9-11 truther and doubted HIV caused AIDS. Et cetera. Obviously this should happen. Genius often involves coming up with an outrageous idea contrary to conventional wisdom and pursuing it obsessively despite naysayers. But nobody can have a 100% success rate. People who do this successfully sometimes should also fail at it sometimes, just because they’re the kind of person who attempts it at all. Not everyone fails. Einstein seems to have batted a perfect 1000 (unless you count his support for socialism). But failure shouldn’t surprise us.
Yet aren’t some of these examples unforgivably bad? Like, seriously Isaac – Bible codes? Well, granted, Newton’s chemical experiments may have exposed him to a little more mercury than can be entirely healthy. But remember: gravity was considered creepy occult pseudoscience by its early enemies. It subjected the earth and the heavens to the same law, which shocked 17th century sensibilities the same way trying to link consciousness and matter would today. It postulated that objects could act on each other through invisible forces at a distance, which was equally outside the contemporaneous Overton Window. Newton’s exceptional genius, his exceptional ability to think outside all relevant boxes, and his exceptionally egregious mistakes are all the same phenomenon (plus or minus a little mercury).
Or think of it a different way. Newton stared at problems that had vexed generations before him, and noticed a subtle pattern everyone else had missed. He must have amazing hypersensitive pattern-matching going on. But people with such hypersensitivity should be most likely to see patterns where they don’t exist. Hence, Bible codes.
These geniuses are like our black boxes: generators of brilliant ideas, plus a certain failure rate. The failures can be easily discarded: physicists were able to take up Newton’s gravity without wasting time on his Bible codes. So we’re right to treat geniuses as valuable in the same way we would treat those boxes as valuable.
This goes not just for geniuses, but for anybody in the idea industry. Coming up with a genuinely original idea is a rare skill, much harder than judging ideas is. Somebody who comes up with one good original idea (plus ninety-nine really stupid cringeworthy takes) is a better use of your reading time than somebody who reliably never gets anything too wrong, but never says anything you find new or surprising. Alyssa Vance calls this positive selection – a single good call rules you in – as opposed to negative selection, where a single bad call rules you out. You should practice positive selection for geniuses and other intellectuals.
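A minimal sketch of the difference between the two selection rules, using invented idea "scores" purely for illustration:

```python
# Positive selection: ruled in if any single idea clears a high bar.
# Negative selection: ruled out if any single idea falls below a low bar.
# The scores below are made up.

def positive_selection(idea_scores, high_bar=9):
    return max(idea_scores) >= high_bar

def negative_selection(idea_scores, low_bar=3):
    return min(idea_scores) >= low_bar

erratic_genius = [1, 2, 1, 10]   # mostly junk, one brilliant call
safe_pundit    = [5, 5, 5, 5]    # never badly wrong, never surprising

print(positive_selection(erratic_genius), positive_selection(safe_pundit))   # True  False
print(negative_selection(erratic_genius), negative_selection(safe_pundit))   # False True
```

Positive selection rules the erratic genius in on the strength of a single standout idea; negative selection keeps only the reliably unsurprising thinker.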
I think about this every time I hear someone say something like “I lost all respect for Steven Pinker after he said all that stupid stuff about AI”. Your problem was thinking of “respect” as a relevant predicate to apply to Steven Pinker in the first place. Is he your father? Your youth pastor? No? Then why are you worrying about whether or not to “respect” him? Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
I don’t want to take this too far. If someone has 99 stupid ideas and then 1 seemingly good one, obviously this should increase your probability that the seemingly good one is actually flawed in a way you haven’t noticed. If someone has 99 stupid ideas, obviously this should make you less willing to waste time reading their other ideas to see if they are really good. If you want to learn the basics of a field you know nothing about, obviously read a textbook. If you don’t trust your ability to figure out when people are wrong, obviously read someone with a track record of always representing the conventional wisdom correctly. And if you’re a social engineer trying to recommend what other people who are less intelligent than you should read, obviously steer them away from anyone who’s wrong too often. I just worry too many people wear their social engineer hat so often that they forget how to take it off, forget that “intellectual exploration” is a different job than “promote the right opinions about things” and requires different strategies.
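As a rough illustration of that first caveat, here is a small Bayes sketch (all numbers invented) of how a thinker's track record should feed into your confidence in the one idea of theirs that looks good:

```python
# P(idea is actually good | it looks good to me), by Bayes' rule.
# The base rate comes from the thinker's track record; the two likelihoods
# describe how reliable my own evaluation is. All numbers are made up.

def p_actually_good(base_rate, p_looks_good_if_good=0.9, p_looks_good_if_bad=0.1):
    numerator = p_looks_good_if_good * base_rate
    denominator = numerator + p_looks_good_if_bad * (1 - base_rate)
    return numerator / denominator

# Thinker A: roughly half their ideas have held up. Thinker B: one in a hundred has.
for name, base_rate in [("reliable thinker", 0.5), ("99-stupid-ideas thinker", 0.01)]:
    print(f"{name}: P(actually good | looks good) ≈ {p_actually_good(base_rate):.2f}")
```

The good-looking idea from the 99-stupid-ideas thinker is still worth a look, but it deserves much more suspicion than the same idea coming from someone with a better record.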
But consider the debate over “outrage culture”. Most of this focuses on moral outrage. Some smart person says something we consider evil, and so we stop listening to her or giving her a platform. There are arguments for and against this – at the very least it disincentivizes evil-seeming statements.
But I think there’s a similar phenomenon that gets less attention and is even less defensible – a sort of intellectual outrage culture. “How can you possibly read that guy when he’s said [stupid thing]?” I don’t want to get into defending every weird belief or conspiracy theory that’s ever been [stupid thing]. I just want to say it probably wasn’t as stupid as Bible codes. And yet, Newton.
Some of the people who have most inspired me have been inexcusably wrong on basic issues. But you only need one world-changing revelation to be worth reading.
67 comments
Comments sorted by top scores.
comment by AnnaSalamon · 2019-02-28T00:02:57.342Z · LW(p) · GW(p)
Update, 8/17/2021: See my more recent comment [LW(p) · GW(p)] below.
Update, 11/28/2020: I wouldn't write the comment below today. I've been meaning to revise it for a while, and was having trouble coming up with a revision that didn't itself seem to me to have a bunch of problems, but this comment of mine was just cited again by an outside blog as a reason why folks shouldn't associate with Michael, so maybe I should stop trying to revise my old comment perfectly and just try to do it at all. I'm posting my current, updated opinion in a comment-reply [LW(p) · GW(p)]; my original comment from Feb 27, 2019 is left unedited below, since it played a role in a bunch of community decisions and so should be recorded somewhere IMO.
----
I used to make the argument in the OP a lot. I applied it (among other applications) to Michael Vassar, who many people complained to me about (“I can’t believe he made obviously-fallacious argument X; why does anybody listen to him”), and who I encouraged them to continue listening to anyhow. I now regret this.
Here are the two main points I think past-me was missing:
1. Vetting and common knowledge creation are important functions, and ridicule of obviously-fallacious reasoning plays an important role in discerning which thinkers can (or can’t) help fill these functions.
(Communities — like the community of physicists, or the community of folks attempting to contribute to AI safety — tend to take a bunch of conclusions for granted without each-time-reexamining them, while trying to add to the frontier of knowledge/reasoning/planning. This can be useful, and it requires a community vetting function. This vetting function is commonly built via having a kind of “good standing” that thinkers/writers can be ruled out of (and into), and taking a claim as “established knowledge that can be built on” when ~all “thinkers in good standing” agree on that claim.
I realize the OP kind-of acknowledges this when discussing “social engineering”, so maybe the OP gets this right? But I undervalued this function in the past, and the term “social engineering” seems to me dismissive of a function that in my current view contributes substantially to a group’s ability to produce new knowledge.)
2. Even when a reader is seeking help brainstorming hypotheses (rather than vetting conclusions), they can still be lied-to and manipulated, and such lies/manipulations can sometimes disrupt their thinking for long and costly periods of time (e.g., handing Ayn Rand to the wrong 14-year-old; or, in my opinion, handing Michael Vassar to a substantial minority of smart aspiring rationalists). Distinguishing which thinkers are likely to lie or manipulate is a function more easily fulfilled by a group sharing info that rules thinkers out for past instances of manipulative or dishonest tactics (rather than by the individual listener planning to ignore past bad arguments and to just successfully detect every single manipulative tactic on their own).
So, for example, Julia Galef helpfully notes a case where Steven Pinker straightforwardly misrepresents basic facts about who said what. This is helpful to me in ruling out Steven Pinker as someone who I can trust not to lie to me about even straightforwardly checkable facts.
Similarly, back in 2011, a friend complained to me that Michael would cause EAs to choose the wrong career paths by telling them exaggerated things about their own specialness. This matched my own observations of what he was doing. Michael himself told me that he sometimes lied to people (not his words) and told them that the thing that would most help AI risk from them anyhow was for them to continue on their present career (he said this was useful because that way they wouldn’t rationalize that AI risk must be false). Despite these and similar instances, I continued to recommend people talk to him because I had “ruled him in” as a source of some good novel ideas, and I did this without warning people about the rest of it. I think this was a mistake. (I also think that my recommending Michael led to considerable damage over time, but trying to establish that claim would require more discussion than seems to fit here.)
To be clear, I still think hypothesis-generating thinkers are valuable even when unreliable, and I still think that honest and non-manipulative thinkers should not be “ruled out” as hypothesis-sources for having some mistaken hypotheses (and should be “ruled in” for having even one correct-important-and-novel hypothesis). I just care more about the caveats here than I used to.
↑ comment by Scott Alexander (Yvain) · 2019-03-02T05:36:10.298Z · LW(p) · GW(p)
Thanks for this response.
I mostly agree with everything you've said.
While writing this, I was primarily thinking of reading books. I should have thought more about meeting people in person, in which case I would have echoed the warnings you gave about Michael. I think he is a good example of someone who both has some brilliant ideas and can lead people astray, but I agree with you that people's filters are less functional (and charisma is more powerful) in the real-life medium.
On the other hand, I agree that Steven Pinker misrepresents basic facts about AI. But he was also involved in my first coming across "The Nurture Assumption", which was very important for my intellectual growth and which I think has held up well. I've seen multiple people correct his basic misunderstandings of AI, and I worry less about being stuck believing false things forever than about missing out on Nurture-Assumption-level important ideas (I think I now know enough other people in the same sphere that Pinker isn't a necessary source of this, but I think earlier for me he was).
There have been some books, including "Inadequate Equilibria" and "Zero To One", that have warned people against the Outside View/EMH. This is the kind of idea that takes the safety wheels off cognition - it will help bright people avoid hobbling themselves, but also give gullible people new opportunities to fail. And there is no way to direct it, because non-bright, gullible people can't identify themselves as such. I think the idea of ruling geniuses in is similarly dangerous, in that there's no way to direct it only to non-gullible people who can appreciate good insight and throw off falsehoods. You can only say the words of warning, knowing that people are unlikely to listen.
I still think on net it's worth having out there. But the example you gave of Michael and of in-person communication in general makes me wish I had added more warnings.
↑ comment by Wei Dai (Wei_Dai) · 2019-02-28T12:36:05.196Z · LW(p) · GW(p)
Can someone please fill me in, what are some of Michael Vassar's best ideas, that made him someone who people "ruled in" and encouraged others to listen to?
↑ comment by sarahconstantin · 2019-06-08T17:16:34.988Z · LW(p) · GW(p)
Some examples of valuable true things I've learned from Michael:
- Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
- Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you're not any smarter.
- Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it "comes out right".) Sometimes the best work of this kind doesn't look grandiose or prestigious at the time you're doing it.
- The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
- Science had higher efficiency in the past (late 19th-to-mid-20th centuries).
- Examples of potentially valuable medical innovations that never see wide application are abundant.
- A major problem in the world is a 'hope deficit' or 'trust deficit'; otherwise feasible good projects are left undone because people are so mistrustful that it doesn't occur to them that they might not be scams.
- A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
- Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not *all* conflicts are merely misunderstandings.
- How intersubjectivity works; "objective" reality refers to the conserved *patterns* or *relationships* between different perspectives.
- People who have coherent philosophies -- even opposing ones -- have more in common in the *way* they think, and are more likely to get meaningful stuff done together, than they can with "moderates" who take unprincipled but middle-of-the-road positions. Two "bullet-swallowers" can disagree on some things and agree on others; a "bullet-dodger" and a "bullet-swallower" will not even be able to disagree, they'll just not be saying commensurate things.
↑ comment by Wei Dai (Wei_Dai) · 2019-06-09T09:02:26.988Z · LW(p) · GW(p)
Thanks! Here are my reactions/questions:
Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
Seems right to me, as I was never tied to such a narrative in the first place.
Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.
What kind of risks is he talking about here? Also does he mean that people value their social positions too much, or that they're not taking enough risks even given their current values?
Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.
Hmm, I used to spend quite a bit of time fiddling with assembly language implementations of encryption code to try to squeeze out a few more percent of speed. Pretty sure that is not as productive as more "grandiose" or "prestigious" activities like thinking about philosophy or AI safety, at least for me [LW · GW]... I think overall I'm more afraid that someone who could be doing productive "grandiose" work chooses not to in favor of "fiddly puttering", than the reverse.
The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
That seems almost certain to be true, but I don't see evidence that there is a big enough effect for me to bother spending the time to investigate further. (I seem to be doing fine without doing any of these things and I'm not sure who is deriving large benefits from them.) Do you want to try to change my mind about this?
Science had higher efficiency in the past (late 19th-to-mid-20th centuries).
Couldn't this just be that we've picked most of the low-hanging fruit, plus the fact that picking the higher fruit requires more coordination among larger groups of humans and that is very costly? Or am I just agreeing with Michael here?
Examples of potentially valuable medical innovations that never see wide application are abundant.
This seems quite plausible to me, as I used to lament that a lot of innovations in cryptography never got deployed.
A major problem in the world is a ‘hope deficit’ or ‘trust deficit’; otherwise feasible good projects are left undone because people are so mistrustful that it doesn’t occur to them that they might not be scams.
"Doesn't occur to them" seems too strong but I think I know what you mean. Can you give some examples of what these projects are?
A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
Agreed, and I think this is a big problem as far as advancing human rationality because we currently have a very poor theoretical understanding of coalitional strategies.
Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not all conflicts are merely misunderstandings.
This seems plausible but what are some examples of such "evil"? What happened to Enron, perhaps?
How intersubjectivity works; “objective” reality refers to the conserved patterns or relationships between different perspectives.
It would make more sense to me to say that objective reality refers to whatever explains the conserved patterns or relationships between different perspectives, rather than the patterns/relationships themselves. I'm not sure if I'm just missing the point here.
People who have coherent philosophies—even opposing ones—have more in common in the way they think, and are more likely to get meaningful stuff done together, than they can with “moderates” who take unprincipled but middle-of-the-road positions. Two “bullet-swallowers” can disagree on some things and agree on others; a “bullet-dodger” and a “bullet-swallower” will not even be able to disagree, they’ll just not be saying commensurate things.
I think I prefer to hold a probability distribution over coherent philosophies, plus a lot of weight on "something we'll figure out in the future".
Also a meta question: Why haven't these been written up or discussed online more? In any case, please don't feel obligated to answer my comments/questions in this thread. You (or others who are familiar with these ideas) can just keep them in mind for when you do want to discuss them online.
↑ comment by Dr_Manhattan · 2019-12-13T17:03:53.876Z · LW(p) · GW(p)
I think in part these could be "lessons relevant to Sarah", a sort of philosophical therapy that can't be completely taken out of context, which is why some of these might seem of low relevance or obvious.
↑ comment by Dr_Manhattan · 2019-12-13T17:11:04.824Z · LW(p) · GW(p)
>> Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.
Hmm, I used to spend quite a bit of time fiddling with assembly language implementations of encryption code to try to squeeze out a few more percent of speed. Pretty sure that is not as productive as more "grandiose" or "prestigious" activities like thinking about philosophy or AI safety, at least for me [LW · GW]... I think overall I'm more afraid that someone who could be doing productive "grandiose" work chooses not to in favor of "fiddly puttering", than the reverse.
I suspect this might be a subtler point?
http://paulgraham.com/genius.html suggests really valuable contributions are more bottlenecked on obsession than on being good at directing attention in a "valuable" direction:
For example, for the very ambitious, the bus ticket theory suggests that the way to do great work is to relax a little. Instead of gritting your teeth and diligently pursuing what all your peers agree is the most promising line of research, maybe you should try doing something just for fun. And if you're stuck, that may be the vector along which to break out.
↑ comment by ChristianKl · 2020-05-15T20:38:20.832Z · LW(p) · GW(p)
This seems plausible but what are some examples of such "evil"? What happened to Enron, perhaps?
According to the official narrative, the Enron scandal is mostly about people engaging in actions that benefit themselves. I don't know whether that's true as I don't have much insight into it. If it's true, that's not what is meant.
It's not about actions that are actually self-beneficial.
Let's say I'm at lunch with a friend. I draw the most benefit from my lunch when we have a conversation as intellectual equals. At the same time there's sometimes an impulse to say something to put my friend down and to demonstrate that I'm higher than him in the social pecking order. If I follow that instinct and say something to put my friend down, I'm engaging in evil in the sense Vassar talks about.
The instinct has some value in a tribal context where it's important to fight over the social pecking order, but I'm drawing no value from it at lunch with my friend.
I'm a person who has some self-awareness and I try not to go down such roads when those evolutionary instincts come up. On the other hand, you have people in the middle management of immoral mazes who spend a lot of their time following such instincts and being evil.
↑ comment by Nicholas / Heather Kross (NicholasKross) · 2023-09-23T22:13:57.855Z · LW(p) · GW(p)
You can succeed where others fail just by being braver even if you're not any smarter.
One thing I've wondered about is, how true is this for someone who's dumber than others?
(Asking for, uh, a friend.)
↑ comment by ErioirE (erioire) · 2024-03-14T02:14:21.928Z · LW(p) · GW(p)
I would say it's possible, just at a lower probability proportional to the difference in intelligence. More intelligence will still correspond to better ideas on average.
That said, it was not acclaimed scientists or ivy-league research teams that invented the airplane. It was two random high-school dropouts in Ohio. This is not to say that education or prestige are the same thing as intelligence[1], simply that brilliant innovations can sometimes be made by the little guy who's not afraid to dream big.
[1] By all accounts, the Wright Brothers were intelligent.
↑ comment by Nicholas / Heather Kross (NicholasKross) · 2023-08-29T02:30:50.502Z · LW(p) · GW(p)
Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you're not any smarter.
Oh hey, I've accidentally tried this just by virtue of my personality!
Results: high-variance ideas are high-variance. YMMV, but so far I haven't had a "hit". (My friend politely calls my ideas "hits-based ideas", which is a great term.)
↑ comment by Dr_Manhattan · 2019-12-13T16:55:59.912Z · LW(p) · GW(p)
Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it "comes out right".) Sometimes the best work of this kind doesn't look grandiose or prestigious at the time you're doing it.
http://paulgraham.com/genius.html seems to be promoting a similar idea
↑ comment by Vaniver · 2019-02-28T17:24:35.572Z · LW(p) · GW(p)
My sense is that his worldview was 'very sane' in the cynical HPMOR!Quirrell sense (and he was one of the major inspirations for Quirrell, so that's not surprising), and that he was extremely open about it in person in a way that was surprising and exciting.
I think his standout feature was breadth more than depth. I am not sure I could distinguish which of his ideas were 'original' and which weren't. He rarely if ever wrote things, which makes the genealogy of ideas hard to track. (Especially if many people who do write things were discussing ideas with him and getting feedback on them.)
↑ comment by Dr_Manhattan · 2019-03-01T17:59:59.053Z · LW(p) · GW(p)
Good points (similar to Raemon). I would find it useful if someone created some guidance for safe ingestion (or an alternative source) of MV-type ideas/outlook; I do find the "subtle skill of seeing the world with fresh eyes" potentially extremely valuable, which is why I suppose Anna kept on encouraging people.
↑ comment by Vaniver · 2019-03-02T18:04:10.535Z · LW(p) · GW(p)
I think I have this skill, but I don't know that I could write this guide. Partly this is because there are lots of features about me that make this easier, which are hard (or too expensive) to copy. For example, Michael once suggested part of my emotional relationship to lots of this came from being gay, and thus not having to participate in a particular variety of competition and signalling that was constraining others; that seemed like it wasn't the primary factor, but was probably a significant one.
Another thing that's quite difficult here is that many of the claims are about values, or things upstream of values; how can Draco Malfoy learn the truth about blood purism in a 'safe' way?
↑ comment by Dr_Manhattan · 2019-03-03T21:22:34.099Z · LW(p) · GW(p)
Thanks (& Yoav for the clarification). So in your opinion, is MV dangerous to a class of people with certain kinds of beliefs the way Harry was to Draco (where the risk was a pure necessity of breaking out of wrong ideas), or is he dangerous because of an idea package or bad motivations of his own?
↑ comment by Vaniver · 2019-03-05T04:16:20.883Z · LW(p) · GW(p)
When someone has an incomplete moral worldview (or one based on easily disprovable assertions), there's a way in which the truth isn't "safe" if safety is measured by something like 'reversibility' or 'ability to continue being the way they were.' It is also often the case that one can't make a single small change, and then move on; if, say, you manage to convince a Christian that God isn't real (or some other thing that will predictably cause the whole edifice of their worldview to come crashing down eventually), then the default thing to happen is for them to be lost and alone.
Where to go from there is genuinely unclear to me. Like, one can imagine caring mostly about helping other people grow, in which a 'reversibility' criterion is sort of ludicrous; it's not like people can undo puberty, or so on. If you present them with an alternative system, they don't need to end up lost and alone, because you can directly introduce them to humanism, or whatever. But here you're in something of a double bind; it's somewhat irresponsible to break people's functioning systems without giving them a replacement, and it's somewhat creepy if you break people's functioning systems to pitch your replacement. (And since 'functioning' is value-laden, it's easy for you to think their system needs replacing.)
↑ comment by Dr_Manhattan · 2019-03-03T13:20:00.095Z · LW(p) · GW(p)
Ah sorry, would you mind elaborating on the Draco point in normie speak if you have the bandwidth?
↑ comment by Yoav Ravid · 2019-03-03T17:34:54.869Z · LW(p) · GW(p)
He is referring to HPMOR [? · GW], where the following happens (major spoiler for the first 25 chapters):
Harry tries to show Draco the truth about blood purism, and Draco goes through a really bad crisis of faith. Harry tries to do it effectively and gracefully, but nonetheless it is hard, and could even be somewhat dangerous.
↑ comment by Ben Pace (Benito) · 2019-03-03T20:13:15.891Z · LW(p) · GW(p)
I edited your comment to add the spoiler cover. FYI the key for this is > followed by ! and then a space.
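For instance, if I'm reading that syntax right, a spoilered line looks like this in the editor:

```
>! Text after the marker gets hidden behind a spoiler cover.
```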
↑ comment by Yoav Ravid · 2019-03-07T11:56:21.427Z · LW(p) · GW(p)
Ah, great, thank you :)
↑ comment by Raemon · 2019-03-01T19:38:41.008Z · LW(p) · GW(p)
Alas, I spent this year juuust coming to the conclusion that it was all more dangerous than I thought, and I am still wrapping my brain around it.
I suppose it was noteworthy that I don't think I got very damaged, and most of that was via... just not having prolonged contact with the four Vassar-type-people that I encountered (the two people whom I did have more extended contact with, I think, may have damaged me somewhat).
So, I guess the short answer is "if you hang out with weird iconoclasts with interesting takes on agency and seeing the world, and you don't spend more than an evening every 6 months with them, you will probably get a slight benefit with little to no risk. If you hang out more than that you take on proportionately more risk/reward. The risks/rewards are very person specific."
My current take is something like "the social standing of this class of person should be the mysterious old witch who lives at the end of the road, who everyone respects but, like, you're kinda careful about when you go ask for their advice."
↑ comment by Raemon · 2019-02-28T20:51:34.359Z · LW(p) · GW(p)
FWIW, I've never had a clear sense that Vassar's ideas were especially good (but, also, not had a clear sense that they weren't). More that, Vassar generally operates in a mode that is heavily-brainstorm-style-thinking and involves seeing the world in a particular way. And this has high-variance-but-often-useful side effects.
Exposure to that way of thinking has a decent chance of causing people to become more agenty, or dislodged from a subpar local optimum, or gain some subtle skills about seeing the world with fresh eyes. The point is less IMO about the ideas and more about having that effect on people.
(With the further caveat that this is all a high variance strategy, and the tail risks do not fail gracefully, sometimes causing damage, in ways that Anna hints at and which I agree would be a much larger discussion)
↑ comment by AnnaSalamon · 2020-11-28T22:59:54.624Z · LW(p) · GW(p)
The short version of my current stance on Vassar is that:
(1) I would not trust him to conform to local rules or norms. He also still seems to me to precipitate psychotic episodes in his interlocutors surprisingly often, to come closer to advocating physical violence than I would like (e.g. this tweet), and to have conversational patterns that often disorient his interlocutors and leave them believing different things while talking to Michael than they do a bit later.
(2) I don't have overall advice that people ought to avoid Vassar, in spite of (1), because it now seems to me that he is trying to help himself and others toward truth, and I think we're bottlenecked on that enough that I could easily imagine (2) overshadowing (1) for individuals who are in a robust place (e.g., who don't feel like they are trapped or "have to" talk to a person or do a thing) and who are choosing who they want to talk to. (There were parts of Michael's conversational patterns that I was interpreting as less truth-conducive a couple years ago than I am now. I now think that this was partly because I was overanchored on the (then-recent) example of Brent, as well as because I didn't understand part of how he was doing it, but it is possible that it is current-me who is wrong.) (As one example of a consideration that moved me here: a friend of mine whose epistemics I trust, and who has known Vassar for a long time, said that she usually in the long-run ended up agreeing with her while-in-the-conversation self, and not with her after-she-left-the-conversation self.)
Also I was a bit discomfited when my previous LW comment was later cited by folks who weren't all that LW-y in their conversational patterns as a general "denouncement" of Vassar, although I should probably have predicted this, so, that's another reason I'd like to try to publicly state my revised views. To be clear, I do not currently wish to "denounce" Vassar, and I don't even think that's what I was trying to do last time, although I think the fault was mostly mine that some people read my previous comment as a general denouncement.
Also, to be clear, what I am saying here is just that on the strength of my own evidence (which is not all evidence), (1) and (2) seem true to me. I am not at all trying to be a court here, or to evaluate any objections anyone else may have to Vassar, or to claim that there are no valid objections someone else might have, or anything like that. Just to share my own revised impression from my own limited first-hand observations.
↑ comment by jimrandomh · 2020-11-29T19:49:43.070Z · LW(p) · GW(p)
He also still seems to me to precipitate psychotic episodes in his interlocutors surprisingly often
This is true, but I'm confused about how to relate to it. Part of Michael's explicit strategy seems to be identifying people stuck in bad equilibria, and destabilizing them out of it. If I were to take an evolutionary-psychology steelman of what a psychotic episode is, a (highly uncertain) interpretation I might make is that a psychotic episode is an adaptation for escaping such equilibria, combined with a negative retrospective judgment of how that went. Alternatively, those people might be using psychedelics (which I believe are in fact effective for breaking people out of equilibria), and getting unlucky with the side effects. This is bad if it's not paired with good judgment about which equilibria are good vs. bad ones (I don't have much opinion on how good his judgment in this area is). But this seems like an important function, which not enough people are performing.
↑ comment by cousin_it · 2019-02-28T09:13:19.047Z · LW(p) · GW(p)
I decided to ignore Michael after our first in-person conversation, where he said I shouldn't praise the Swiss healthcare system which I have lots of experience with, because MetaMed is the only working healthcare system in the world (and a roomful of rationalists nodded along to that, suggesting that I bet money against him or something).
This isn't to single out Michael or the LW community. The world is full of people who spout nonsense confidently. Their ideas can deserve close attention from a few "angel investors", but that doesn't mean they deserve everyone's attention by default, as Scott seems to say.
↑ comment by gjm · 2019-02-28T16:32:22.543Z · LW(p) · GW(p)
There's a really good idea slipped into the above comment in passing; the purpose of this comment is to draw attention to it.
close attention from a few "angel investors"
Scott's article, like the earlier "epistemic tenure" one, implicitly assumes that we're setting a single policy for whose ideas get taken how seriously. But it may make sense for some people or communities -- these "angel investors" -- to take seriously a wider range of ideas than the rest of us, even knowing that a lot of those ideas will turn out to be bad ones, in the hope that they can eventually identify which ones were actually any good and promote those more widely.
Taking the parallel a bit further, in business there are more levels of filtering than that. You have the crazy startups; then you have the angel investors; then you have the early-stage VCs; then you have the later VCs; and then you have, I dunno, all the world's investors. There are actually two layers of filtering at each stage -- investors may choose not to invest, and the company may fail despite the investment -- but let's leave that out for now. The equivalent in the marketplace of ideas would be a sort of hierarchy of credibility-donors: first of all you have individuals coming up with possibly-crackpot ideas, then some of them get traction in particular communities, then some of those come to the attention of Gladwell-style popularizers, and then some of the stuff they popularize actually makes it all the way into the general public's awareness. At each stage it should be somewhat harder to get treated as credible. (But is it? I wouldn't count on it. In particular, popularizers don't have the best reputation for never latching onto bad ideas and making them sound more credible than they really are...)
(Perhaps the LW community itself should be an "angel investor", but not necessarily.)
↑ comment by mirona · 2021-08-17T02:48:35.373Z · LW(p) · GW(p)
Are there further details to these accusations? The linked post from 8 months ago called for an apology in absence of further details. If there are not further details, a new post with an apology is in order.
↑ comment by AnnaSalamon · 2021-08-17T17:53:27.574Z · LW(p) · GW(p)
Um, good point. I am not sure which details you're asking about, but I am probably happy to elaborate if you ask something more specific.
I hereby apologize for the role I played in Michael Vassar's ostracism from the community, which AFAICT was both unjust and harmful to both the community and Michael. There's more to say here, and I don't yet know how to say it well. But the shortest version is that in the years leading up to my original comment Michael was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various others like having the story I told myself to keep stably “doing my work” criticized/eroded. This, despite the fact that attempting to share reasoning and disagreements is in fact a furthering of our alleged goals and our alleged culture. The specific voiced accusations about Michael were not “but he keeps criticizing us and hurting our feelings and/or our political support” — and nevertheless I’m sure this was part of what led to me making the comment I made above (though it was not my conscious reason), and I’m sure it led to some of the rest of the ostracism he experienced as well. This isn’t the whole of the story, but it ought to have been disclosed clearly in the same way that conflicts of interest ought to be disclosed clearly. And, separately but relatedly, it is my current view that it would be all things considered much better to have Michael around talking to people in these communities, though this will bring friction.
There’s broader context I don’t know how to discuss well, which I’ll at least discuss poorly:
- Should the aspiring rationality community, or any community, attempt to protect its adult members from misleading reasoning, allegedly manipulative conversational tactics, etc., via cautioning them not to talk to some people? My view at the time of my original (Feb 2019) comment was “yes”. My current view is more or less “heck no!”; protecting people from allegedly manipulative tactics, or allegedly misleading arguments, is good — but it should be done via sharing additional info, not via discouraging people from encountering info/conversations. The reason is that more info tends to be broadly helpful (and this is a relatively fool-resistant heuristic even if implemented by people who are deluded in various ways), and trusting who can figure out who ought to restrict their info-intake how seems like a doomed endeavor (and does not degrade gracefully with deludedness/corruption in the leadership). (Watching the CDC on covid helped drive this home for me. Belatedly noticing how much something-like-doublethink I had in my original beliefs about Michael and related matters also helped drive this home for me.)
- Should some organizations/people within the rationality and EA communities create simplified narratives that allow many people to pull in the same direction, to feel good about each others’ donations to the same organizations, etc.? My view at the time of my original (Feb 2019) comment was “yes”; my current view is “no — and especially not via implicit or explicit pressures to restrict information-flow.” Reasons for updates same as above.
It is nevertheless the case that Michael has had a tendency to e.g. yell rather more than I would like. For an aspiring rationality community’s general “who is worth ever talking to?” list, this ought to matter much less than the above. Insofar as a given person is trying to create contexts where people reliably don’t yell or something, they’ll want to do whatever they want to do; but insofar as we’re creating a community-wide include/exclude list (as in e.g. this comment on whether to let Michael speak at SSC meetups), it is my opinion that Michael ought to be on the “include” list.
Thoughts/comments welcome, and probably helpful for getting to shared accurate pictures about any of what's above.
↑ comment by Ben Pace (Benito) · 2021-08-19T19:38:04.436Z · LW(p) · GW(p)
There's a bunch of different options for interacting with a person/group/information source:
- Read what they write
- Go to talks by them and ask a question
- Talk with them on comments on their blogs
- Have 1-1 online conversations with them (calls/emails)
- Invite them into your home and be friends with them
Naturally there's a difference between "telling your friend that they should ignore the CDC" and "not letting a CDC leadership staff member into your home for dinner". I'm much more sympathetic to the latter.
Related: As a somewhat extreme example I've thought about in the past in other situations with other people, I think that people who have committed crimes (e.g. theft) could be great and insightful contributors to open research problems, but might belong geographically in jail and be important to not allow into my home. Especially for insightful people with unique perspectives who were intellectually productive I'd want to put in a lot of work to ensure they can bring their great contributions in ways that aren't open to abuse or likely to leave my friends substantially hurt on some key dimension.
–––
Thx for your comment. I don't have a clear sense from your comment what you're trying to suggest for Michael specifically — I've found it quite valuable to read his Twitter, but more than that. Actually, here's what I suspect you're saying. I think you're saying that the following things seem worthwhile to you: have 1-1 convos with Michael, talk to Michael at events, reply to his emails and talk with him online. And then you're not making an active recommendation about whether to: have Michael over for dinner, have Michael stay at your house, date Michael, live with Michael, lend Michael money, start a business with Michael, etc, and you're aiming to trust people to figure that out for themselves.
It's not a great guess, but it's my best (quick) guess. Thoughts?
↑ comment by Dr_Manhattan · 2019-03-05T14:05:03.029Z · LW(p) · GW(p)
Hi Anna, since you've made the specific claim publicly (I assume intended as a warning), would you mind commenting on this
https://www.lesswrong.com/posts/u8GMcpEN9Z6aQiCvp/rule-thinkers-in-not-out#X7MSEyNroxmsep4yD
Specifically, it's a given that there's some collateral damage when people are introduced to new ideas (or, more specifically, broken out of their world views). You seem to imply that with Michael it's more than that (I think Vaniver alludes to it with the "creepy" comment).
In other words, is Quirrell dangerous to some people and deserving of a warning label, or do you consider Michael Quirrell+ because of his outlook?
comment by philh · 2020-12-28T19:30:34.120Z · LW(p) · GW(p)
I think I agree with the thrust of this, but I think the comment section raises caveats that seem important. Scott's acknowledged that there's danger in this, and I hope an updated version would put that in the post.
But also...
Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
This seems like a strange model to use. We don't know, a priori, what % are false. If 50% are obviously false, probably most of the remainder are subtly false. Giving me subtly false arguments is no favor.
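A toy calculation (with invented numbers) of why the split between obviously and subtly false claims matters:

```python
# Model: a thinker's claims are true with probability 1 - p_false. Of the false
# ones, some share is *obviously* false (my filter rejects them) and the rest
# are *subtly* false (they slip through). All numbers are invented.

def error_rate_after_filtering(p_false, share_of_false_that_is_obvious):
    p_true = 1 - p_false
    p_subtly_false = p_false * (1 - share_of_false_that_is_obvious)
    accepted = p_true + p_subtly_false       # what survives my filter
    return p_subtly_false / accepted         # fraction of accepted claims that are false

for share_obvious in [1.0, 0.5, 0.0]:
    rate = error_rate_after_filtering(p_false=0.5, share_of_false_that_is_obvious=share_obvious)
    print(f"{share_obvious:.0%} of false claims obvious -> {rate:.0%} of accepted claims are false")
```

With the same 50% false rate, the best case (everything false is obviously false) leaves my accepted beliefs clean, while the worst case leaves half of them wrong without my noticing.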
Scott doesn't tell us, in this essay, what Steven Pinker has given him / why Steven Pinker is ruled in. Has Steven Pinker given him valuable insights? How does Scott know they're valuable? (There may have been some implicit context when this was posted. Possibly Scott had recently reviewed a Pinker book.)
Given Anna's example,
Julia Galef helpfully notes a case where Steven Pinker straightforwardly misrepresents basic facts about who said what. This is helpful to me in ruling out Steven Pinker as someone who I can trust not to lie to me about even straightforwardly checkable facts.
I find myself wondering, has Scott checked Pinker's straightforwardly checkable facts?
I wouldn't be surprised if he has. The point of these questions isn't to say that Pinker shouldn't be ruled in, but that the questions need to be asked and answered. And the essay doesn't really acknowledge that that's actually kind of hard. It's even somewhat dismissive; "all you have to do is *test* some stuff to *see if it’s true*?" Well, the Large Hadron Collider cost €7.5 billion. On a less extreme scale, I recently wanted to check some of Robert Ellickson's work; that cost me [LW · GW], I believe, tens of hours. And that was only checking things close to my own specialty. I've done work that could have ruled him out and didn't, but is that enough to say he's ruled in?
So this advice only seems good if you're willing and able to put in the time to find and refute the bad arguments. Not only that, if you actually will put in that time. Not everyone can, not everyone wants to, not everyone will do. (This includes: "if you fact-check something and discover that it's false, the thing doesn't nevertheless propagate through your models influencing your downstream beliefs in ways it shouldn't".)
If you're not going to do that... I don't know. Maybe this is still good advice, but I think that discussion would be a different essay, and my sense is that Scott wasn't actually trying to give that advice here.
In the comments, cousin_it and gjm describe the people who can and will do such work as "angel investors", which seems apt.
I feel like right now, the essay is advising people to be angel investors, and not acknowledging that that's risky if you're not careful, and difficult to do carefully. That feels like an overstep. A more careful version might instead advise:
- Some people have done some great work and some silly work. If you know which is which (e.g. because others have fact checked, or time has vindicated), feel free to pay attention to the great and ignore the silly.
- Don't automatically dismiss people just because they've said some silly things. Take that fact into account when evaluating the things they say that aren't obviously silly, and deciding whether to actually evaluate them. But don't let that fact take the place of actually evaluating those things. Like, given "Steven Pinker said obviously silly things about AI", don't say "... so the rest of The Nurture Assumption isn't worth paying attention to". Instead, say "... so I don't think it's worth me spending the time to look closer at The Nurture Assumption right now". And allow for the possibility of changing that to "... but The Nurture Assumption is getting a lot of good press, maybe I'll look into it anyway".
(e: lightly edited for formatting and content)
comment by ChristianKl · 2019-02-28T17:09:14.759Z · LW(p) · GW(p)
What novel ideas did Steven Pinker publish? His role in the intellectual discourse seems to me to be providing long arguments for certain claims to be true.
The quality of his arguments matters a great deal for that role.
↑ comment by Vaniver · 2019-02-28T19:58:05.499Z · LW(p) · GW(p)
He appears to have had novel ideas in his technical specialty, but his public writings are mostly about old ideas that have insufficient public defense. There, novelty isn't a virtue (while correctness is).
comment by Kaj_Sotala · 2019-03-03T19:35:36.354Z · LW(p) · GW(p)
(After seeing this article many times, I only now realized that the title is supposed to be interpreted "thinkers should be ruled in, not out" rather than "there's this class of rule thinkers who are in, not out")
comment by Zvi · 2020-12-18T20:11:59.894Z · LW(p) · GW(p)
The central point here seems strong and important. One can, as Scott notes, take it too far, but mostly yes one should look where there are very interesting things even if the hit rate is not high, and it's important to note that. Given the karma numbers involved and some comments sometimes being included I'd want assurance that we wouldn't include any of that with regard to particular individuals.
That comment section, though, I believe has done major harm and could keep doing more even in its current state, so I still worry about bringing more focus on this copy of the post (as opposed to the SSC copy). Also, I worry about this giving too much of a free pass to what it calls "outrage culture" - there's an implicit "yeah, it's ok to go all essentialist and destroy someone for one statement that breaks your outrage mob's rules, I can live with that and please don't do it to me here, but let's not extend that to things that are merely stupid or wrong." I don't think you can do that, it doesn't work that way. Could be fixed with an edit if Scott wanted it fixed.
↑ comment by Ben Pace (Benito) · 2020-12-18T21:00:24.212Z · LW(p) · GW(p)
Yeah, I don't expect that I would include the comments on this post in any books, they don't really fit their goal and feel too inside-basebally/non-timeless to me to make sense there.
comment by cousin_it · 2019-02-27T07:43:34.872Z · LW(p) · GW(p)
In science or art bad ideas have no downside, so we judge talent at its best. But in policy bad ideas have disproportionate downside.
↑ comment by Donald Hobson (donald-hobson) · 2019-02-27T08:28:14.912Z · LW(p) · GW(p)
Yet policy exploration is an important job. Unless you think that someone posting something on a blog is going to change policy without anyone double-checking it first, we should encourage suggestion of radically new policies.
comment by theraven · 2019-02-27T12:40:01.032Z · LW(p) · GW(p)
The problem is, ideas are cheap and data is expensive. So separating the correct from the incorrect ideas takes lots of time and money. Hence, people often want to know whether the black box has any value before pouring money down a hole. Spouting clearly wrong ideas calls into doubt the usefulness of any of the ideas, especially for people who have no track record of being insightful on the topic.
↑ comment by avturchin · 2019-02-27T12:58:44.166Z · LW(p) · GW(p)
Yes, this is the point of the "Lost in Math" book: arXiv is full of ideas, but testing each would cost billions. All the low-hanging ideas have already been tested.
↑ comment by ryan_b · 2019-02-27T14:53:25.900Z · LW(p) · GW(p)
How did you find that book?
↑ comment by avturchin · 2019-02-27T15:54:42.870Z · LW(p) · GW(p)
I am subscribed to the author's Facebook and she wrote about it. https://www.facebook.com/sabine.hossenfelder
However, I don't remember why I subscribed to her; maybe some of her posts on her blog Backreaction appeared in my Google search about the multiverse.
comment by Gordon Seidoh Worley (gworley) · 2019-03-01T01:56:44.997Z · LW(p) · GW(p)
You know, at first when I saw this post I was like "ugh, right, lots of people make gross mistakes in this area" but then didn't think much of it, but by coincidence today I was prompted to read something I wrote a while ago, and it seems relevant to this topic. Here's a quote from the article that was on a somewhat different topic (hermeneutics):
One methodology I’ve found especially helpful has been what I, for a long time, thought of as literary criticism, but for interpreting what people said as evidence about what they knew about reality. I first started doing this when reading self-help books. Many books in that genre contain plainly incorrect reasoning based on outdated psychology that has either been disproved or replaced by better models (cf. Jeffers, Branden, Carnegie, and even Covey). Despite this, self-help still helps people. To pick on Jeffers, she goes in hard for daily self-affirmation, but even ignoring concerns with this line of research raised by the replication crisis, evidence suggests it’s unlikely to help much toward her instrumental goal of habit formation. Yet she makes this error in the service of giving half of the best advice I know: feel the fear and do it anyway. The thesis that she is wrong because her methods are flawed contradicts the antithesis that she is right because her advice helps people, so the synthesis must lie in some perspective that permits her both to be wrong about the how and right about the what simultaneously.
My approach was to read her and other self-help more from the perspective of the author and the expected world-view of their readers than from my own. This led me to realize that, lacking better information about how the human mind works but wanting to give reasons for the useful patterns they had found, self-help authors often engage in rationalization to fit current science to their conclusions. This doesn’t make their conclusions wrong, but it does hide their true reasoning, which is often based more on capta than data and thus phenomenological rather than strictly scientific reasoning. But given that they and we live in an age of scientism, we demand scientific reasons of our thinkers, even if they are poorly founded and later turn out to be wrong, or else reject their conclusions for lack of evidence. Thus the contradiction is sublimated by understanding the fuller context of the writing.
comment by Vanessa Kosoy (vanessa-kosoy) · 2019-02-27T14:54:28.311Z · LW(p) · GW(p)
Einstein seems to have batted a perfect 1000
Did ey? As far as I know, ey continued to resist quantum mechanics (in its ultimate form) for eir entire life, and eir attempts to create a unified field theory led to nothing (or almost nothing).
Replies from: TheMajor, mr-hire↑ comment by TheMajor · 2019-02-28T15:27:55.422Z · LW(p) · GW(p)
I feel like I'm walking into a trap, but here we go anyway.
Einstein disagreed with some very specific parts of QM (or "QM as it was understood at the time"), but also embraced large parts of it. Furthermore, on the parts Einstein disagreed with there is still to this day ongoing confusion/disagreement/lack of consensus (or, if you ask me, plain mistakes being made) among physicists. Discussing interpretations of QM in general and Einstein's role in them in particular would take way too long but let me just offer that, despite popular media exaggerations, with minimal charitable reading it is not clear that he was wrong about QM.
I know far less about Einstein's work on a unified field theory, but if we're willing to treat absence of evidence as evidence of absence here then that is a fair mark against his record.
Replies from: vanessa-kosoy↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-02-28T21:02:44.642Z · LW(p) · GW(p)
It seems that Einstein was just factually wrong, since ey did not expect the EPR paradox to be empirically confirmed (which only happened after eir death), but intended it as a reductio ad absurdum. Of course, thinking of the paradox did contribute to our understanding of QM, in which sense Einstein played a positive role here, paradoxically.
Replies from: TheMajor↑ comment by TheMajor · 2019-02-28T23:24:11.376Z · LW(p) · GW(p)
Yes, I think you're right. Personally I think this is where the charitable reading comes in. I'm not aware of Einstein specifically stating that there have to be hidden variables in QM, only that he explicitly disagreed with the nonlocality (in the sense of general relativity) of Copenhagen. In the absence of experimental proof that hidden variables are wrong (through the EPR experiments), I think hidden variables were the main contender for a "local QM", but all the arguments I can find Einstein supporting are more general/philosophical than this. In my opinion most of these criticisms still apply to the Copenhagen Interpretation as we understand it today, but instead of supporting hidden variables they now support [all modern local QM interpretations] instead.
Or more abstractly: Einstein backed a category of theories, and the main contender of that category has been solidly busted (ongoing debate about hidden variables blah blah blah I disagree). But even today I think other theories in that pool still come ahead of Copenhagen in likelihood, so his support of the category as a whole is justified.
Replies from: waveman↑ comment by waveman · 2019-12-14T00:18:20.894Z · LW(p) · GW(p)
experimental proof that hidden variables is wrong (through the EPR experiments)
Local hidden variable theories were disproved. But that is not at all surprising given that QM is IMHO non-local, as per Einstein's "spooky action at a distance".
It is interesting that often even when Einstein was wrong, he was fruitful. His biggest mistake, as he saw it, was the cosmological constant, now associated with dark energy. Nietzsche would have approved.
On QM his paper led to Bell's theorem and real progress. Even though his claim was wrong.
Replies from: TheMajor↑ comment by Matt Goldenberg (mr-hire) · 2019-03-17T11:50:57.774Z · LW(p) · GW(p)
Off topic but... Is there something I don't know about Einstein's preferred pronouns? Did he prefer ey and eir over he and him?
Replies from: vanessa-kosoy↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-03-18T21:27:33.841Z · LW(p) · GW(p)
Oh, I just use the pronoun "ey" for everyone. IMO the entire concept of gendered pronouns is net harmful.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-03-18T21:48:03.201Z · LW(p) · GW(p)
While I don't object to the overall statement that gendered pronouns are likely to be net harmful, I do find sentences without them a lot harder to read, and the switching cost has proven to be quite significant to me (i.e. I think it takes me about twice as much time to parse a sentence with non-standard pronouns, and I don't think I have any levers for changing that faster than just getting used to it, and in the absence of widespread consensus on a specific alternative I expect that cost to mostly just continue accruing, because I can't ever get used to just a small subset of non-standard pronouns).
Depending on your values it might still be worth it for you to use them, but I do think it roughly doubles the cost of reading something (relatively short) for me. I am much more used to singular "they", so if you use that, I expect the cost to be more like 1.2x, which seems much less bad.
comment by orthonormal · 2021-01-10T05:30:51.098Z · LW(p) · GW(p)
This makes a simple and valuable point. As discussed in and below Anna's comment, it's very different when applied to a person who can interact with you directly versus a person whose works you read. But the usefulness in the latter context, and the way I expect new readers to assume that context, leads me to recommend it.
comment by Ben Pace (Benito) · 2020-12-09T01:46:47.652Z · LW(p) · GW(p)
This is a phrase I use regularly whenever this subject comes up, and it helped me think about this topic much more clearly.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-12-02T14:51:37.213Z · LW(p) · GW(p)
This post seems like good life advice for me and people like me, when taken with appropriate caution. It's well-written too, of course.
comment by Chris_Leong · 2020-12-04T11:30:27.153Z · LW(p) · GW(p)
Nominating because adopting this principle helps to create a positive intellectual culture.
comment by pcm50 (conjectures) · 2019-02-28T22:53:22.644Z · LW(p) · GW(p)
I get the point, and it's a good one. We should have tolerance for the kooky ideas of those who brought us great things.
Practically though, I expect the oracle box would gather dust in the corner if it had a 1% hit rate, the experiments it required were expensive, and it didn't come with an impressive warranty (a rough expected-value sketch of this tradeoff follows below). The world abounds with cranks, and it is all too easy for out-of-the-box thinking to get binned with the unhinged.
So I think this is a useful mode of thinking with some hindsight on a track record, but not without it.
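To make that practical worry concrete, here is a rough back-of-the-envelope sketch in Python. The hit rate, payoff, and testing cost are made-up numbers chosen only for illustration, not figures from the post or this thread.

```python
# Back-of-the-envelope economics of a low-hit-rate idea generator.
# All numbers below are hypothetical assumptions, used only to illustrate the tradeoff.

def expected_value_per_idea(hit_rate, value_if_true, cost_to_test):
    """Expected net value of generating and then testing a single idea."""
    return hit_rate * value_if_true - cost_to_test

# Hypothetical: a 1% hit rate, a breakthrough worth $1B, and $5M per experiment.
ev = expected_value_per_idea(hit_rate=0.01, value_if_true=1e9, cost_to_test=5e6)
print(f"Expected value per button press: ${ev:,.0f}")  # $5,000,000

# The box stops paying for itself once a single test costs more than
# hit_rate * value_if_true.
break_even = 0.01 * 1e9
print(f"Break-even cost per test: ${break_even:,.0f}")  # $10,000,000
```

On those made-up numbers the box is still worth pressing, but the margin collapses quickly as the hit rate drops or the testing cost rises, which is exactly the gather-dust-in-the-corner scenario.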
comment by dominicq · 2021-01-11T23:40:22.712Z · LW(p) · GW(p)
A good explanation of the difference between intellectual exploration and promoting people. You don't need to agree with everything someone says, and you don't even need to like them, but if they occasionally provide good insight, they are worth taking into account. If you propagate this strategy, you may even get to a "wisdom of the crowds" scenario - you'll have many voices to integrate in your own thinking, potentially getting you farther along than if you just had one thought leader you liked.
Having many smart people you don't necessarily agree with, like, or respect > having an idol you always agree with.
The prerequisite for all of this is to be a "high-decoupling" person. Rationalists (by definition?) have this personality, but this post is nevertheless very useful as it sketches out why separating the messenger, the context, and the message is good. And potentially, it teaches those with lower decoupling philosophies to stop "respecting" a Person Who Is Correct, but to start listening to many voices and judge for themselves what makes sense and what doesn't.
comment by Vladimir_Nesov · 2019-02-28T13:16:20.892Z · LW(p) · GW(p)
The apparent alternative to the reliable vs. Newton tradeoff when you are the thinker is to put appropriate epistemic status around the hypotheses. So you publish the book on Bible codes or all-powerful Vitamin C, but note in the preface that you remain agnostic about whether any version of the main thesis applies to the real world, pending further development. You build a theory to experience how it looks once it's more developed, and publish it because it was substantial work, even when upon publication you still don't know if there is a version of the theory that works out.
Maybe the theory is just beautiful, and that beauty isn't much diminished by its falsity. So call it philosophical fiction rather than a description of this world; the substantial activity of developing the theory and communicating it remains the same, without sacrificing the reliability of your ideas. There might even be a place for an edifice of such fictions that's similar to math in mapping out an aspect of the world that doesn't connect to physical reality for very long stretches. This doesn't seem plausible under current practice, but it seems possible in principle, so even calling such activity "fiction" might be misleading; it's more than mere fiction.
I don't think hypersensitive pattern-matching does much to destroy the ability to distinguish between an idea that you feel like pursuing and an idea that you see as more reliably confirmed to be applicable in the real world. So you can discuss this distinction when communicating such ideas. Maybe the audience won't listen to the distinction you are making, or won't listen because you are making this distinction, but that's a different issue.
comment by Decaeneus · 2024-03-15T15:28:15.913Z · LW(p) · GW(p)
In early 2024 I think it's worth noting that deep-learning based generative models (presently, LLMs) have the property of generating many plausible hypotheses, not all of which are true. In a sense, they are creative and inaccurate.
An increasingly popular automated problem-solving paradigm seems to be bolting a slow & precise-but-uncreative verifier onto a fast & creative-but-imprecise (deep learning based) idea fountain, a la AlphaGeometry and FunSearch; a toy sketch of this loop follows at the end of this comment.
Today, in a paper published in Nature, we introduce FunSearch, a method to search for new solutions in mathematics and computer science. FunSearch works by pairing a pre-trained LLM, whose goal is to provide creative solutions in the form of computer code, with an automated “evaluator”, which guards against hallucinations and incorrect ideas. By iterating back-and-forth between these two components, initial solutions “evolve” into new knowledge. The system searches for “functions” written in computer code; hence the name FunSearch.
Perhaps we're getting close to making the valuable box you hypothesize.
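To illustrate the shape of that pairing, here is a minimal, hypothetical sketch in Python. The names (propose_candidates, verify, search) and the toy acceptance test are assumptions invented for this comment, not the actual FunSearch or AlphaGeometry interfaces; a real system would sample programs or proofs from an LLM and check them with unit tests or a proof checker.

```python
import random

def propose_candidates(prompt, n=8):
    """Stand-in for a creative-but-imprecise generator (e.g. an LLM sampler)."""
    # Toy version: just guess integers. A real system would sample code or proofs.
    return [random.randint(0, 100) for _ in range(n)]

def verify(candidate):
    """Stand-in for a slow-but-precise evaluator (tests, proof checker, etc.)."""
    return candidate % 7 == 0  # toy acceptance criterion

def search(prompt, rounds=10):
    """Iterate the loop: generate freely, keep only what the verifier accepts."""
    accepted = []
    for _ in range(rounds):
        for candidate in propose_candidates(prompt):
            if verify(candidate):            # discard hallucinations and wrong ideas
                accepted.append(candidate)
    return accepted

print(search("find multiples of seven"))
```

The structural point is just that the generator never has to be reliable on its own; the verifier's precision is what turns a stream of cheap, mostly-wrong ideas into something usable.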
comment by StartAtTheEnd · 2023-08-07T20:21:30.473Z · LW(p) · GW(p)
Let me start out by agreeing with the spirit of your post; the kind of attitude you're recommending seems healthy to me.
I believe that I have some game-changing ideas among my stupid ideas, but like a lot of other people, I'm not conscientious enough to test them out unless they're relevant enough to my life. Telling other people our weird theories is less justifiable than testing them out ourselves, and for every genius there seem to be around 10 mentally ill people. Sometimes there's an overlap, and it's hard to tell, since the idea has to be successfully communicated, read, and understood by somebody skilled enough to evaluate it.
We should be skeptical of people who don't have a high level of formal education, and that includes me. If I theorize about quantum mechanics without having a solid foundation in the topic, the probability that I'm right is roughly zero. Formal education is already built on years of dedicated work by the world's brightest people, so even if I were an actual genius who spent months thinking about a theory, I probably wouldn't have a success rate above 1%.
I also want to express my disapproval of moral outrage. I don't think there's much of a relation to evil like you claim, but you should probably disagree with me here for the sake of your public image.
I don't believe that facts can be evil; they're necessarily neutral. I also don't think that opinions are worth much, even if they're said by geniuses, since they're just a reflection of some issue, from some perspective, according to some set of values. Nietzsche wanted overmen because he couldn't bear to see society decline. He also said that lower men were absolutely necessary, which supports my belief that most outrage is based on misunderstanding, a sort of triggered fear that somebody is promoting a value which would degrade the fitness of the listener.
Finally, I believe that the core issue is that society is too harsh. Nobody is perfect, and even the best make mistakes. If somebody acts in good faith, it wouldn't be fair to punish them for being wrong. Saying something wrong publicly has almost become taboo, and the mentality of those who look for imperfection in others in order to loudly "expose" it really offends me. I sense something like revenge, insecurity or cruelty in such behaviour, and the sort of atmosphere which results from it is bad. The wrath of public opinion can really break the spirit and self-expression of the loveliest of people.
I can necro-post, right? This article was recommended to me, so I thought it was new. Now I can see that it's 4 years old, but I've already written my comment.
comment by Aaron D. Franklin (aaron-d-franklin) · 2019-02-27T03:32:33.595Z · LW(p) · GW(p)
Well, I have to think there is some balancing act here. I do look at life a lot from an evolutionary standpoint, but a flock that listened to a leader who was 99% wrong would not survive long; or else the sequence has to start with crushing a homer and then getting it wrong 99 times. What's missing here is a full accounting of the downside of any one of the 99 bad ideas.
Or maybe, because we survived the basic-needs Malthusian "filter", that explains the "moral outrage": we're possibly just outraged at too much information and too many supposedly "true" memes, and the ratio keeps getting worse. A market crash of the idea ratio (stagnation, or plucked low-hanging fruit).
In the end, if you hew to pragmatism (survival and reproduction), you guarantee the great-idea-to-bad-idea ratio is relatively solid and tested. The theory is that we want stability, with just a touch of creativity. Society also allocates places with a high risk-reward ratio...and we give them a PhD.