Rule Thinkers In, Not Out

post by Scott Alexander (Yvain) · 2019-02-27T02:40:05.133Z · score: 101 (37 votes) · LW · GW · 43 comments

Imagine a black box which, when you pressed a button, would generate a scientific hypothesis. 50% of its hypotheses are false; 50% are true hypotheses as game-changing and elegant as relativity. Even despite the error rate, it’s easy to see this box would quickly surpass space capsules, da Vinci paintings, and printer ink cartridges to become the most valuable object in the world. Scientific progress on demand, and all you have to do is test some stuff to see if it’s true? I don’t want to devalue experimentalists. They do great work. But it’s appropriate that Einstein is more famous than Eddington. If you took away Eddington, someone else would have tested relativity; the bottleneck is in Einsteins. Einstein-in-a-box at the cost of requiring two Eddingtons per insight is a heck of a deal.

What if the box had only a 10% success rate? A 1% success rate? My guess is: still the most valuable object in the world. Even a 0.1% success rate seems pretty good, considering the possibilities (what if we ask the box for cancer cures, then test them all on lab rats and volunteers?). You have to go pretty low before the box stops being great.
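To put rough numbers on that intuition, here is a toy expected-value sketch; every figure in it (the payoff of a relativity-grade insight, the cost of one Eddington-style test) is invented purely for illustration:

```python
# Toy expected-value sketch for the hypothesis box.
# All dollar figures are invented for illustration only.

PAYOFF_IF_TRUE = 1_000_000_000_000  # assumed value of one relativity-grade insight
COST_PER_TEST = 10_000_000          # assumed cost of one full experimental test

for success_rate in [0.5, 0.1, 0.01, 0.001]:
    # Each button press buys a `success_rate` chance at the payoff,
    # and costs one test regardless of how the hypothesis turns out.
    ev = success_rate * PAYOFF_IF_TRUE - COST_PER_TEST
    print(f"success rate {success_rate:6.1%}: EV per press = ${ev:,.0f}")
```

Under these made-up numbers, even the 0.1% box returns about a hundred times its testing cost per press; the box only stops being a bargain when the success rate falls to around the cost-to-payoff ratio (here 0.001%).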

I thought about this after reading this list of geniuses with terrible ideas. Linus Pauling thought Vitamin C cured everything. Isaac Newton spent half his time working on weird Bible codes. Nikola Tesla pursued mad energy beams that couldn’t work. Lynn Margulis revolutionized cell biology by discovering mitochondrial endosymbiosis, but was also a 9-11 truther and doubted HIV caused AIDS. Et cetera. Obviously this should happen. Genius often involves coming up with an outrageous idea contrary to conventional wisdom and pursuing it obsessively despite naysayers. But nobody can have a 100% success rate. People who do this successfully sometimes should also fail at it sometimes, just because they’re the kind of person who attempts it at all. Not everyone fails. Einstein seems to have batted a perfect 1000 (unless you count his support for socialism). But failure shouldn’t surprise us.

Yet aren’t some of these examples unforgivably bad? Like, seriously Isaac – Bible codes? Well, granted, Newton’s chemical experiments may have exposed him to a little more mercury than can be entirely healthy. But remember: gravity was considered creepy occult pseudoscience by its early enemies. It subjected the earth and the heavens to the same law, which shocked 17th century sensibilities the same way trying to link consciousness and matter would today. It postulated that objects could act on each other through invisible forces at a distance, which was equally outside the contemporaneous Overton Window. Newton’s exceptional genius, his exceptional ability to think outside all relevant boxes, and his exceptionally egregious mistakes are all the same phenomenon (plus or minus a little mercury).

Or think of it a different way. Newton stared at problems that had vexed generations before him, and noticed a subtle pattern everyone else had missed. He must have amazing hypersensitive pattern-matching going on. But people with such hypersensitivity should be most likely to see patterns where they don’t exist. Hence, Bible codes.

These geniuses are like our black boxes: generators of brilliant ideas, plus a certain failure rate. The failures can be easily discarded: physicists were able to take up Newton’s gravity without wasting time on his Bible codes. So we’re right to treat geniuses as valuable in the same way we would treat those boxes as valuable.

This goes not just for geniuses, but for anybody in the idea industry. Coming up with a genuinely original idea is a rare skill, much harder than judging ideas is. Somebody who comes up with one good original idea (plus ninety-nine really stupid cringeworthy takes) is a better use of your reading time than somebody who reliably never gets anything too wrong, but never says anything you find new or surprising. Alyssa Vance calls this positive selection – a single good call rules you in – as opposed to negative selection, where a single bad call rules you out. You should practice positive selection for geniuses and other intellectuals.
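As a toy illustration of the difference between the two rules (the thinkers and idea scores below are entirely hypothetical):

```python
# Positive vs. negative selection over a thinker's ideas.
# Each idea gets a quality score from 1 (stupid) to 10 (game-changing);
# both thinkers and all scores are hypothetical.

def rule_in(ideas, good_threshold=9):
    """Positive selection: a single great idea rules the thinker in."""
    return any(score >= good_threshold for score in ideas)

def rule_out(ideas, bad_threshold=2):
    """Negative selection: a single terrible idea rules the thinker out."""
    return not any(score <= bad_threshold for score in ideas)

genius = [10] + [1] * 99  # one brilliant original idea amid 99 stupid takes
pundit = [5] * 100        # reliably sensible, never new or surprising

for name, ideas in [("genius", genius), ("pundit", pundit)]:
    print(f"{name}: ruled in? {rule_in(ideas)}, survives rule-out? {rule_out(ideas)}")
```

The two filters select opposite people: negative selection keeps only the pundit, and positive selection is the only one that catches the genius.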

I think about this every time I hear someone say something like “I lost all respect for Steven Pinker after he said all that stupid stuff about AI”. Your problem was thinking of “respect” as a relevant predicate to apply to Steven Pinker in the first place. Is he your father? Your youth pastor? No? Then why are you worrying about whether or not to “respect” him? Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.

I don’t want to take this too far. If someone has 99 stupid ideas and then 1 seemingly good one, obviously this should increase your probability that the seemingly good one is actually flawed in a way you haven’t noticed. If someone has 99 stupid ideas, obviously this should make you less willing to waste time reading their other ideas to see if they are really good. If you want to learn the basics of a field you know nothing about, obviously read a textbook. If you don’t trust your ability to figure out when people are wrong, obviously read someone with a track record of always representing the conventional wisdom correctly. And if you’re a social engineer trying to recommend what other people who are less intelligent than you should read, obviously steer them away from anyone who’s wrong too often. I just worry too many people wear their social engineer hat so often that they forget how to take it off, forget that “intellectual exploration” is a different job than “promote the right opinions about things” and requires different strategies.

But consider the debate over “outrage culture”. Most of this focuses on moral outrage. Some smart person says something we consider evil, and so we stop listening to her or giving her a platform. There are arguments for and against this – at the very least it disincentivizes evil-seeming statements.

But I think there’s a similar phenomenon that gets less attention and is even less defensible – a sort of intellectual outrage culture. “How can you possibly read that guy when he’s said [stupid thing]?” I don’t want to get into defending every weird belief or conspiracy theory that’s ever been [stupid thing]. I just want to say it probably wasn’t as stupid as Bible codes. And yet, Newton.

Some of the people who have most inspired me have been inexcusably wrong on basic issues. But you only need one world-changing revelation to be worth reading.

43 comments


comment by AnnaSalamon · 2019-02-28T00:02:57.342Z · score: 109 (41 votes) · LW · GW

I used to make the argument in the OP a lot. I applied it (among other applications) to Michael Vassar, who many people complained to me about (“I can’t believe he made obviously-fallacious argument X; why does anybody listen to him”), and who I encouraged them to continue listening to anyhow. I now regret this.

Here are the two main points I think past-me was missing:

1. Vetting and common knowledge creation are important functions, and ridicule of obviously-fallacious reasoning plays an important role in discerning which thinkers can (or can’t) help fill these functions.

(Communities — like the community of physicists, or the community of folks attempting to contribute to AI safety — tend to take a bunch of conclusions for granted without each-time-reexamining them, while trying to add to the frontier of knowledge/reasoning/planning. This can be useful, and it requires a community vetting function. This vetting function is commonly built via having a kind of “good standing” that thinkers/writers can be ruled out of (and into), and taking a claim as “established knowledge that can be built on” when ~all “thinkers in good standing” agree on that claim.

I realize the OP kind-of acknowledges this when discussing “social engineering”, so maybe the OP gets this right? But I undervalued this function in the past, and the term “social engineering” seems to me dismissive of a function that in my current view contributes substantially to a group’s ability to produce new knowledge.)

2. Even when a reader is seeking help brainstorming hypotheses (rather than vetting conclusions), they can still be lied-to and manipulated, and such lies/manipulations can sometimes disrupt their thinking for long and costly periods of time (e.g., handing Ayn Rand to the wrong 14-year-old; or, in my opinion, handing Michael Vassar to a substantial minority of smart aspiring rationalists). Distinguishing which thinkers are likely to lie or manipulate is a function more easily fulfilled by a group sharing info that rules thinkers out for past instances of manipulative or dishonest tactics (rather than by the individual listener planning to ignore past bad arguments and to just successfully detect every single manipulative tactic on their own).

So, for example, Julia Galef helpfully notes a case where Steven Pinker straightforwardly misrepresents basic facts about who said what. This is helpful to me in ruling out Steven Pinker as someone who I can trust not to lie to me about even straightforwardly checkable facts.

Similarly, back in 2011, a friend complained to me that Michael would cause EAs to choose the wrong career paths by telling them exaggerated things about their own specialness. This matched my own observations of what he was doing. Michael himself told me that he sometimes lied to people (not his words) and told them that the thing that would most help AI risk from them anyhow was for them to continue on their present career (he said this was useful because that way they wouldn’t rationalize that AI risk must be false). Despite these and similar instances, I continued to recommend people talk to him because I had “ruled him in” as a source of some good novel ideas, and I did this without warning people about the rest of it. I think this was a mistake. (I also think that my recommending Michael led to considerable damage over time, but trying to establish that claim would require more discussion than seems to fit here.)

To be clear, I still think hypothesis-generating thinkers are valuable even when unreliable, and I still think that honest and non-manipulative thinkers should not be “ruled out” as hypothesis-sources for having some mistaken hypotheses (and should be “ruled in” for having even one correct-important-and-novel hypothesis). I just care more about the caveats here than I used to.

comment by Scott Alexander (Yvain) · 2019-03-02T05:36:10.298Z · score: 44 (16 votes) · LW · GW

Thanks for this response.

I mostly agree with everything you've said.

While writing this, I was primarily thinking of reading books. I should have thought more about meeting people in person, in which case I would have echoed the warnings you gave about Michael. I think he is a good example of someone who both has some brilliant ideas and can lead people astray, but I agree with you that people's filters are less functional (and charisma is more powerful) in the real-life medium.

On the other hand, I agree that Steven Pinker misrepresents basic facts about AI. But he was also involved in my first coming across "The Nurture Assumption", which was very important for my intellectual growth and which I think has held up well. I've seen multiple people correct his basic misunderstandings of AI, and I worry less about being stuck believing false things forever than about missing out on Nurture-Assumption-level important ideas (I think I now know enough other people in the same sphere that Pinker isn't a necessary source of this, but I think earlier for me he was).

There have been some books, including "Inadequate Equilibria" and "Zero To One", that have warned people against the Outside View/EMH. This is the kind of idea that takes the safety wheels off cognition - it will help bright people avoid hobbling themselves, but also give gullible people new opportunities to fail. And there is no way to direct it, because non-bright, gullible people can't identify themselves as such. I think the idea of ruling geniuses in is similarly dangerous, in that there's no way to direct it only to non-gullible people who can appreciate good insight and throw off falsehoods. You can only say the words of warning, knowing that people are unlikely to listen.

I still think on net it's worth having out there. But the example you gave of Michael and of in-person communication in general makes me wish I had added more warnings.

comment by Wei_Dai · 2019-02-28T12:36:05.196Z · score: 26 (11 votes) · LW · GW

Can someone please fill me in, what are some of Michael Vassar's best ideas, that made him someone who people "ruled in" and encouraged others to listen to?

comment by sarahconstantin · 2019-06-08T17:16:34.988Z · score: 46 (13 votes) · LW · GW

Some examples of valuable true things I've learned from Michael:

  • Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
  • Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you're not any smarter.
  • Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it "comes out right".) Sometimes the best work of this kind doesn't look grandiose or prestigious at the time you're doing it.
  • The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
  • Science had higher efficiency in the past (late 19th-to-mid-20th centuries).
  • Examples of potentially valuable medical innovation that never see wide application are abundant.
  • A major problem in the world is a 'hope deficit' or 'trust deficit'; otherwise feasible good projects are left undone because people are so mistrustful that it doesn't occur to them that they might not be scams.
  • A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
  • Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not *all* conflicts are merely misunderstandings.
  • How intersubjectivity works; "objective" reality refers to the conserved *patterns* or *relationships* between different perspectives.
  • People who have coherent philosophies -- even opposing ones -- have more in common in the *way* they think, and are more likely to get meaningful stuff done together, than they can with "moderates" who take unprincipled but middle-of-the-road positions. Two "bullet-swallowers" can disagree on some things and agree on others; a "bullet-dodger" and a "bullet-swallower" will not even be able to disagree, they'll just not be saying commensurate things.


comment by Wei_Dai · 2019-06-09T09:02:26.988Z · score: 14 (6 votes) · LW · GW

Thanks! Here are my reactions/questions:

Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.

Seems right to me, as I was never tied to such a narrative in the first place.

Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.

What kind of risks is he talking about here? Also does he mean that people value their social positions too much, or that they're not taking enough risks even given their current values?

Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.

Hmm, I used to spend quite a bit of time fiddling with assembly language implementations of encryption code to try to squeeze out a few more percent of speed. Pretty sure that is not as productive as more "grandiose" or "prestigious" activities like thinking about philosophy or AI safety, at least for me [LW · GW]... I think overall I'm more afraid that someone who could be doing productive "grandiose" work chooses not to in favor of "fiddly puttering", than the reverse.

The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.

That seems almost certain to be true, but I don't see evidence that there is a big enough effect for me to bother spending the time to investigate further. (I seem to be doing fine without doing any of these things and I'm not sure who is deriving large benefits from them.) Do you want to try to change my mind about this?

Science had higher efficiency in the past (late 19th-to-mid-20th centuries).

Couldn't this just be that we've picked most of the low-hanging fruit, plus the fact that picking the higher fruit requires more coordination among larger groups of humans, and that is very costly? Or am I just agreeing with Michael here?

Examples of potentially valuable medical innovation that never see wide application are abundant.

This seems quite plausible to me, as I used to lament that a lot of innovations in cryptography never got deployed.

A major problem in the world is a ‘hope deficit’ or ‘trust deficit’; otherwise feasible good projects are left undone because people are so mistrustful that it doesn’t occur to them that they might not be scams.

"Doesn't occur to them" seems too strong but I think I know what you mean. Can you give some examples of what these projects are?

A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.

Agreed, and I think this is a big problem as far as advancing human rationality because we currently have a very poor theoretical understanding of coalitional strategies.

Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not all conflicts are merely misunderstandings.

This seems plausible but what are some examples of such "evil"? What happened to Enron, perhaps?

How intersubjectivity works; “objective” reality refers to the conserved patterns or relationships between different perspectives.

It would make more sense to me to say that objective reality refers to whatever explains the conserved patterns or relationships between different perspectives, rather than the patterns/relationships themselves. I'm not sure if I'm just missing the point here.

People who have coherent philosophies—even opposing ones—have more in common in the way they think, and are more likely to get meaningful stuff done together, than they can with “moderates” who take unprincipled but middle-of-the-road positions. Two “bullet-swallowers” can disagree on some things and agree on others; a “bullet-dodger” and a “bullet-swallower” will not even be able to disagree, they’ll just not be saying commensurate things.

I think I prefer to hold a probability distribution over coherent philosophies, plus a lot of weight on "something we'll figure out in the future".

Also a meta question: Why haven't these been written up or discussed online more? In any case, please don't feel obligated to answer my comments/questions in this thread. You (or others who are familiar with these ideas) can just keep them in mind for when you do want to discuss them online.

comment by Vaniver · 2019-02-28T17:24:35.572Z · score: 29 (9 votes) · LW · GW

My sense is that his worldview was 'very sane' in the cynical HPMOR!Quirrell sense (and he was one of the major inspirations for Quirrell, so that's not surprising), and that he was extremely open about it in person in a way that was surprising and exciting.

I think his standout feature was breadth more than depth. I am not sure I could distinguish which of his ideas were 'original' and which weren't. He rarely if ever wrote things, which makes the genealogy of ideas hard to track. (Especially if many people who do write things were discussing ideas with him and getting feedback on them.)

comment by Dr_Manhattan · 2019-03-01T17:59:59.053Z · score: 4 (2 votes) · LW · GW

Good points (similar to Raemon's). I would find it useful if someone created some guidance for safe ingestion (or an alternative source) of MV-type ideas/outlook; I find the "subtle skill of seeing the world with fresh eyes" potentially extremely valuable, which is why I suppose Anna kept on encouraging people.

comment by Vaniver · 2019-03-02T18:04:10.535Z · score: 21 (4 votes) · LW · GW

I think I have this skill, but I don't know that I could write this guide. Partly this is because there are lots of features about me that make this easier, which are hard (or too expensive) to copy. For example, Michael once suggested part of my emotional relationship to lots of this came from being gay, and thus not having to participate in a particular variety of competition and signalling that was constraining others; that seemed like it wasn't the primary factor, but was probably a significant one.

Another thing that's quite difficult here is that many of the claims are about values, or things upstream of values; how can Draco Malfoy learn the truth about blood purism in a 'safe' way?

comment by Dr_Manhattan · 2019-03-03T21:22:34.099Z · score: 2 (1 votes) · LW · GW

Thanks (& Yoav for the clarification). So in your opinion, is MV dangerous to a class of people with certain kinds of beliefs the way Harry was to Draco (where the risk was a pure necessity of breaking out of wrong ideas), or is he dangerous because of an idea package or bad motivations of his own?

comment by Vaniver · 2019-03-05T04:16:20.883Z · score: 33 (8 votes) · LW · GW

When someone has an incomplete moral worldview (or one based on easily disprovable assertions), there's a way in which the truth isn't "safe" if safety is measured by something like 'reversibility' or 'ability to continue being the way they were.' It is also often the case that one can't make a single small change, and then move on; if, say, you manage to convince a Christian that God isn't real (or some other thing that will predictably cause the whole edifice of their worldview to come crashing down eventually), then the default thing to happen is for them to be lost and alone.

Where to go from there is genuinely unclear to me. Like, one can imagine caring mostly about helping other people grow, in which a 'reversibility' criterion is sort of ludicrous; it's not like people can undo puberty, or so on. If you present them with an alternative system, they don't need to end up lost and alone, because you can directly introduce them to humanism, or whatever. But here you're in something of a double bind; it's somewhat irresponsible to break people's functioning systems without giving them a replacement, and it's somewhat creepy if you break people's functioning systems to pitch your replacement. (And since 'functioning' is value-laden, it's easy for you to think their system needs replacing.)

comment by Dr_Manhattan · 2019-03-03T13:20:00.095Z · score: 2 (1 votes) · LW · GW

Ah, sorry, would you mind elaborating on the Draco point in normie speak, if you have the bandwidth?

comment by Yoav Ravid · 2019-03-03T17:34:54.869Z · score: 5 (3 votes) · LW · GW

He is referring to HPMOR [LW · GW], where the following happens (major spoiler for the first 25 chapters):

Harry tries to show Draco the truth about blood purism, and Draco goes through a really bad crisis of faith. Harry tries to do it effectively and gracefully, but nonetheless it is hard, and could even be somewhat dangerous.

comment by Benito · 2019-03-03T20:13:15.891Z · score: 4 (2 votes) · LW · GW

I edited your comment to add the spoiler cover. FYI the key for this is > followed by ! and then a space.

comment by Yoav Ravid · 2019-03-07T11:56:21.427Z · score: 1 (1 votes) · LW · GW

Ah, great, thank you :)

comment by Raemon · 2019-03-01T19:38:41.008Z · score: 15 (8 votes) · LW · GW

Alas, I spent this year juuust coming to the conclusion that it was all more dangerous than I thought, and I am still wrapping my brain around it.

I suppose it was noteworthy that I don't think I got very damaged, and most of that was via... just not having prolonged contact with the four Vassar-type people that I encountered (the two people whom I did have more extended contact with, I think, may have damaged me somewhat).

So, I guess the short answer is "if you hang out with weird iconoclasts with interesting takes on agency and seeing the world, and you don't spend more than an evening every 6 months with them, you will probably get a slight benefit with little to no risk. If you hang out more than that you take on proportionately more risk/reward. The risks/rewards are very person specific."

My current take is something like "the social standing of this class of person should be the mysterious old witch who lives at the end of the road, who everyone respects but, like, you're kinda careful about when you go ask for their advice."

comment by Raemon · 2019-02-28T20:51:34.359Z · score: 12 (6 votes) · LW · GW

FWIW, I've never had a clear sense that Vassar's ideas were especially good (but, also, never had a clear sense that they weren't). More that Vassar generally operates in a mode that is heavily brainstorm-style thinking and involves seeing the world in a particular way. And this has high-variance-but-often-useful side effects.

Exposure to that way of thinking has a decent chance of causing people to become more agenty, or dislodged from a subpar local optimum, or gain some subtle skills about seeing the world with fresh eyes. The point is less IMO about the ideas and more about having that effect on people.

(With the further caveat that this is all a high variance strategy, and the tail risks do not fail gracefully, sometimes causing damage, in ways that Anna hints at and which I agree would be a much larger discussion)

comment by cousin_it · 2019-02-28T09:13:19.047Z · score: 24 (11 votes) · LW · GW

I decided to ignore Michael after our first in-person conversation, where he said I shouldn't praise the Swiss healthcare system, which I have lots of experience with, because MetaMed is the only working healthcare system in the world (and a roomful of rationalists nodded along to that, suggesting that I bet money against him or something).

This isn't to single out Michael or the LW community. The world is full of people who spout nonsense confidently. Their ideas can deserve close attention from a few "angel investors", but that doesn't mean they deserve everyone's attention by default, as Scott seems to say.

comment by gjm · 2019-02-28T16:32:22.543Z · score: 20 (9 votes) · LW · GW

There's a really good idea slipped into the above comment in passing; the purpose of this comment is to draw attention to it.

close attention from a few "angel investors"

Scott's article, like the earlier "epistemic tenure" one, implicitly assumes that we're setting a single policy for whose ideas get taken how seriously. But it may make sense for some people or communities -- these "angel investors" -- to take seriously a wider range of ideas than the rest of us, even knowing that a lot of those ideas will turn out to be bad ones, in the hope that they can eventually identify which ones were actually any good and promote those more widely.

Taking the parallel a bit further, in business there are more levels of filtering than that. You have the crazy startups; then you have the angel investors; then you have the early-stage VCs; then you have the later VCs; and then you have, I dunno, all the world's investors. There are actually two layers of filtering at each stage -- investors may choose not to invest, and the company may fail despite the investment -- but let's leave that out for now. The equivalent in the marketplace of ideas would be a sort of hierarchy of credibility-donors: first of all you have individuals coming up with possibly-crackpot ideas, then some of them get traction in particular communities, then some of those come to the attention of Gladwell-style popularizers, and then some of the stuff they popularize actually makes it all the way into the general public's awareness. At each stage it should be somewhat harder to get treated as credible. (But is it? I wouldn't count on it. In particular, popularizers don't have the best reputation for never latching onto bad ideas and making them sound more credible than they really are...)

(Perhaps the LW community itself should be an "angel investor", but not necessarily.)

comment by Dr_Manhattan · 2019-03-05T14:05:03.029Z · score: 2 (1 votes) · LW · GW

Hi Anna, since you've made the specific claim publicly (I assume intended as a warning), would you mind commenting on this

https://www.lesswrong.com/posts/u8GMcpEN9Z6aQiCvp/rule-thinkers-in-not-out#X7MSEyNroxmsep4yD

Specifically, it's a given that there's some collateral damage when people are introduced to new ideas (or, more specifically, broken out of their worldviews). You seem to imply that with Michael it's more than that (I think Vaniver alludes to it with the "creepy" comment).

In other words, is Quirrell dangerous to some people and deserving of a warning label, or do you consider Michael to be Quirrell+ because of his outlook?

comment by ChristianKl · 2019-02-28T17:09:14.759Z · score: 21 (5 votes) · LW · GW

What novel ideas did Steven Pinker publish? His role in the intellectual discourse seems to me to be providing long arguments that certain claims are true.

The quality of his arguments matters a great deal for that role.

comment by Vaniver · 2019-02-28T19:58:05.499Z · score: 16 (5 votes) · LW · GW

He appears to have had novel ideas in his technical specialty, but his public writings are mostly about old ideas that have insufficient public defense. There, novelty isn't a virtue (while correctness is).

comment by Kaj_Sotala · 2019-03-03T19:35:36.354Z · score: 17 (6 votes) · LW · GW

(After seeing this article many times, I only now realized that the title is supposed to be interpreted "thinkers should be ruled in, not out" rather than "there's this class of rule thinkers who are in, not out")

comment by ryan_b · 2019-02-27T15:16:14.572Z · score: 15 (5 votes) · LW · GW

This seems in the same vein as the Epistemic Tenure [LW · GW] post, and looks to be calling for an even lower bar than was suggested there. In the main, I agree.

comment by Vanessa Kosoy (vanessa-kosoy) · 2019-02-27T14:54:28.311Z · score: 11 (7 votes) · LW · GW

Einstein seems to have batted a perfect 1000

Did ey? As far as I know, ey continued to resist quantum mechanics (in its ultimate form) for eir entire life, and eir attempts to create a unified field theory led to nothing (or almost nothing).

comment by TheMajor · 2019-02-28T15:27:55.422Z · score: 11 (4 votes) · LW · GW

I feel like I'm walking into a trap, but here we go anyway.

Einstein disagreed with some very specific parts of QM (or "QM as it was understood at the time"), but also embraced large parts of it. Furthermore, on the parts Einstein disagreed with there is still to this day ongoing confusion/disagreement/lack of consensus (or, if you ask me, plain mistakes being made) among physicists. Discussing interpretations of QM in general and Einstein's role in them in particular would take way too long but let me just offer that, despite popular media exaggerations, with minimal charitable reading it is not clear that he was wrong about QM.

I know far less about Einstein's work on a unified field theory, but if we're willing to treat absence of evidence as evidence of absence here then that is a fair mark against his record.

comment by Vanessa Kosoy (vanessa-kosoy) · 2019-02-28T21:02:44.642Z · score: 9 (2 votes) · LW · GW

It seems that Einstein was just factually wrong, since ey did not expect the EPR paradox to be empirically confirmed (which only happened after eir death), but intended it as a reductio ad absurdum. Of course, thinking of the paradox did contribute to our understanding of QM, in which sense Einstein played a positive role here, paradoxically.

comment by TheMajor · 2019-02-28T23:24:11.376Z · score: 6 (4 votes) · LW · GW

Yes, I think you're right. Personally I think this is where the charitable reading comes in. I'm not aware of Einstein specifically stating that there have to be hidden variables in QM, only that he explicitly disagreed with the nonlocality (in the sense of general relativity) of Copenhagen. In the absence of experimental proof that hidden variables is wrong (through the EPR experiments) I think hidden variables was the main contender for a "local QM", but all the arguments I can find Einstein supporting are more general/philosophical than this. In my opinion most of these criticisms still apply to the Copenhagen Interpretation as we understand it today, but instead of supporting hidden variables they now support [all modern local QM interpretations] instead.

Or more abstractly: Einstein backed a category of theories, and the main contender in that category has been solidly busted (ongoing debate about hidden variables blah blah blah I disagree). But even today I think other theories in that pool still come out ahead of Copenhagen in likelihood, so his support of the category as a whole is justified.

comment by mr-hire · 2019-03-17T11:50:57.774Z · score: 1 (1 votes) · LW · GW

Off topic but... Is there something I don't know about Einstein's preferred pronouns? Did he prefer ey and eir over he and him?

comment by Vanessa Kosoy (vanessa-kosoy) · 2019-03-18T21:27:33.841Z · score: 4 (2 votes) · LW · GW

Oh, I just use the pronoun "ey" for everyone. IMO the entire concept of gendered pronouns is net harmful.

comment by habryka (habryka4) · 2019-03-18T21:48:03.201Z · score: 10 (5 votes) · LW · GW

While I don't object to the overall statement that gender pronouns are likely to be net harmful, I do find sentences without them a lot harder to read, and the switching cost has proven to be quite significant to me (i.e. I think it takes me about twice as much time to parse a sentence with non-standard pronouns, and I don't think I have any levers of changing that faster than just getting used to it, and in the absence of widespread consensus on a specific alternative I expect that cost to mostly just continue accruing because I can't ever get used to just a small subset of non-standard pronouns).

Depending on your values it might still be worth it for you to use them, but I do think it roughly doubles the cost of reading something (relatively short) for me. I am much more used to singular "they", so if you use that, I expect the cost to be more like 1.2x, which seems much less bad.

comment by cousin_it · 2019-02-27T07:43:34.872Z · score: 11 (6 votes) · LW · GW

In science or art bad ideas have no downside, so we judge talent at its best. But in policy bad ideas have disproportionate downside.

comment by Donald Hobson (donald-hobson) · 2019-02-27T08:28:14.912Z · score: 7 (6 votes) · LW · GW

Yet policy exploration is an important job. Unless you think that someone posting something on a blog is going to change policy without anyone double-checking it first, we should encourage suggestion of radically new policies.

comment by theraven · 2019-02-27T12:40:01.032Z · score: 7 (5 votes) · LW · GW

The problem is, ideas are cheap and data is expensive. So separating the correct from the incorrect ideas takes lots of time and money. Hence, people often want to know whether the black box has any value before pouring money down a hole. Spouting clearly wrong ideas calls into doubt the usefulness of any of the ideas, especially for people who have no track record of being insightful on the topic.

comment by avturchin · 2019-02-27T12:58:44.166Z · score: 1 (1 votes) · LW · GW

Yes, this is the point of the book "Lost in Math": arXiv is full of ideas, but testing each would cost billions. All low-hanging ideas have already been tested.

comment by ryan_b · 2019-02-27T14:53:25.900Z · score: 2 (1 votes) · LW · GW

How did you find that book?

comment by avturchin · 2019-02-27T15:54:42.870Z · score: 3 (2 votes) · LW · GW

I am subscribed to the author's Facebook and she wrote about it. https://www.facebook.com/sabine.hossenfelder

However, I don't remember why I subscribed to her; maybe some of her posts on her blog Backreaction appeared in my Google search about the multiverse.

comment by ryan_b · 2019-02-28T15:29:38.035Z · score: 2 (1 votes) · LW · GW

Ha! I apologize, I was ambiguous. What I should have said was, how did you like that book?

comment by theraven · 2019-02-27T16:29:35.250Z · score: 1 (1 votes) · LW · GW

You should follow her blog too. Lots of good interactions and criticisms of the current state of affairs in science overall, and theoretical physics in particular.

comment by avturchin · 2019-02-27T17:41:29.525Z · score: 1 (1 votes) · LW · GW

She posts links to her new blog posts on her FB, so I often read them.

comment by G Gordon Worley III (gworley) · 2019-03-01T01:56:44.997Z · score: 5 (3 votes) · LW · GW

You know, at first when I saw this post I was like "ugh, right, lots of people make gross mistakes in this area" but then didn't think much of it, but by coincidence today I was prompted to read something I wrote a while ago, and it seems relevant to this topic. Here's a quote from the article that was on a somewhat different topic (hermeneutics):

One methodology I’ve found especially helpful has been what I, for a long time, thought of as literary criticism, but for interpreting what people said as evidence about what they knew about reality. I first started doing this when reading self-help books. Many books in that genre contain plainly incorrect reasoning based on outdated psychology that has either been disproved or replaced by better models (cf. Jeffers, Branden, Carnegie, and even Covey). Despite this, self-help still helps people. To pick on Jeffers, she goes in hard for daily self-affirmation, but even ignoring concerns with this line of research raised by the replication crisis, evidence suggests it’s unlikely to help much toward her instrumental goal of habit formation. Yet she makes this error in the service of giving half of the best advice I know: feel the fear and do it anyway. The thesis that she is wrong because her methods are flawed contradicts the antithesis that she is right because her advice helps people, so the synthesis must lie in some perspective that permits her both to be wrong about the how and right about the what simultaneously.
My approach was to read her and other self-help more from the perspective of the author and the expected world-view of their readers than from my own. This led me to realize that, lacking better information about how the human mind works but wanting to give reasons for the useful patterns they had found, self-help authors often engage in rationalization to fit current science to their conclusions. This doesn’t make their conclusions wrong, but it does hide their true reasoning, which is often based more on capta than data and thus phenomenological rather than strictly scientific reasoning. But given that they and we live in an age of scientism, we demand scientific reasons of our thinkers, even if they are poorly founded and later turn out to be wrong, or else reject their conclusions for lack of evidence. Thus the contradiction is sublimated by understanding the fuller context of the writing.

comment by conjectures · 2019-02-28T22:53:22.644Z · score: 3 (2 votes) · LW · GW

I get the point, and it's a good one. We should have tolerance for the kooky ideas of those who brought us great things.

Practically though, I expect the oracle box would gather dust in the corner if it had a 1% hit rate, the experiments it required were expensive, and it didn't come with an impressive warranty. The world abounds with cranks, and it is all too easy for "out of the box" to get binned with "unhinged".

So I think this is a useful mode of thinking once there is some hindsight on a track record, but not without it.

comment by Vladimir_Nesov · 2019-02-28T13:16:20.892Z · score: 2 (1 votes) · LW · GW

The apparent alternative to the reliable vs. Newton tradeoff when you are the thinker is to put appropriate epistemic status around the hypotheses. So you publish the book on Bible codes or all-powerful Vitamin C, but note in the preface that you remain agnostic about whether any version of the main thesis applies to the real world, pending further development. You build a theory to experience how it looks once it's more developed, and publish it because it was substantial work, even when upon publication you still don't know if there is a version of the theory that works out.

Maybe the theory is just beautiful, and that beauty isn't much diminished by its falsity. So call it philosophical fiction, not a description of this world; the substantial activity of developing the theory and communicating it remains the same without sacrificing the reliability of your ideas. There might even be a place for an edifice of such fictions that's similar to math in mapping out an aspect of the world that doesn't connect to the physical reality for very long stretches. This doesn't seem plausible in the current practice, but seems possible in principle, so even calling such activity "fiction" might be misleading; it's more than mere fiction.

I don't think hypersensitive pattern-matching does a lot to destroy the ability to distinguish between an idea that you feel like pursuing and an idea that you see as more reliably confirmed to be applicable in the real world. So you can discuss this distinction when communicating such ideas. Maybe the audience won't listen to the distinction you are making, or won't listen because you are making this distinction, but that's a different issue.

comment by Aaron D. Franklin (aaron-d-franklin) · 2019-02-27T03:32:33.595Z · score: 0 (2 votes) · LW · GW

Well, I have to think there is some balancing act here. I do look at life a lot from an evolutionary standpoint, but a flock that listened to a leader who was 99% wrong would not survive long; either that, or the sequence has to start with crushing a homer and then getting it wrong 99 times. What's missing here is a full accounting of the downside of one of the 99 bad ideas.

Or maybe surviving the basic-needs Malthusian "filter" explains the "moral outrage": people are just outraged at too much information and too many supposedly "true" memes, and the ratio keeps getting worse. A market crash of the idea ratio (stagnation, or plucked low-hanging fruit).

In the end, if you hew to pragmatism (survival and reproduction), you guarantee the great-idea-to-bad-idea ratio is relatively solid and tested. The theory is that we want stability, with just a touch of creativity. Society also allocates places with a high risk/reward ratio... and we give them a PhD.