Schools Proliferating Without Practitioners
post by Davis_Kingsley · 2018-10-26T05:25:03.959Z · LW · GW · 46 comments
This is a link post for http://www.thelastrationalist.com/schools-proliferating-without-practitioners.html
Very interesting post from "The Last Rationalist" discussing how the rationalist community seems to have been slow to update on the comparative impracticality of formal Bayes and on the replication crisis in psychology.
I don't fully agree with this post - for instance, my impression is that there is in fact a replication crisis in medicine, which the author seems unaware of or understates - but I think the key points provide useful food for thought.
(Note: this is my opinion as a private individual, not an official opinion as a CFAR instructor or as a member of any other organization.)
46 comments
Comments sorted by top scores.
comment by cousin_it · 2018-10-26T09:30:52.810Z · LW(p) · GW(p)
I think the most important idea of the Sequences has aged well: the idea that you should have multiple working hypotheses, instead of falling in love with only one. It's summarized in T.C. Chamberlin's paper and Abram's post [LW · GW].
But Eliezer's sweet coating around it, the reporting on scientific controversies, hasn't aged so well:
- The replication crisis has hit many studies that Kahneman relied on
- Neural networks have outpaced neat AI
- Statisticians no longer spend much time arguing Bayesianism vs frequentism
- Physicists no longer spend much time arguing many-worlds vs collapse
Due to that, anyone trying to learn from the Sequences today will develop a certain archaic slant that isn't shared by experts. Getting rid of it took me some effort and embarrassment. So if we want an updated textbook, let's start by reporting on today's actual scientific controversies. (We should keep the CDT vs EDT controversy though, it has worked out well for us. And of course past controversies like religion or phlogiston should stay too.)
Replies from: Vaniver, TAG
↑ comment by Vaniver · 2018-10-26T17:24:10.934Z · LW(p) · GW(p)
Statisticians no longer spend much time arguing Bayesianism vs frequentism
Did they, when the Sequences were written? My impression was that the camps were well-established then, and well-established now, and the main difference has been that the Bayesians have had their tools improved by additional compute more than the Frequentists have; currently the question seems to be 'Bayesianism' vs. 'pragmatism' in much the same way that the debate in physics seems to be 'MWI' vs. 'shut up and compute.'
Like, out of those four areas, I was trained in three of them before I found Less Wrong, and maybe I just had good professors / got lucky, but I came in predisposed to think Eliezer's view was sensible for all of them, but also lots of people were pragmatists because it worked out better in a social context. (The few exceptions, like a decision analysis professor who flatly insisted on Bayesian probability, were because you really couldn't make sense of the class if you were interpreting everything as a frequentist. But all the analyses were simple enough that you didn't really use what a statistician would call 'Bayesian methods' rather than just the Bayesian interpretation of probability.)
Replies from: RiversHaveWings
↑ comment by RiversHaveWings · 2018-10-26T20:33:03.262Z · LW(p) · GW(p)
I think Eliezer's presentation of the Bayesianism vs frequentism arguments in science came from E. T. Jaynes' posthumous book Probability Theory: The Logic of Science, which was written about arguments that took place over Jaynes' lifetime, well before the Sequences were written.
↑ comment by TAG · 2018-10-26T10:15:04.024Z · LW(p) · GW(p)
The "payoff" of of MWI, that there is a better way of doing science than the scientific method seems to have been dropped as well.
Replies from: Vaniver
↑ comment by Vaniver · 2018-10-26T17:26:33.466Z · LW(p) · GW(p)
My recent post, Public Positions and Private Guts [LW · GW], is a conceptual descendant of the claim that there's a better way of doing science than the scientific method, in that it sees the scientific method as one member of a class of methods used for discovering and communicating knowledge of different types.
But "conceptual descendant" seems important; the typology in my post is perhaps something Eliezer saw then but isn't something he discusses in that post. (To be clear, that post is my take on Anna's concept, and where Anna got that typology from is unknown to me; it might even be Eliezer!)
Replies from: ingres
↑ comment by namespace (ingres) · 2018-10-26T21:15:01.203Z · LW(p) · GW(p)
One of the reasons why academia has all those strict norms around plagiarism and citing sources is that it makes the "conceptual family tree" legible. Otherwise it just kind of becomes soupy and difficult to discern.
comment by Vaniver · 2018-10-26T20:47:23.991Z · LW(p) · GW(p)
I discussed a few of the points here with some people at the MIRI lunch table, and Scott Garrabrant pointed out "hey, I loudly abandoned Bayesianism!". That is, we always knew that the ideal Bayesianism required infinite computation (you don't just consider a few hypotheses, but all possible hypotheses) and wouldn't work for embedded agents, and as MIRI became more interested in embedded agency they started developing theories of how that works. There was some discussion of how much this aligned with various people's claim that the quantitative side of Bayes wasn't all that practical for humans (with, I think, the end result being seeing them as similar).
For example, in August 2013 there was some discussion of Chapman's Pop Bayesianism, where I said:
I think the actual quantitative use of Bayes is not that important for most people. I think the qualitative use of Bayes is very important, but is hard to discuss, and I don’t think anyone on LW has found a really good way to do that yet. (CFAR is trying, but hasn’t come up with anything spectacular so far to the best of my knowledge.)
Then Scott Alexander responded, identifying Bayesianism in contrast to other epistemologies, and I identified some qualitative things I learned from Bayes, as did Tyrrell McAllister [LW · GW].
How does this hold up, five years later?
I still think Bayesianism as a synthesis of Aristotelianism and Anton-Wilsonism is superior to both; I think the operation underlying Bayesianism for embedded agents is not Bayesian updating, but rather something that approaches Bayesian updating in the limit, and that one of the current areas of progress in rationality is grappling with what's actually going on there. (Basically, this is because of the standard Critical Rationalist critique of Bayesianism, that the Bayesian view says the equivalent of "well, you just throw infinite compute at the problem to consider the superset of all possible answers, and then you're good," which is not useful advice to current practicing scientists. But the CR answer doesn't appear to be good enough either.)
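As a minimal sketch of the computational point (an illustrative toy example, not anything from MIRI's or CFAR's materials): exact Bayesian updating is easy once the hypothesis space is small and fully enumerated, and the embedded-agency complaint is precisely that it never is.

```python
# Exact Bayesian updating over a small, fully enumerated hypothesis space.
# The list of hypotheses is doing all the work here; an embedded agent
# never actually has the complete list.

hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]                  # candidate P(heads) for a coin
beliefs = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

def update(prior, heads):
    """One step of Bayes' rule: posterior(h) is proportional to P(data | h) * prior(h)."""
    unnormalized = {h: (h if heads else 1 - h) * p for h, p in prior.items()}
    total = sum(unnormalized.values())                  # P(data), the normalizer
    return {h: p / total for h, p in unnormalized.items()}

for flip in [True, True, False, True]:                  # observed data: H, H, T, H
    beliefs = update(beliefs, flip)

print(beliefs)  # probability mass shifts toward the heads-biased hypotheses
```

If the true hypothesis isn't on the list, no amount of updating recovers it, which is one way of reading the critique above.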
I think basically the same things I did then--that the actual quantitative use of Bayes is not that important for most people, and that CFAR's techniques for talking about the qualitative use of Bayes mostly don't refer to Bayes directly. I don't think this state of affairs represents a 'school without practitioners' / I still disagree with The Last Rationalist's assessment of things, but perhaps I'm missing what TLR is trying to point at.
Replies from: TAG
comment by Vaniver · 2018-10-26T06:49:28.275Z · LW(p) · GW(p)
I'm a CFAR instructor, but the following views are my personal opinion.
All of this is to support a claim that should be fairly obvious, but I suspect many readers will try to wriggle out of if I state it without extensive justification: Bayesian methods are a core feature of Eliezer Yudkowsky's version of rationality. You might even say that Eliezer's variant could be called "Bayesian Rationality". It's not a 'technique' or a 'tool', to Eliezer Bayes is the law, the irrefutable standard that provides a precise unchanging figure for exactly how much you should update in response to a new piece of evidence. Bayes shows you that there is in fact a right answer to this question, and you're almost certainly getting it wrong.
This in turn points toward the uncomfortable fact that Bayes does not seem to have helped the Bayesian Rationalists develop useful approximations of correct inference. In fact, it's not so much that we started with primitive approximations and then improved them. Rather, the Bayesian feature of Eliezer's philosophy seems to have left no conceptual descendants in the meme pool. For example, the Center For Applied Rationality's 2017 handbook does not include the phrase "Bayes Theorem" even once. It's taken by the current cohort as something of a status symbol, a neat novelty you can claim to have knowledge of to boost prestige.
I think this section is basically wrong, both about how Bayes relates to Eliezer's overall philosophy and about how it's impacted the rationality community. On Eliezer's philosophy, take this bit from Twelve Virtues of Rationality:
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
I find it harder to assess how the rationality community or its meme pool have been affected, mostly because I find it hard to separate out what I know and what the community knows. I was essentially a professional statistician before joining the rationality community and so was well familiar with Bayes and probabilistic reasoning; the main thing I got from Eliezer's form of Bayesianism was applying it to philosophical positions and intuitions and similar things, where I claim the primary thing of importance is the basic conceptual structure, rather than the actual math.
Which gets to the question of CFAR classes and conceptual descendants. CFAR did teach classes on Bayes, back in the old days, and found that they didn't help students--either the students already knew, or they didn't get much out of it. But whether or not current classes are conceptual descendants of Bayes presumably depends on whether the concepts are related, not whether or not Bayes is referenced. Indeed, if Bayes' Theorem were mostly a status symbol, wouldn't it be surprising for it not to be present in the handbook, just as an applause light?
Taking Double Crux [LW · GW] as an example of a central CFAR technique, I claim the core feature that distinguishes double crux from other methods of communication is that it focuses on cruxes, that is, subbeliefs that would change your mind on the superbelief, or in Bayesian terms, P(A|B) is high and P(A|~B) is low, rather than justifications, that is, arguments that establish that P(A) is high. The 'double' bit means that both of us disagree on the other fact B, and both of us think it's meaningful. This means we're attempting to jointly optimize our expected Bayesian update, essentially, rather than trying to convince each other, in a way that makes it clearly a conceptual descendant of Bayes' Theorem. Now, it's slightly awkward that the standard presentation of double crux treats beliefs as logical instead of probabilistic, but this seems to me to be a pedagogical tool instead of a conceptual flaw. (Indeed, the Bayesian treatment of double crux is obvious and smoothly connects to the practice of double crux in the wild.)
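As a sketch of that Bayesian reading (made-up numbers and function names, not CFAR's actual presentation): B is a crux for A to the extent that P(A|B) and P(A|~B) are far apart, and a double crux is one where that gap is large for both parties while they disagree about P(B).

```python
# Toy probabilistic double crux: two parties disagree about A, agree that
# B is what their belief in A hinges on, and disagree about P(B).

def p_a(p_b, p_a_given_b, p_a_given_not_b):
    """Law of total probability: P(A) = P(A|B)P(B) + P(A|~B)P(~B)."""
    return p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)

def crux_strength(p_a_given_b, p_a_given_not_b):
    """How far resolving B could move P(A); near zero means B isn't a crux."""
    return abs(p_a_given_b - p_a_given_not_b)

alice = dict(p_b=0.9, p_a_given_b=0.95, p_a_given_not_b=0.2)  # believes A, via B
bob = dict(p_b=0.2, p_a_given_b=0.95, p_a_given_not_b=0.2)    # doubts A, via ~B

for name, person in [("Alice", alice), ("Bob", bob)]:
    print(name, "P(A) =", round(p_a(**person), 2),
          "crux strength =", crux_strength(person["p_a_given_b"],
                                           person["p_a_given_not_b"]))
```

On these (hypothetical) numbers, Alice has P(A) ≈ 0.88 and Bob has P(A) = 0.35, but both stand to move substantially on evidence about B, which is the sense in which investigating B jointly optimizes their expected update.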
From the OP:
This is fairly close to the situation we find ourselves in with the bias literature. But nobody seems particularly shaken, and why should they? Our naive impression is just that, naivete. The straightforward conclusion is that if deleting knowledge from the canon causes no reaction, then it clearly wasn't important to people.
From one of Eliezer's recent posts:
(I’ve read a number of replications and variations on this research, and the effect size is blatant. I would not expect this to be one of the results that dies to the replication crisis, and I haven’t yet heard about the replication crisis touching it. But we have to put a maybe-not marker on everything now.)
From the OP again:
In my next post I'll show how this dynamic came about, and what it looks like when a community actively updates its knowledge in response to new information and events.
This seems potentially interesting, but I am somewhat worried that the model won't break down 'community' at a sufficient level of detail that it actually ends up being useful, or (more to my interests) won't engage with the tech of LW2.0. Curation, sequences, and similar things are attempts to make it such that new users see not just R:AZ but also Scott Alexander's sequences and also Inadequate Equilibria and so on. That said, a community is much more than its new members, and I suspect dealing with the ecosystem of individuals involved requires a wide view and many tools. [I am also not terribly impressed with the other posts on the blog so far, but I hope those to come will be useful.]
Replies from: SaidAchmiz, SaidAchmiz, ChristianKl
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-26T09:04:08.073Z · LW(p) · GW(p)
(This comment is specifically re: Bayes being core to Yudkowskian rationality.)
From the OP:
Bayesian methods are a core feature of Eliezer Yudkowsky’s version of rationality. You might even say that Eliezer’s variant could be called “Bayesian Rationality”. It’s not a ‘technique’ or a ‘tool’, to Eliezer Bayes is the law, the irrefutable standard that provides a precise unchanging figure for exactly how much you should update in response to a new piece of evidence. Bayes shows you that there is in fact a right answer to this question, and you’re almost certainly getting it wrong.
And from your comment:
I think this section is basically wrong … On Eliezer’s philosophy, take this bit from Twelve Virtues of Rationality:
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
But Eliezer also said this:
Now one can’t simultaneously define “rationality” as the winning Way, and define “rationality” as Bayesian probability theory and decision theory. But it is the argument that I am putting forth, and the moral of my advice to trust in Bayes [? · GW], that the laws governing winning have indeed proven to be math. If it ever turns out that Bayes fails—receives systematically lower rewards on some problem, relative to a superior alternative, in virtue of its mere decisions—then Bayes has to go out the window. “Rationality” is just the label I use for my beliefs about the winning Way—the Way of the agent smiling from on top of the giant heap of utility. Currently, that label refers to Bayescraft.
Well, and did we ever “discover [our] mistake”? Is that a thing that happened? Did Bayes fail, and consequently “go out the window”? Did the label of “rationality” ever get reassigned, to something other than “Bayescraft”? When did any of this take place?
As far as I can tell, nothing like that ever happened. So:
- Clearly, Bayes’s theorem, and Bayesian reasoning, were absolutely as central to Eliezer’s account of rationality as The Last Rationalist [henceforth, “TLR”] claims. Eliezer calls it “the laws governing winning”. He says that “rationality” is a label that “refers to Bayescraft”. (Need I dig up more quotes about how Bayes is the law and any deviation from it is guaranteed-incorrect? I can, easily; just give the word.) Saying that the quoted bit of the OP is “basically wrong” seems totally unjustifiable to me.
- There has never been any retraction, crisis of faith, renunciation, “halt and catch fire”, or any other reversal like that. Or, if there has been, it’s not widely known. (Of course, who knows what Eliezer may have said in some lost, un-google-able Facebook post? Maybe it’s in there somewhere…)
TLR overreaches when he calls Bayesian reasoning (according to Eliezer) “irrefutable”. You can excuse that as a figure of speech, or you can mark the OP down for an unsupportable claim; but either way, other than this one word, TLR’s description of Bayesian reasoning’s place in Eliezer’s rationality seems to be spot-on.
Replies from: Vaniver
↑ comment by Vaniver · 2018-10-26T17:37:43.036Z · LW(p) · GW(p)
Well, and did we ever “discover [our] mistake”? Is that a thing that happened? Did Bayes fail, and consequently “go out the window”? Did the label of “rationality” ever get reassigned, to something other than “Bayescraft”? When did any of this take place?
This is sort of hard to answer, because I want to be clear that I don't think Bayes-as-fact or Bayes-as-lens failed; the thing that I think changed is Bayes-as-growth-edge went from likely to unlikely. This is the thing you would expect if Bayes is less rich and complicated than 'the whole universe'; eventually you grok it and your 'growth edge' of mistakes to correct moves somewhere else, with the lens of Bayes following you there.
I also think it's important that Eliezer mostly doesn't work on rationality things anymore, most of the technically minded rationalists I know have their noses to the grindstone (in one sense or another), and the continued development of rationality is mostly done by orgs like CFAR and individuals like Scott Alexander, and I don't think they've found the direct reference to Bayes to be particularly useful for any of their goals (while I do think they've found habits of mind inspired by Bayes to be useful, as discussed in the parallel branch).
Edit: I do think it's somewhat surprising that the best pedagogical path CFAR has found doesn't route through Bayes, but the right thing to do here seems to be to update on evidence. ;)
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-26T09:24:43.734Z · LW(p) · GW(p)
I find it harder to assess how the rationality community or its meme pool have been affected
Well, for one thing, most of the community has not read even most, much less all, of the Sequences [LW · GW]. (Linked data is from 2016. Do you think this has changed much in two years? What percentage of currently active Less Wrong posters/commenters do you think have read the entirety of the Sequences, for example?)
Taking Double Crux as an example of a central CFAR technique …
It’s an interesting example. “Bayes” appears nowhere in the linked post, I notice. (Several times in the comments—and even there, never, as far as I can tell, in the context of linking the double crux technique to any Bayesian structure or methods.)
Now, it’s slightly awkward that the standard presentation of double crux treats beliefs as logical instead of probabilistic
Slightly?!
Indeed, the Bayesian treatment of double crux is obvious
I… beg to differ. (Which is not by any means to say that there can be no Bayesian treatment of double crux. I entirely take your word for its existence! Unquestionably, you have the expertise, here. But obvious?? No sir, not in the slightest.)
All of this is not to nitpick needlessly, but to point out that to a somewhat-outside observer, it certainly seems like Bayesianism has been quietly abandoned. If that impression is mistaken; if it is the case that actually, everything that CFAR does is based on the most unimpeachably Bayesian of foundations, and in fact constitutes the natural and optimal perfection of Eliezer’s correct, but unpolished, Bayesian approach; well… certainly you must forgive anyone who fails to come by that conclusion unaided.
Replies from: Vaniver
↑ comment by Vaniver · 2018-10-26T17:55:43.815Z · LW(p) · GW(p)
Do you think this has changed much in two years?
I expect this is roughly the same. The thing that I don't know is how much of the material in the Sequences is 'in the groundwater', where someone who hangs around a bunch of people who have read the Sequences picks up what they need to know. For example, I predict a similar survey done of members of the church I grew up in would find similar percentages had read similar fractions of the Bible (here's one of Americans as a whole), but it would be confused to think that because they hadn't read the Bible, they didn't know what was in it to a sufficient degree to get by. (Of course, being the sort of person who did take reading the whole Bible seriously, I take reading the whole Sequences seriously, but I also expect I want different things out of this community than the median member.)
But obvious?? No sir, not in the slightest.
Would you mind telling me what you tried? I agree that this is more like a comment in a math textbook of "this is obvious, so we're not going to show it" than it is like a comment in other contexts that assume the reader is prepared to do less work, but if there is an obstacle to recasting double crux in Bayesian terms I don't see it and would like to (as part of learning the PCK of double crux).
All of this is not to nitpick needlessly, but to point out that to a somewhat-outside observer, it certainly seems like Bayesianism has been quietly abandoned.
I am curious if the same thing seems true for atheism. In my mind, that faded into the background, and the tone of that discussion in the Sequences 'was a product of its time' and didn't age particularly well. But fading into the background is different from being quietly abandoned, I think, because the former reads more as "obviously true and uninteresting" and the latter reads more as "either false or unproductive." [Uninteresting in the sense that I don't see it as having wisdom I haven't incorporated yet, as mentioned in the other thread under the name 'growth edge'.]
Replies from: SaidAchmiz, SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-26T19:56:05.737Z · LW(p) · GW(p)
The thing that I don’t know is how much of the material in the Sequences is ‘in the groundwater’, where someone who hangs around a bunch of people who have read the Sequences picks up what they need to know.
Empirically, the answer seems to be “not even close to enough”.
I have regular occasion to chat with a group of folks within the “rationalist diaspora”, of whom a few have read all of the Sequences, many have read none or almost none of them, and some are somewhere in between. Many of each of these groups are also active in multiple online and offline rationalist communities, and so interact with many rationalists on a regular basis.
It is my very strong impression, confirmed again and again over the course of years, that, to a first approximation, people pick up almost nothing of the important parts of the Sequences by “osmosis”. The gap in understanding—and, even more, and more importantly, in integration and use—of the ideas that Eliezer presents in the Sequences, between those who have read and those who have merely “absorbed”, is vast. If someone has even skimmed the Sequences instead of reading them properly, or has read scattered posts here and there instead of going through the material in order—it shows. Very clearly.
Then, of course, there is the fact that people here, on Less Wrong, routinely make conceptual mistakes which are described with perfect clarity in (to pick one depressingly common example) “A Human’s Guide to Words”—mistakes so serious, and yet so basic, that they should (and do) make people who have actually read the Sequences cringe in secondhand embarrassment. (People make those mistakes, and are not corrected, or even downvoted.)
For example, I predict a similar survey done of members of the church I grew up in would find similar percentages had read similar fractions of the Bible (here’s one of Americans as a whole), but it would be confused to think that because they hadn’t read the Bible, they didn’t know what was in it to a sufficient degree to get by.
But this is a strange example, is it not? Surely, for this to be a useful analogue, you would have to believe, and be suggesting, that most of what is in the Sequences is either extraneous or actively wrong or detrimental!
After all, what exactly does a church member need to know, of the contents of the Bible, in order to “get by”? What does “getting by” constitute? (You know the answer far better than I do, so this is not a rhetorical question!) If I don’t know anything about what is in the Book of Jonah (except, perhaps, some vague impression that it involved aquatic travel), in what way would I fail to “get by”?
And, conversely, if I know all of what the Bible says about (to pick a popular example) which materials I should make my clothes out of, and attempt to apply this in my actual life, it would seem that I would “get by” worse, not better! In short, the percentage of the Bible that one needs to know, in order to “get by”, would, in fact, seem to be quite small.
So are the Sequences like this?
Then there is the fact that if you’re a member of a church, you are treated to regular sermons, in which someone who knows far more of the Bible than you do, and whose job, in fact, it is to know all about what’s in the Bible, what’s important, etc., tells you all about these things, explains them to you, explains what to do with that information, etc.
Now, I happen to know that there are, perhaps, a half-dozen Sequences reading/study groups throughout the world, which do go through the material, together (this is just those of which I know because they use ReadTheSequences.com; there may of course be others). Still, most rationalists, or “rationalist-adjacent” folks, hear no such regular “sermons”.
“Social osmosis” and “regular lectures by an expert” are very, very different things.
Replies from: Vaniver
↑ comment by Vaniver · 2018-10-26T22:46:32.836Z · LW(p) · GW(p)
I think I basically agree with your position, when it comes to 'whether the Sequences are useful' (yes) and 'the population as a whole' (not reading them enough). Most of the people I'm thinking of as counterexamples are exceptional people who are expert in something related who are closely affiliated with core members of the community, such that they could have had their visible rough edges polished off over significant interactions over the course of the last few years. But even then, I expect them to benefit from reading the Sequences, because it might expose invisible rough edges. Maybe there's an experiment here, around paying people in the reference class that seem unexposed but not underexposed to read more of the Sequences and determine whether or not they were actually underexposed?
Then there is the fact that if you’re a member of a church, you are treated to regular sermons, in which someone who knows far more of the Bible than you do, and whose job, in fact, it is to know all about what’s in the Bible, what’s important, etc., tells you all about these things, explains them to you, explains what to do with that information, etc.
Yeah, I think things like this would be quite good, and also possibly quite popular. Maybe this should even be a rationalist podcast, or something? I also note it feels closer to Slate Star Codex posts to me than Sequence posts, for some reason; it feels like Scott is often looking at something object-level and saying "and this connects to consideration X from the Sequences" in a way that makes it feel like a sermon on X with a particular example.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-26T23:23:53.471Z · LW(p) · GW(p)
Then there is the fact that if you’re a member of a church, you are treated to regular sermons, in which someone who knows far more of the Bible than you do, and whose job, in fact, it is to know all about what’s in the Bible, what’s important, etc., tells you all about these things, explains them to you, explains what to do with that information, etc.
Yeah, I think things like this would be quite good, and also possibly quite popular. Maybe this should even be a rationalist podcast, or something?
I agree that things like this are good. It’s important to note that some such things already exist (i.e. the reading/study groups I mentioned); one notable example is the Seattle Rationality Reading Group, which has done readings and discussions of parts of the Sequences. Do I think more things like this would be good? Yes, absolutely.
I don’t think a podcast would be all that great, though. (Or even blog posts; though a podcast would, of course, be much worse.) One of the great advantages of things along the sermon–colloquium continuum is that the interaction between lecturer / discussion-leader is in-person, so it can both be tailored to the audience, and interactive. (Would church sermons work better, or worse, as a podcast? What about discussion-style courses in college?)
(Fun fact: did you know that this is actually how the New York City rationalist scene got started? Several Overcoming Bias / Less Wrong readers—yes, this was that long ago—organized a small series of lectures / discussion groups, where one of the organizers would give a talk reviewing some part of the Sequences, and then there’d be discussion. It was great fun, intellectually stimulating, and even drew a somewhat more diverse audience than one finds at rationalist get-togethers these days. Of course, that quickly came to an end, and the gatherings stopped being “public colloquium, held in a small public meeting room” and became “hanging out with your friends, at someone’s apartment”. I do think the former format is worth doing, even today.)
Replies from: DanielFilan, Raemon
↑ comment by DanielFilan · 2018-10-26T23:28:57.541Z · LW(p) · GW(p)
Would church sermons work better, or worse, as a podcast?
Note that "sermon podcasts" are definitely a thing. See this article on why they're bad, and this article on why and how to do it.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-27T00:28:18.588Z · LW(p) · GW(p)
Wow! That first article (“How Podcasting Hurts Preaching”) is outstanding. Thank you for linking it! (I encourage everyone here to read it, by the way.)
As for the second link (“The Definitive Guide to Starting a Podcast for Your Church”)… well.
The first article is published in Christianity Today, a Christian website / publication.
The second, meanwhile, is published on the website of Buzzsprout, which is a company that makes money by selling podcasting services.
It is fascinating to note that the Buzzsprout article (an unreflective “listicle”-type piece) is exactly everything that the Christianity Today essay explicitly notes, analyzes, and warns against. (Needless to say, the second article does not engage with, or even acknowledge the existence of, the first article’s point of view. But then, why would it?)
Replies from: Viliam, habryka4
↑ comment by Viliam · 2018-11-05T23:59:56.728Z · LW(p) · GW(p)
I think the main reason against podcasting preaching is that religion is mostly about social experience. Replace "hundreds of people who meet in real space regularly" with a podcast, and all that's left is some theology, which frankly most religious people don't care about that much.
↑ comment by habryka (habryka4) · 2018-10-27T04:41:48.195Z · LW(p) · GW(p)
I am somewhat surprised that you like the first article. I didn't read it super deeply, but one of its core arguments seems to rest on this analogy, which is deeply flawed on multiple levels:
If value is a function of scarcity, then we must understand just what scarcity means. Scarcity can either be real and produce actual value, as in gold, virginity, and integrity, or it can be manufactured and produce perceived value, as in Pokémon cards, bitcoin, and diamonds.
It seems that the author does not understand how scarcity works, given that, for some completely unexplained reason "gold" has "actual value" but "diamonds" only have "perceived value". The virginity thing seems a bit weird, but I guess it makes sense in a Christian context; the integrity thing, though, seems deeply incoherent again. Why would "integrity" derive its value from being scarce? Maybe this is some Christian reference again, but my model of integrity and moral virtue has nothing to do with scarcity, and if anything, having high-integrity behavior be scarce diminishes the value of other high-integrity behavior (i.e. there are increasing marginal returns to cooperation).
I think I agree with the point the overall article makes, but it strikes me as exceptionally badly argued, and seems to provide little further evidence for its position. I probably wouldn't recommend that others read the article, and think they will probably be better served by thinking for themselves for 3 minutes about what the obvious arguments against sermon podcasts are. But then again, the article takes like 2 minutes to read, so this is probably just me bikeshedding and trying to distract myself from boring data-entry tasks.
Replies from: SaidAchmiz, Raemon
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-27T06:18:50.545Z · LW(p) · GW(p)
It seems that the author does not understand how scarcity works, given that, for some completely unexplained reason “gold” has “actual value” but “diamonds” only have “perceived value”.
This seems correct to me.
Re: diamonds: I assume the author is referring to the fact that while diamonds have industrial applications—e.g., in drill bits—as I understand it, synthetic diamonds work just fine for such uses. Naturally occurring diamonds are, famously, only considered to have decorative value due to the marketing and advertising efforts of De Beers. For this and related market-manipulation reasons, the market price of natural diamonds does not reflect any true scarcity or demand.
As for gold—there is no “synthetic gold”, obviously, and the gold that goes into your gold necklace could instead be used to plate connectors in electronics, etc. It is scarce because there’s a lot of demand for it, and limited supply. If tomorrow people everywhere ceased to think that shiny things are pretty, gold would not thereby become worthless (though its market value would decrease, obviously).
The virginity thing seems a bit weird, but I guess makes sense in a Christian context …
Well, I am reluctant to get into a potentially controversial discussion, but of course there are perfectly well understood game-theoretic issues here as well. The more salient point, however, is that if you think there’s something important about virginity, it is certainly a naturally scarce good; it is generally not necessary to try and keep the supply of virgins artificially low.
… but the integrity thing seems deeply incoherent again. Why would “integrity” derive its value from being scarce? Maybe this is some christian reference again, but my model of integrity and moral virtue has nothing to do with scarcity …
It may well be a Christian reference, but if so, it sailed over my head as well. Perhaps knowing the reference would lead me to deeper understanding of this one of the author’s points; but lacking that, the bit about “integrity” makes sense to me regardless.
The key thing to note is that the author does not say that integrity derives its value from being scarce. What he says is that in the case of integrity, scarcity produces real value. (Unlike the former claim, the latter claim does not exclude the possibility of integrity being valuable even if not scarce.)
It seems like a fairly straightforward claim, to say that a quality like integrity is all the more valuable for being rare. The difference between “real” and “perceived” value generated by scarcity is precisely the difference between something that is good even if not scarce (but all the more valuable if scarce), and something that is generally worthless, but which becomes perceived as valuable if scarce (often largely because it can then function as a positional good).
Then there is the fact that if integrity is scarce, and if it is also correlated with other moral virtues (that benefit those one interacts with), then it can serve as a signal—of conscientiousness, of dedication, of professionalism, of competence… of all sorts of things. If integrity is ubiquitous, this function disappears.
… and if anything, having high-integrity behavior be scarce diminishes the value of other high-integrity behavior (i.e. there are increasing marginal returns to cooperation).
I do not say that this (the non-parenthesized part) is wrong, as written, but it is not clear to me that what you take “integrity” to mean is the same as what the author means by it. I do not think that “integrity” has all that much to do with “cooperation” (indeed, integrity can often impede prosociality).
But lest I get totally distracted in a claim-rebuttal back-and-forth—I do want to answer your broader question, i.e. “why did I like this article”.
Churches, like any other institution, cannot simply adopt the latest communication technology with impunity. There are profound consequences for doing simply what is possible and popular in our culture without considering what is prudent.
This is not an argument, per se, and it is unlikely to be convincing anyone who doesn’t already believe it (or who agrees with it, taken literally, but thinks that it’s a contentless platitude). But this sort of thing absolutely needs to be said. It is of tremendous importance to keep this perspective in mind. Many contemporary developments in technology and society are driven by the opposite view. The quoted view is a direly necessary counterbalance.
I commented above on the “real vs. perceived value” bit. I think you do that part of the author’s argument a disservice. The concepts of artificial scarcity and positional goods are not new, not nonsense, and certainly do not bear casual dismissal. The concept of constructed systems of limitations as pathways to meaning is also not new (find it in flow psychology, find it in art, find it in game design… and in many loftier places).
The concepts of mediation and time-shifting, on the other hand, are… if not new as such, certainly underexplored. There is a lot to think about, here. (And as for micro-cultures… surely you’ve read this gwern classic?)
Finally, it is hard to entirely dislike a sober and serious essay on the intersection of psychology, theology, and technology, that also contains a line like:
Now really is a good time to ask, “What would Jesus podcast?”
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2018-10-27T09:48:55.926Z · LW(p) · GW(p)
It seems like a fairly straightforward claim, to say that a quality like integrity is all the more valuable for being rare.
I think the difference is between associating with producers vs. consumers. When something is more scarce, its price increases, which makes it less valuable to consumers and more valuable to producers of individual items. So for people who perceive themselves as producers of things like integrity and virginity and honest labor, scarcity would contribute to their value. And for things like medicine, or the naive models of bitcoin and diamonds, scarcity decreases their value by increasing the price, since the audience of the article identify as consumers.
↑ comment by Raemon · 2018-10-27T05:35:02.139Z · LW(p) · GW(p)
I was actually going to respond with "yeah, I quite liked the article... but wow that one particular line was super weird and silly."
And yes that makes their argument suspicious, but, well, a) it's an article about how to do church right, so it's unlikely I was going to like their epistemics in the first place, b) as weird and silly as that line is, I don't actually think the argument depends on them having gotten it right.
If they posted on LW I'd criticize it on several levels, but, well, having found it on some random christian website I'm just not going to be evaluating it through that lens unless it really surprises me.
↑ comment by Raemon · 2018-10-26T23:27:14.085Z · LW(p) · GW(p)
FYI, the NY group still does this once every few years, with varying degrees of success. It's just that when most people have read the sequences it gets a bit stale. (From what I gather, most recently there's been a sequence reading group by people who haven't read them before, although I'm not 100% sure about the details since I don't live in NY anymore)
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-26T23:46:59.462Z · LW(p) · GW(p)
Interesting! Do you happen to know how one might track down said reading group? (Or do they not want to be tracked down?)
Replies from: Raemon
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-26T20:14:32.800Z · LW(p) · GW(p)
All of this is not to nitpick needlessly, but to point out that to a somewhat-outside observer, it certainly seems like Bayesianism has been quietly abandoned.
I am curious if the same thing seems true for atheism. In my mind, that faded into the background, and the tone of that discussion in the Sequences ‘was a product of its time’ and didn’t age particularly well. But fading into the background is different from being quietly abandoned, I think, because the former reads more as “obviously true and uninteresting” and the latter reads more as “either false or unproductive.” [Uninteresting in the sense that I don’t see it as having wisdom I haven’t incorporated yet, as mentioned in the other thread under the name ‘growth edge’.]
Yes, the same absolutely seems true for atheism. It didn’t fade into the background; it was actively abandoned. It wasn’t even a quiet abandonment. The atheism/secularism that permeates the Sequences and which was an explicit assumption and policy of Less Wrong 7–9 years ago would get you heavily downvoted and censured here, and lambasted and possibly banned on SSC. “Either false or unproductive” is exactly how I’d describe most rationalists’ (and certainly that of most of the ones in visible/influential online spaces) attitude toward atheism/secularism/etc.
It is all well and good to acknowledge that you have incorporated the wisdom offered by some perspective, and having nothing further to learn from it. It is entirely a different matter to reverse course, to abandon that perspective and to adopt its opposite.
Replies from: Vaniver, DanielFilan, Gurkenglas
↑ comment by Vaniver · 2018-10-26T22:19:22.720Z · LW(p) · GW(p)
The atheism/secularism that permeates the Sequences and which was an explicit assumption and policy of Less Wrong 7–9 years ago would get you heavily downvoted and censured here, and lambasted and possibly banned on SSC.
I am surprised by this claim, and would be interested in seeing examples. In 2014 (closer to 7 years ago than today), Scott wrote this:
Were we ever this stupid? Certainly I got in fights about “can you still be an atheist rather than an agnostic if you’re not sure that God doesn’t exist,” and although I took the correct side (yes, you can), it didn’t seem like oh my god you are such an idiot for even considering this an open question HOW DO YOU BELIEVE ANYTHING AT ALL WITH THAT MINDSET.
Now, that's about a different question than "is God real or not?" (in the comments, Scott mentions Leah and the ~7% of rationalists who are theists).
In the R:AZ preface, Eliezer writes this:
My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, “And so this is stupid.” But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as “Nothing bad will happen to me if I say I believe this; I won’t lose status if I say I believe in homeopathy,” and that derisive laughter by comedians can help people wake up from the dream.
Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt.
Despite my mistake, I am happy to say that my readership has so far been amazingly good about not using my rhetoric as an excuse to bully or belittle others. (I want to single out Scott Alexander in particular here, who is a nicer person than I am and an increasingly amazing writer on these topics, and may deserve part of the credit for making the culture of Less Wrong a healthy one.)
In 2017, Scott writes How Did New Atheism Fail So Miserably?, in a way that signals that Scott is not a New Atheist and is mostly annoyed by them, but is confused by why the broader culture is so annoyed by them. But the sense that someone who is all fired up about God not being real would be 'boring at parties' is the sense that I get from Scott's 2017 post, and the sense that I get from reading Scott in 2014, or what I remember from LW in 2012. Which is quite different from "would get you banned" or religion being a protected class.
Like, when I investigate my own views, it seems to me like spending attention criticizing supernaturalist religions is unproductive because 1) materialist reductionism is a more interesting and more helpful positive claim that destroys supernaturalist religion 'on its own', and 2) materialist religions seem like quite useful tools that maybe we should be actively building, and allergies to supernaturalist religions seem unhelpful in that regard. This doesn't feel like abandoning the perspective and adopting the opposite, except for that bit where the atheist has allergies to the things that I want to do and I think those allergies are misplaced, so it at least feels like it feels that way to them.
Replies from: Viliam
↑ comment by Viliam · 2018-11-06T00:12:39.326Z · LW(p) · GW(p)
I am not sure how to best handle the topic of religion in a community blog.
If it is a single-person blog, the optimal solution would probably be mostly not to even mention it (just focus on naturalistic explanations of the world), and once in a long while to explain, politely, why it is false (without offending people who disagree).
With a community blog, the problem is that being polite towards religion may be interpreted by religious people as an invitation to contribute, but their contributions would inevitably include pro-religious statements, at least sometimes.
And if you make it explicit like "religious people are welcome, but any pro-religious statements will be immediately deleted, and the author may be banned", that sounds like your atheism is a dogma, not an outcome of a logical process (which you merely don't want to repeat over and over again, because you have more interesting stuff to write about). And even here I would expect a lot of rules-lawyering, strongly hinting, etc.
↑ comment by DanielFilan · 2018-10-26T20:42:04.749Z · LW(p) · GW(p)
“Either false or unproductive” is exactly how I’d describe most rationalists’ (and certainly that of most of the ones in visible/influential online spaces) attitude toward atheism/secularism/etc.
This really surprises me. Do you mean to say that if you asked 20 randomly-selected high-karma LW users whether God as depicted in typical religions exists, at least 10 would say "yes"? If so, I strongly disagree, based on my experience hanging out and living with rationalists in the Bay Area, and would love to bet with you. (You might be right about SSC commenters; I'll snobbishly declare them "not real rationalists" by default.)
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-26T21:36:02.663Z · LW(p) · GW(p)
In these conversations, it pays to be precise. To wit:
Do you mean to say that if you asked 20 randomly-selected high-karma LW users whether God as depicted in typical religions exist, at least 10 [half of the total —SA.] would say “yes”?
(Emphasis mine.)
I do not mean to say this, no. There is, indeed, a difference between all of these:
- “The most high-status[1] members of a community (almost) all believe X.”
- “The members of a community (almost) all believe X.”
- “The vocal members of a community (almost) all believe X (and say so).”
- “The members of a community (almost) all believe X, and the high-status members of that community do not make publicly clear their disbelief in X.”
- “The vocal members of a community (almost) all believe X (and say so), and the high-status members of that community do not gainsay them.”
What I said was somewhere in the #s 2–5 region. You asked whether I really believed #1. I never claimed to.
What’s more, my use of the disjunction was deliberate. I did not mean to imply that the breakdown between “believes that atheism/secularism is false” and “considers atheism/secularism unproductive” is even. It would surprise me if it were. But if, in a community, 5% believe X, 5% believe ¬X, and 90% (including all or almost all of the highest-status individuals) probably more or less believe ¬X but consider it unproductive to discuss or even clearly state the (claimed) fact that ¬X, or possibly even unproductive to believe ¬X, then this community will be friendly to discussions of X but unfriendly to objections that, actually, ¬X; and X (and X-derived/adjacent memes) will spread more easily than ¬X (and ¬X-derived/adjacent memes).
Finally, nothing at all that I said referred to, or implied, anything about “as depicted in typical religions”. That, I regret to note, was entirely a strawman on your part.
[1] We can, perhaps—and should, probably—have a discussion about how karma on Less Wrong maps to status in rationalist communities. This is not that discussion, however.
Replies from: DanielFilan
↑ comment by DanielFilan · 2018-10-26T22:30:38.996Z · LW(p) · GW(p)
I agree that it pays to be precise, which is why I was asking if you believed that statement, rather than asserting that you did. I guess I'd like to hear what proposition you're claiming - is "X" meant to stand in for "atheism/secularism" there? Atheism is almost precise (although I start wondering whether simulation hypotheses technically count, which is why I included the "as depicted in typical religions" bit), but I at least could map "secularism" to a variety of claims, some of which I accept and some of which I reject. I also still don't know what you mean by "unproductive" - if almost everybody I interact with is an atheist, and therefore I don't feel the need to convince them of atheism, does that mean that I believe atheism is unproductive? (Again, this is a question, not me claiming that your answer to the question will be "yes")
Replies from: Vaniver, ChristianKl
↑ comment by Vaniver · 2018-10-26T23:10:08.971Z · LW(p) · GW(p)
if almost everybody I interact with is an atheist, and therefore I don't feel the need to convince them of atheism, does that mean that I believe atheism is unproductive?
I note an important distinction between "don't feel the need to preach to the choir" and "don't feel the need to hold people accountable for X". It's one thing if I'm operating in a high trust environment where people almost never steal from each other, and so policies that reduce the risk of theft or agitating against theft seem like a waste of time, and it's another thing if I should shrug off thefts when I witness them because thefts are pretty rare, all things considered.
By analogy, it seems pretty important if theism is in the same category as 'food preferences' (where I would hassle Alice if Alice hassled Bob over Bob liking the taste of seaweed) or as 'theft' (where I would hassle Alice over not hassling Bob over Bob stealing things). (Tolerate tolerance [LW · GW], coordinate meanness, etc.)
[Edit: to be clear, I don't think theism is obviously in the same category as stealing, but I think it is clearly an intellectual mistake and have a difficult time trusting the thinking of someone who is theist for reasons that aren't explicitly social, and when deciding how much to tolerate theism one of the considerations is something like "what level of toleration leads to the lowest number of theists in the long-run, or flips my view on atheism?".]
Replies from: Gurkenglas
↑ comment by Gurkenglas · 2018-10-28T02:08:01.416Z · LW(p) · GW(p)
I would hassle Alice over not hassling Bob over Bob stealing things
Beware punishing nonpunishers [LW · GW]!
↑ comment by ChristianKl · 2018-10-30T15:21:07.353Z · LW(p) · GW(p)
In the LessWrong sense, "productive" means that you can write posts based on a certain framework that produce valuable ideas.
Bring Back the Sabbath [LW · GW] would be a post that productively uses Judaism. Multiple people in our local dojo found a lot of value in that post even though they don't have any personal connection to Judaism.
Elsewhere you find us making up gods like Omega and Moloch and using them productively.
No comparable posts come to mind from the last few years that made similarly productive use of atheism or secularism.
↑ comment by Gurkenglas · 2018-10-28T02:15:54.950Z · LW(p) · GW(p)
You mean we shouldn't bash religion [LW(p) · GW(p)], right?
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-28T05:44:27.531Z · LW(p) · GW(p)
Yes. That comment was correctly self-described as “controversial” for a good reason.
↑ comment by ChristianKl · 2018-10-27T17:44:06.496Z · LW(p) · GW(p)
One of the core principles of Bayesianism was "0 and 1 aren't probabilities". Double crux, on the other hand, seems to be about switching from the binary "I hold position X" to "I got a reason to stop holding position X".
Do you understand Double Crux differently?
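As an illustration of why that principle bears on the binary framing (a sketch with made-up numbers, not anyone's official formulation): in odds form, Bayes' rule multiplies prior odds by a likelihood ratio, and a probability of exactly 0 or 1 corresponds to odds of zero or infinity, which no finite amount of evidence can move.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Probabilities of exactly 0 or 1 correspond to odds of 0 or infinity,
# which no finite likelihood ratio can budge.

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

def update(p, likelihood_ratio):
    """Posterior probability after evidence with the given P(E|H)/P(E|~H)."""
    return prob(odds(p) * likelihood_ratio)

print(update(0.5, 10))  # ~0.909: an uncertain prior moves easily
print(update(0.0, 10))  # 0.0: certainty of falsehood never moves
# update(1.0, 10) raises ZeroDivisionError -- certainty of truth breaks the machinery
```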
Replies from: Raemon, Vaniver
↑ comment by Raemon · 2018-10-27T19:31:05.756Z · LW(p) · GW(p)
I think the binary version of Double Crux is pretty clearly a simplified, shorthand form.
"Find the evidence that would significantly change my evaluation of X's likelihood, to a degree that actually impacts my actions or overall worldview or whatever" feels like a more accurate summary of how it's used in practice.
↑ comment by Vaniver · 2018-10-27T19:01:35.160Z · LW(p) · GW(p)
This is what I mean when I say that the presentation of Double Crux is logical, instead of probabilistic. The version of double crux that I use is generally probabilistic, and, I claim, is an obvious modification of the logical version.
Replies from: ChristianKl
↑ comment by ChristianKl · 2018-10-30T14:58:59.544Z · LW(p) · GW(p)
That leaves the open question of why CFAR teaches the logical one instead of the probabilistic one.
comment by namespace (ingres) · 2018-10-26T21:07:51.386Z · LW(p) · GW(p)
So how many "confirmed kills" of ideas found in the sequences actually are there? I know the priming studies got eviscerated, but the last time I looked into this I couldn't exactly find an easy list of "famous psychology studies that didn't replicate" to compare against.
Replies from: clone of saturn, SaidAchmiz, Vaniver
↑ comment by clone of saturn · 2018-10-26T22:35:10.858Z · LW(p) · GW(p)
Here's the list of studies included in the original Reproducibility Project: Psychology.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-26T21:18:28.465Z · LW(p) · GW(p)
Well, if someone were interested in this, it seems possible (though time-consuming, of course) to go through every mentioned study or result in the Sequences [LW(p) · GW(p)], research it, and figure out whether it’s been replication crisis’d, etc. This seems like valuable information to gather, and (as noted in the linked comment thread) the tools to aggregate, store, and collaborate on that gathered info already exist.
I do not know of any extant list, though.
↑ comment by Vaniver · 2018-10-26T22:34:12.586Z · LW(p) · GW(p)
I know the priming studies got eviscerated, but the last time I looked into this I couldn't exactly find an easy list of "famous psychology studies that didn't replicate" to compare against.
My understanding is that even this story is more complicated; Lauren Lee summarizes it on Facebook as follows:
OK, the Wikipedia article on priming mostly refers to effects of the first kind (faster processing on lexical decision tasks and such) and not the second kind (different decision-making or improved performance in general).
So uh. To me it just looks like psych researchers over-loaded the term 'priming' with a bunch of out-there hypotheses like "if the clipboard you're holding is heavy, you are more likely to 'feel the significance' of the job candidate." I mean REALLY, guys. REALLY.
Priming has been polluted, and this is a shame.
I would not be surprised if most of the references in the Sequences are to old-school definitions of various terms that are more likely to survive, which complicates the research task quite a bit.
comment by ChristianKl · 2018-10-27T18:00:58.001Z · LW(p) · GW(p)
Yet a newcomer to the community would get the impression that the bias literature and Bayes Theorem are central features.
I don't think that's the case. A newcomer who reads the posts that were published in the last few years is unlikely to come away thinking that they are central features.
I think the people in our community mostly updated. Instead of the Sequences being rewritten, a lot of new posts have been written.