Deflationism isn't the solution to philosophy's woes
post by Rob Bensinger (RobbBB) · 2021-03-10T00:20:07.357Z · LW · GW · 44 comments
[epistemic status: thinking out loud; reporting high-level impressions based on a decent amount of data, but my impressions might shift quickly if someone proposed a promising new causal story of what's going on]
[context warning: If you're a philosopher whose first encounter with LessWrong happens to be this post, you'll probably be very confused and put off by my suggestion that LW outperforms analytic philosophy.
To that, all I can really say without [? · GW] starting [? · GW] a very [? · GW] long [LW · GW] conversation [? · GW] is: the typical Internet community that compares itself favorably to an academic field will obviously be crazy or stupid. And yet academic fields can be dysfunctional, and low-hanging fruit can sometimes go unplucked for quite a while; so while it makes sense to have a low prior probability on this kind of thing, this kind of claim can be true, and it's important to be able to update toward it (and talk about it) in cases where it does turn out to be true.
There are about 6700 US philosophy faculty, versus about 6000 LessWrong commenters to date; but the philosophy faculty are doing this as their day job, while the LessWrong users are almost all doing it in their off time. So the claim that LW outperforms is prima facie interesting, and warrants some explanation.
OK, enough disclaimers.]
A month ago, Chana Messinger said:
Rob says, "as an analytic philosopher, I vote for resolving this disagreement by coining different terms with stipulated meanings."
But I constantly hear people complain that philosophers are failing to distinguish the different things they mean by words, and that if they just disambiguated, so many philosophical issues would be solved (most recently from Sam and Spencer on Spencer's podcast).
What's going on here? Are philosophers good at this or bad at this? Would disambiguation clear up philosophical disputes?
My cards on the table: I understand analytic philosophers to be very into clearly defining their terms, and a lot of what happens in academic philosophy is arguments about which definitions capture which intuitions or have what properties, and how much, but I'm very curious to find out if that's wrong.
Sam Rosen replied:
Philosophers are good at coming up with distinctions. They are not good at saying, “the debate about the true meaning of knowledge is inherently silly; let’s collaboratively map out concept space instead.”
An edited version of my reply to Chana and Sam:
Alternative hypothesis: philosophers are OK at saying 'this debate is unimportant'; but...
- (a) ... if that's your whole view, there's not much to say about it.
Sometimes, philosophers do convince the whole field in one fell swoop. A Bertrand Russell comes along and closes the door on a lot of disputes, and future generations just don't hear about them anymore.
But if you fail to convince enough of your colleagues, then the people who think this is important will just keep publishing about it, while the people who think the debate is dumb will roll their eyes and work on something else. I think philosophers in a given subfield tend to think that a large number of the disputes in other subfields are silly and/or unimportant.
- (b) ... there's a culture of being relaxed, or something to that effect, in philosophy?
Philosophical fields are fine with playing around with cute conceptual questions, and largely feel no need to move on to more important things when someone gives a kinda-compelling argument for 'this is unimportant'.
Prominent 20th-century philosophers like David Lewis and Peter van Inwagen acquired a lot of their positive reputation from the fact that all their colleagues agreed their views were obviously silly and stupid, but there was some disagreement and subtlety in saying why they were wrong, and they proved to be a useful foil for a lot of alternative views. Philosophers don't get nerd-sniped away from their more important work; nerd-sniping just is the effective measure of philosophical importance.
We've still ended up with a large literature of philosophers arguing that this or that philosophical dispute is non-substantive.
There's a dizzying variety of different words used for a dizzying variety of different positions to the effect of 'this isn't important' and/or 'this isn't real'.
There are massive literatures drawing out the fine distinctions between different deflationary vs. anti-realist vs. nominalist vs. nihilist vs. reductionist vs. eliminativist vs. skeptical vs. fictionalist vs. ... variants of positions.
Thousands of pages have been written on 'what makes a dispute merely verbal, vs. substantive? and how do we tell the difference?'. Thousands of journal articles cite Goodman's 'grue and bleen' (and others discuss Hirsch's 'incar and outcar', etc.) as classic encapsulations of the problem 'when are concepts joint-carving, and when are words poorly-fitted to the physical world's natural clusters?'. And then there's the legendary "Holes," written by analytic philosophers for analytic philosophers, satirizing and distilling the well-known rhythm of philosophical debates about which things are fundamental or real vs. derived or illusory.
It's obviously not that philosophers have never heard of 'what if this dispute isn't substantive?? what if it's merely verbal??'.
They hear about this constantly. This is one of the most basic and common things they argue about. Analytic philosophers sometimes seem to be trying to one-up each other about how deflationary and anti-realist they can be. (See "the picture of reality as an amorphous lump".) Other times, they seem to relish contrarian opportunities to show how metaphysically promiscuous they can be.
I do think LW strikingly outperforms analytic philosophy. But the reason is definitely not 'analytic philosophers have literally never considered being more deflationary'.
Arguably the big story of 20th-century analytic philosophy is precisely 'folks like the logical positivists and behaviorists and Quineans and ordinary language philosophers express tons of skepticism about whether all these philosophical disputes are substantive, and they end up dominating the landscape for many decades, until in the 1980s the intellectual tide starts turning around'.
Notably, I think the tide was right to turn around. I think mid-20th-century philosophers' skepticism (even though it touched on some very LW-y themes!) was coming from a correct place on an intuitive level, but their arguments for rejecting metaphysics were total crap. I consider it a healthy development that philosophy stopped prejudicially rejecting all 'unsciencey [LW · GW]' things, and started demanding better arguments.
Why does LW outperform analytic philosophy? (Both in terms of having some individuals who have made surprisingly large progress on traditional philosophical questions, and in terms of the community as a whole successfully ending up with a better baseline set of positions and heuristics than you see in analytic philosophy. Taking into account that LW is putting relatively few person-hours into philosophy, that many LWers lack formal training in philosophy, etc.)
I suspect it's a few subtler differences.
- "Something to protect [LW · GW]" is very much in the water here. It's normal and OK to actually care in your bones about figuring out which topics are unimportant—care in a tangible "lives are on the line" sort of way—and to avoid those.
No one will look at you funny if you make big unusual changes to your life to translate your ideas into practice [? · GW]. If you're making ethics a focus area, you're expected to actually get better results, and if you don't, it's not just a cute self-deprecating story to tell at dinner parties.
- LW has a culture of ambition, audacity, and 'rudeness', and historically (going back to Eliezer's sequence posts) there's been an established norm of 'it's socially OK to dive super deep into philosophical debates' and 'it's socially OK to totally dismiss and belittle philosophical debates when they seem silly to you'.
I... can't think of another example of a vibrant intellectual community in the last century that made both of those moves 'OK'? And I think this is a pretty damned important combination. You need both moves to be fully available.
- Likewise, LW has a culture of 'we love systematicity and grand Theories of Everything!' combined with the high level of skepticism and fox-ishness encouraged in modern science.
There are innumerable communities that have one or the other, but I think the magic comes from the combination of the two, which can keep a community from flanderizing in one direction or the other.
- More specifically, LWers are very into Bayesianism, and this actually matters a hell of a lot.
E.g., I think the lack of a background 'all knowledge requires thermodynamic work [LW · GW]' model in the field explains the popularity of epiphenomenalism-like [LW · GW] views in philosophy of mind.
And again, there are plenty of Bayesians in academic philosophy. There's even Good and Real, the philosophy book that independently discovered many of the core ideas in the sequences. But the philosophers of mind mostly don't study epistemology in depth, and there isn't a critical mass of 'enough Bayesians in analytic philosophy that they can just talk to each other and build larger edifices everywhere without constantly having to return to 101-level questions about why Bayes is good'.
- This maybe points at an underlying reason that academic philosophy hasn't converged on more right answers: some of those answers require more technical ability than is typically expected in analytic philosophy. So when someone publishes an argument that's pretty conclusive, but requires strong technical understanding and well-honed formal intuitions, it's a lot more likely the argument will go ignored, or will take decades (rather than months) to change minds. More subtly, the kinds of questions and interests that shape the field are ones that are (or seem!!) easier to tackle without technical intuitions and tools.
Ten years ago, Marcus Hutter made a focused effort to bring philosophers up to speed on Solomonoff induction and AIXI. But his paper has only been cited 96 times (including self-citations and citations by EAs and non-philosophers), while Schaffer's 2010 paper on whether wholes are metaphysically prior to their parts has racked up 808 citations. This seems to reflect a clear blind spot.
- A meta-explanation: LW was founded by damned good thinkers like Eliezer, Anna, Luke M, and Scott who (a) had lots of freedom to build a new culture from scratch (since they were just casually sharing thoughts with other readers of the same blog, not trying to win games within academia's existing norms), and (b) were smart enough to pick a pretty damned good mix of norms.
I don't think it's a coincidence that all these good things came together at once. I think there was deliberate reflection about what good thinking-norms and discussion-norms look like, and I think this reflection paid off in spades.
I think you can get an awful lot of the way toward understanding the discrepancy by just positing that communities try to emulate their heroes, and that Anna is a better hero than Leibniz or Kant (if only by virtue of being more recent and therefore able to build on better edifices of knowledge). And unlike most recent philosophical heroes, LW's heroes were irreverent and status-blind enough to create something closer to a clean break with the errors of past philosophy, keeping the good while thoroughly shunning and stigmatizing the clearly-bad stuff. Otherwise it's too easy for any community that drinks deeply of the good stuff in analytic philosophy to end up imbibing the bad memes too, and recapitulating the things that make analytic philosophy miss the mark pretty often.
Weirdly, when I imagine interventions that could help philosophy along [LW · GW], I feel like philosophy's mild academic style gets in the way?
When I think about why LW was able to quickly update toward good decision-theory methods and views, I think of posts like "Newcomb's Problem and Regret of Rationality [LW · GW]" that sort of served as a kick in the pants, an emotional reminder: "hold on, this line of thinking is totally bonkers." The shortness and informality are good, not just for helping system 1 sit up and pay attention, but for encouraging focus on a simple stand-alone argument that's agnostic to the extra theory and details you could then tack on.
Absent some carefully aimed kicks in the pants, people are mostly happy and content to stick with the easy, cognitively natural grooves human minds find themselves falling into.
Of course, if you just dial up emotional kicks in the pants to 11, you end up with Twitter culture, not LW. So this seems like another smart-founder effect to me: it's important that smart self-aware people chose very specific things to carefully and judiciously kick each other in the pants over.
(The fact that LW is a small community surely helps when it comes to not being Twitter. Larger communities are more vulnerable to ideas getting watered down and/or viral-ized.)
Compare Eliezer's comically uncomplicated "RATIONALISTS SHOULD WIN" argument to the mild-mannered analytic-philosophy version.
(Which covers a lot of other interesting topics! But it's not clear to me that this has caused a course-correction yet. And the field's course-correction should have occurred in 2008–2009, at the latest, not 2018.)
(Also, I hear that the latter paper was written by someone socially adjacent to the rationalists? And they cite MIRI papers. So I guess this progress also might not have happened without LW.)
(Also, Greene's paper of course isn't the first example of an analytic philosopher calling for something like "success-first decision theory". As the paper notes, this tradition has a long history. I'm not concerned with priority here; my point in comparing Greene's paper to Eliezer's blog post is to speak to the sociological question of why, in this case, a community of professionals is converging on truth so much more slowly than a community of mostly-hobbyists.)
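To make the object-level "should win" point concrete, here is a minimal toy sketch (the 99% predictor accuracy is an illustrative assumption; the $1,000,000 / $1,000 payoffs are the standard ones): agents who reliably one-box walk away with roughly $990,000 on average, while reliable two-boxers get roughly $11,000.

```python
# Toy Newcomb's-problem simulation (illustrative sketch; the 99% predictor
# accuracy is an assumption, and the payoffs are the standard ones).
import random

def average_payoff(strategy, predictor_accuracy=0.99, trials=100_000):
    """Average payoff for an agent that always follows `strategy`
    ('one-box' or 'two-box') against a predictor that guesses the
    agent's choice correctly with probability `predictor_accuracy`."""
    other = 'two-box' if strategy == 'one-box' else 'one-box'
    total = 0
    for _ in range(trials):
        prediction = strategy if random.random() < predictor_accuracy else other
        opaque_box = 1_000_000 if prediction == 'one-box' else 0
        total += opaque_box if strategy == 'one-box' else opaque_box + 1_000
    return total / trials

print(average_payoff('one-box'))   # ~990,000
print(average_payoff('two-box'))   # ~11,000
```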
My story is sort of a Thiel-style capitalist account. It was hard to get your philosophy published and widely read/discussed except via academia. But academia had a lot of dysfunction that made it hard to innovate and change minds within that bad system.
The Internet and blogging made it much easier to compete with philosophers; a mountain of different blogs popped up; one happened to have a few unusually good founders; and once their stuff was out there and could compete, a lot of smart people realized it made more sense.
LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.
44 comments
Comments sorted by top scores.
comment by adrusi · 2021-03-10T05:46:38.033Z · LW(p) · GW(p)
I think an important piece that's missing here is that LW simply assumes that certain answers to important questions are correct. It's not just that there are social norms that say it's OK to dismiss ideas as stupid if you think they're stupid, it's that there's a rough consensus on which ideas are stupid.
LW has a widespread consensus on Bayesian epistemology, physicalist metaphysics, and consequentialist ethics (not an exhaustive list). And it has good reasons for favoring these positions, but I don't think LW has great responses to all the arguments against these positions. Neither do the alternative positions have great responses to counterarguments from the LW-favored positions.
Analytic philosophy in the academy is stuck with a mess of incompatible views, and philosophers only occasionally succeed in organizing themselves into clusters that share answers to a wide range of fundamental questions.
And they have another problem stemming from the incentives in publishing. Since academic philosophers want citations, there's an advantage to making arguments that don't rely on particular answers to questions where there isn't widespread agreement. Philosophers of science will often avoid invoking causation, for instance, since not everyone believes in it. It takes more work to argue in that fashion, and it constrains what sorts of conclusions you can arrive at.
The obvious pitfalls of organizing around a consensus on the answers to unsolved problems are obvious.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T16:08:14.253Z · LW(p) · GW(p)
I would draw an analogy like this one:
Five hundred extremely smart and well-intentioned philosophers of religion (some atheists, some Christians, some Muslims, etc.) have produced an enormous literature discussing the ins and outs of theism and the efficacy of prayer, and there continue to be a number of complexities and unsolved problems related to why certain arguments succeed or fail, even though various groups have strong (conflicting) intuitions to the effect "claim x is going to be true in the end".
In a context like this, I would consider it an important mark in favor of a group if they were 50% better than the philosophers of religion at picking the right claims to say "claim x is going to be true in the end", even if they are no better than the philosophers of religion at conclusively proving to a random human that they're right. (In fact, even if they're somewhat worse.)
To sharpen this question, we can imagine that a group of intellectuals learns that a nearby dam is going to break soon, flooding their town. They can choose to divide up their time between 'evacuating people' and 'praying'. Since prayer doesn't work (I say with confidence, even though I've never read any scholarly work about this), I would score a group in this context based on how well they avoid wasting scarce minutes on prayer. I would give little or no points based on how good their arguments for one allocation or another are, since lives are on the line and the end result is a clearer test. Having compelling-sounding arguments matters, but in the end the physical world judges you on whether you ended up getting the right answer, not on your reasoning per se.
To clarify a few things:
- Obviously, I'm not saying the difference between LW and analytic philosophy is remotely as drastic as the difference between LW and philosophy of religion. I'm just using the extreme example to highlight a qualitative point.
- Obviously, if someone comes to this thread saying 'but two-boxing is better than one-boxing', I will reply by giving specific counter-arguments (both formal and heuristic), not by just saying 'my intuition is better than yours!' and stopping there. And obviously I don't expect a random philosopher to instantly assume I'm correct that LWers have good intuitions about this, without spending a lot of time talking with us. I can notice and give credit to someone who has a good empirical track record (by my lights), without expecting everyone on the Internet to take my word for it.
- Obviously, being a LWer, I care about heuristics of good reasoning. :) And if someone gives sufficiently bad reasons for the right answer, I will worry about whether they're going to get other answers wrong in the future.
But also, I think there's such a thing as having good built-up intuitions about what kinds of conclusions end up turning out to be true, and about what kinds of evidence tend to deserve more weight than other kinds of evidence. This might actually be the big thing LW has over analytic philosophy, so I want to call attention to it and encourage people to poke at what this thing is.
↑ comment by adrusi · 2021-03-10T23:36:43.370Z · LW(p) · GW(p)
I worry that this doesn't really end up explaining much. We think that our answers to philosophical questions are better than what the analytics have come up with. Why? Because they seem intuitively to be better answers. What explanation do we posit for why our answers are better? Because we start out with better intuitions.
Of course our intuitions might in fact be better, as I (intuitively) think they are. But that explanation is profoundly underwhelming.
This might actually be the big thing LW has over analytic philosophy, so I want to call attention to it and encourage people to poke at what this thing is.
I'm not sure what you mean here, but maybe we're getting at the same thing. Having some explanation for why we might expect our intuitions to be better would make this argument more substantive. I'm sure that anyone can give explanations for why their intuitions are more likely to be right, but it's at least more constraining. Some possibilities:
- LWers are more status-blind, so their intuitions are less distorted by things that are not about being right
- Many LWers have a background in non-phil-of-mind cognitive sciences, like AI, neuroscience, and psychiatry, which leads them to believe that some ways of thinking are more apt to lead to truth than others, and then to adopt the better ones
- LWers are more likely than analytic philosophers to have extensive experience in a discipline where you get feedback on whether you're right, rather than merely feedback on whether others think you are right, and that might train their intuitions in a useful direction.
I'm not confident that any of these are good explanations, but they illustrate the sort of shape of explanation that I think would be needed to give a useful answer to the question posed in the article.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T23:49:31.273Z · LW(p) · GW(p)
Those seem like fine partial explanations to me, as do the explanations I listed in the OP. I expect multiple things went right simultaneously; if it were just a single simple tweak, we would expect many other groups to have hit on the same trick.
↑ comment by TAG · 2021-03-11T00:11:25.448Z · LW(p) · GW(p)
Many LWers have a background in non-phil-of-mind cognitive sciences, like AI, neuroscience and psychiatry, which leads them to believe that some ways of thinking are more apt to lead to truth than others, and then adopt the better ones
LWers are more likely than analytic philosophers to have extensive experience in a discipline where you get feedback on whether you’re right, rather than merely feedback on whether others think you are right, and that might train their intuitions in a useful direction.
It's common for people from other backgrounds to get frustrated with philosophy. But that isn't a good argument that philosophy is being done wrong. Since it is a separate discipline from science, engineering, and so on, there is no particular reason to think that the same techniques will work. If there are reasons why some Weird Trick would work across all disciplines, then it would work in philosophy. But is there one weird trick?
↑ comment by TAG · 2021-03-10T17:38:30.502Z · LW(p) · GW(p)
Having compelling-sounding arguments matters, but in the end the physical world judges you on whether you ended up getting the right answer, not on your reasoning per se.
There is a set of claims that LW holds to be true, and a set that can be tested directly and unambiguously -- where "physical reality judges you" -- and they are not the same set. Ask yourself how many LessWrongian claims other than Newcomb's are directly testable.
The pragmatic or "winning" approach just doesn't go far enough.
You can objectively show that a theory succeeds or fails at predicting observations, and at the closely related problem of achieving practical results. It is less clear whether an explanation succeeds in explaining, and less clear still whether a model succeeds in corresponding to the territory. The lack of a test for correspondence per se, i.e. the lack of an independent "standpoint" from which the map and the territory can be compared, is the major problem in scientific epistemology. And the lack of direct testability is one of the things that characterises philosophical problems as opposed to scientific ones -- you can't test ethics for correctness, you can't test personal identity, you can't test correspondence-to-reality separately from prediction-of-observation -- so the "winning" or pragmatic approach is a particularly bad fit for philosophy.
Pragmatism, the "winning" approach, could form a basis of epistemology if the scope of epistemology were limited only to the things it can in fact prove, such as claims about future observations. Instrumentalism and logical positivism are well-known forms of this approach. But rationalism rejects those approaches!
If you can't make a firm commitment to instrumentalism, then you're in the arena where, in the absence of results, you need to use reason to persuade people -- you can't have it both ways.
comment by Rob Bensinger (RobbBB) · 2021-03-10T15:42:55.041Z · LW(p) · GW(p)
A conversation prompted by this post (added: and "What I'd Change About Different Philosophy Fields [LW · GW]") on Twitter:
______________________
Ben Levinstein: Hmm. As a professional analytic philosopher, I find myself unable to judge a lot of this. I think philosophers often carve out sub-communities of varying quality and with varying norms. I read LW semi regularly but don't have an account and generally wouldn't say it outperforms.
Rob Bensinger: An example of what I have in mind: I think LW is choosing much better philosophical problems to work on than truthmakers, moral internalism, or mereology. I also think it's very bad that most decision theorists two-box, or that anyone worries about whether teleportation is death.
If the philosophical circles you travel in would strongly agree with all that, then I might agree they're on par with LW, and we might just be looking at different parts of a very big elephant.
Ben Levinstein: That could be. I realized I had no idea whether your critique of metaphysics, for instance, was accurate or not because I'm pretty disconnected from most of analytic metaphysics. Just don't know what's going on outside of the work of a very select few.
Rob Bensinger: (At least, most decision theorists two-boxed as of 2009. Maybe things have changed a lot!)
Ben Levinstein: I don't think that's changed, but I also tend not to buy the LW explanations for why decision theorists are thinking along the lines they do. E.g., Joyce and others definitely think they are trying to win but think the reference classes are wrong.
Not taking a side on the merits there, but just saying I have the impression from LW that their understanding of what CDT-defenders take the rules of the game to be tends to be inaccurate.
Rob Bensinger: Sounds like a likely sort of thing for LW to get wrong. Knowing why others think things is a hard problem. Gotta get Joyce posting on LW. :)
Ben Levinstein: I also think every philosopher I know who has looked at Solomonoff just doesn't think it's that good or interesting after a while. We all come away kind of deflated.
Rob Bensinger: I wonder if you feel more deflated than the view A Semitechnical Introductory Dialogue on Solomonoff Induction [LW · GW] arrives at? I think Solomonoff is good but not perfect. I'm not sure whether you're gesturing at a disagreement or a different way of phrasing the same position.
Ben Levinstein: I'll take a look! Basically, after working through the technicals I didn't feel like it did much of anything to solve any deep philosophical problems related to induction despite being a very cool idea. Tom Sterkenburg had some good negative stuff, e.g., http://philsci-archive.pitt.edu/12429/
↑ comment by Rob Bensinger (RobbBB) · 2021-03-11T16:51:55.034Z · LW(p) · GW(p)
Ben Levinstein:
I guess I have a fair amount to say, but the very quick summary of my thoughts on SI remains the same:
1. Solomonoff Induction is really just subjective Bayesianism + Cromwell's rule + prob 1 that the universe is computable. I could be wrong about the exact details here, but I think this could even be exactly correct. Like for any subjective Bayesian prior that respects Cromwell's rule and is sure the universe is computable there exists some UTM that will match it. (Maybe there's some technical tweak I'm missing, but basically, that's right.) So if that's so, then SI doesn't really add anything to the problem of induction aside from saying that the universe is computable.
2. EY makes a lot out of saying you can call shenanigans with ridiculous-looking UTMs. But I mean, you can do the same with ridiculous looking priors under subjective bayes. Like, ok, if you just start with a prior of .999999 that Canada will invade the US, I can say you're engaging in shenanigans. Maybe it makes it a bit more obvious if you use UTMs, but I'm not seeing a ton of mileage shenanigans-wise.
3. What I like about SI is that it basically is just another way to think about subjective bayesianism. Like you get a cool reframing and conceptual tool, and it is definitely worth knowing about. But I don't at all buy the hype about solving induction and even codifying Ockham's Razor.
4. Man, as usual I'm jealous of some of EY's phrase-turning ability: that line about being a young intelligence with just two bits to rub together is great.
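(For readers who want the formal object behind point 1: stated loosely, the Solomonoff prior induced by a universal prefix machine U is the following, and, up to technicalities about semimeasures, it can be rewritten as a Bayesian mixture over computable environments, with the choice of U fixing the prior weights. This is just the textbook definition, sketched as a reminder; the fine print is in the Sterkenburg paper linked above.)

```latex
% The Solomonoff prior for a universal prefix machine U (stated loosely);
% \ell(p) is the length of program p in bits.
\[
  M_U(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-\ell(p)}
\]
% Up to technicalities about semimeasures, this is a Bayesian mixture over
% the countable class of (semi)computable environments \nu, with every
% environment getting a strictly positive weight (Cromwell's rule), and the
% weights being determined by the choice of U:
\[
  M_U(x) \;=\; \sum_{\nu} w^{U}_{\nu}\, \nu(x), \qquad w^{U}_{\nu} > 0 .
\]
```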
comment by Rob Bensinger (RobbBB) · 2021-03-10T00:46:09.654Z · LW(p) · GW(p)
There are about 6700 US philosophy faculty, versus about 6000 LessWrong commenters to date
Ruby from the LW team tells me that there are 5,964 LW users who have made at least 4 (non-spam) comments ever.
The number of users with 10+ karma who have been active in the last 6 months is more like 1000—1500.
comment by Shmi (shminux) · 2021-03-10T04:07:28.594Z · LW(p) · GW(p)
These are some extraordinary claims. I wonder if there is a metric that mainstream analytical philosophers would agree to use to evaluate statements like
LW outperform analytic philosophy
and
LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.
Without agreed-upon evaluation criteria, this is just tooting one's own horn, wouldn't you agree?
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T17:44:35.908Z · LW(p) · GW(p)
On the topic of "horn-tooting": see my philosopher-of-religion analogy [LW(p) · GW(p)]. It would be hard to come up with a simple metric that would convince most philosophers of religion "LW is better than you at thinking about philosophy of religion". If you actually wanted to reach consensus about this, you'd probably want to start with a long series of discussions about object-level questions and thinking heuristics.
And in the interim, it shouldn't be seen as a status grab for LWers to toot their own horn about being better at philosophy of religion. Toot away! Every toot is an opportunity to be embarrassed later when the philosophers of religion show that they were right all along.
It would be bad to toot if your audience were so credulous that they'll just take your word for it, or if the social consequences of making mistakes were too mild to disincentivize empty boasts. But I don't think LW or analytic philosophy are credulous or forgiving enough to make this a real risk.
If anything, there probably isn't enough horn-tooting in those groups. People are too tempted to false modesty, or too tempted to just steer clear of the topic of relative skill levels. This makes it harder to get feedback about people's rationality and meta-rationality, and it makes a lot of coordination problems harder.
↑ comment by Shmi (shminux) · 2021-03-10T19:39:59.471Z · LW(p) · GW(p)
This sounds like a very Eliezer-like approach: "I don't have to convince you, a professional who spent decades learning and researching the subject matter, here is the truth, throw away your old culture and learn from me, even though I never bothered to learn what you learned!" While there are certainly plenty of cases where this is valid, in any kind of evidence-based science the odds of it being successful are slim to none (the infamous QM sequence is one example of a failed foray like that. Well, maybe not failed, just uninteresting). I want to agree with you on the philosophy of religion, of course, because, well, if you start with a failed premise, you can spend all your life analyzing noise, like the writers of the Talmud did. But an outside view says that the Chesterton fence of an existing academic culture is there for a reason, including the philosophical traditions dating back millennia.
An SSC-like approach seems much more reliable in terms of advancing a particular field. Scott spends an inordinate amount of time understanding the existing fences, how they came to be and why they are still there, before advancing an argument for why it might be a good idea to move them, and how to test whether the move is good. I think that leads to him being taken much more seriously by the professionals in the area he writes about.
I gather that both approaches have merit, as there is generally no arguing with someone who is in a "diseased discipline", but one has to be very careful affixing that label to a whole field of research, even if it seems obvious to an outsider. Or to an insider, if you follow the debates about whether string theory is a diseased field in physics.
Still, except for the super-geniuses among us, it is much safer to understand the ins and outs before declaring that the giga-IQ-hours spent by humanity on a given topic are a waste or a dead end. The jury is still out on whether Eliezer and MIRI in general qualify.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T19:56:00.317Z · LW(p) · GW(p)
Even if the jury's out, it's a poor courtroom that discourages the plaintiff, defendant, witnesses, and attorneys from sharing their epistemic state, for fear of offending others in the courtroom!
It may well be true that sharing your honest models of (say) philosophy of religion is a terrible idea and should never happen in public, if you want to have any hope of convincing any philosophers of religion in the future. But... well, if intellectual discourse is in as grim and lightless a state as all that, I hope we can at least have clear sights about how bad that is, and how much better it would be if we somehow found a way to just share our models of the field and discuss those plainly. I can't say it's impossible to end up in situations like that, but I can push for the conditional policy 'if you end up in that kind of situation, be super clear about how terrible this is and keep an eye out for ways to improve on it'.
You don't have to be extremely confident in your view's stability (i.e., whether you expect to change your view a lot based on future evidence) or its transmissibility in order to have a view at all. And if people don't share their views — or especially, if they are happier to share positive views of groups than negative ones, or otherwise have some systemic bias in what they share — the group's aggregate beliefs will be less accurate.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T17:05:29.575Z · LW(p) · GW(p)
So, see my conversation [LW(p) · GW(p)] with Ben Levinstein and my reply [LW(p) · GW(p)] to adrusi for part of my answer. An example of what I have in mind by 'LWers outperforming' is the 2009 PhilPapers survey: I'd expect a survey of LW users with 200+ karma to...
- ... have fewer than 9.1% of respondents endorse "skepticism" or "idealism" about the external world.
- ... have fewer than 13.7% endorse "libertarianism" about free will (roughly defined as the view "(1) that we do have free will, (2) that free will is not compatible with determinism, and (3) that determinism is therefore false").
- ... have fewer than 14.6% endorse "theism".
- ... have fewer than 27.1% endorse "non-physicalism" about minds.
- ... have fewer than 59.6% endorse "two boxes" in Newcomb's problem, out of the people who gave a non-"Other" answer.
- ... have fewer than 44% endorse "deontology" or "virtue ethics".
- ... have fewer than 12.2% endorse the "further-fact view" of personal identity (roughly defined as "the facts about persons and personal identity consist in some further [irreducible, non-physical] fact, typically a fact about Cartesian egos or souls").
- ... have fewer than 16.9% endorse the "biological view" of personal identity (which says that, e.g., if my brain were put in a new body, I should worry about the welfare of my old brainless body, not about the welfare of my mind or brain).
- ... have fewer than 31.1% endorse "death" as the thing that happens in "teletransporter (new matter)" thought experiments.
- ... have fewer than 37% endorse the "A-theory" of time (which rejects the idea of "spacetime as a spread-out manifold with events occurring at different locations in the manifold"), out of the people who gave a non-"Other" answer.
- ... have fewer than 6.9% endorse an "epistemic" theory of truth (i.e., a view that what's true is what's knowable, or known, or verifiable, or something to that effect).
This is in no way a perfect or complete operationalization, but it at least gestures at the kind of thing I have in mind.
↑ comment by Shmi (shminux) · 2021-03-10T20:13:42.521Z · LW(p) · GW(p)
Well, it looks like you declare "outperforming" by your own metric, not by anything generally accepted.
(Also, I take issue with the last two. The philosophical ideas about time are generally not about time, but about "time", i.e. about how humans perceive and understand passage of time. So distinguishing between A and B is about humans, not about time, unlike, say, Special and General Relativity, which provide a useful model of time and spacetime.
A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.)
Also, most people here, while giving lip service to non-libertarian views of free will, sneak it in anyway, as evidenced by relying on "free choice" in nearly all decision theory discussions.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T20:36:17.293Z · LW(p) · GW(p)
Well, it looks like you declare "outperforming" by your own metric, not by anything generally accepted.
I am indeed basing my view that philosophers are wrong about stuff on investigating the specific claims philosophers make.
If there were a (short) proof that philosophers were wrong about X that philosophers already accepted, I assume they would just stop believing X and the problem would be solved.
The philosophical ideas about time are generally not about time, but about "time", i.e. about how humans perceive and understand passage of time.
Nope, the 20th-century philosophical literature discussing time is about time itself, not about (e.g.) human psychological or cultural perceptions of time.
There is also discussion of humans' perception and construction of time -- e.g., in Kant -- but that's not the context in which A-theory and B-theory are debated.
The A-theory and B-theory were introduced in 1908, before many philosophers (or even physicists) had heard of special relativity; and 'this view seems unbelievably crazy given special relativity' is in fact one of the main arguments cited in the literature against the A-theory of time.
A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.)
"It's raining" is true even if you can't check. Also, what's testable for one person is different from what's testable for another person. Rather than saying that different things are 'true' or 'false' or 'neither true nor false' depending on which person you are, simpler to just say that "snow is white" is true iff snow is white.
It's not like there's any difficulty in defining a predicate that satisfies the correspondence theory of truth, and this predicate is much closer to what people ordinarily mean by "true" than any epistemic theory of truth's "true" is. So demanding that we abandon the ordinary thing people mean by "truth" just seems confusing and unnecessary.
Doubly so when there's uncertainty or flux about which things are testable. Who can possibly keep track of which things are true vs. false vs. meaningless, when the limits of testability are always changing? Seems exhausting.
Also, most people here, while giving lip service to non-libertarian views of free will, sneak it in anyway, as evidenced by relying on "free choice" in nearly all decision theory discussions.
This is a very bad argument. Using the phrase "free choice" doesn't imply that you endorse libertarian free will.
↑ comment by Shmi (shminux) · 2021-03-10T21:12:04.649Z · LW(p) · GW(p)
Well, we may have had this argument before, likely more than once, so probably no point rehashing it. I appreciate you expressing your views succinctly though.
comment by lsusr · 2021-03-10T03:29:02.361Z · LW(p) · GW(p)
Just yesterday, a friend commented on the exceptionally high quality of the comments I get by posting on this website. Of your many good points, these are my favorites.
Likewise, LW has a culture of 'we love systematicity and grand Theories of Everything!' combined with the high level of skepticism and fox-ishness encouraged in modern science.
⋮
This maybe points at an underlying reason that academic philosophy hasn't converged on more right answers: some of those answers require more technical ability than is typically expected in analytic philosophy.
⋮
…unlike most recent philosophical heroes, LW's heroes were irreverent and status-blind enough to create something closer to a clean break with the errors of past philosophy, keeping the good while thoroughly shunning and stigmatizing the clearly-bad stuff.
comment by romeostevensit · 2021-03-10T12:51:19.092Z · LW(p) · GW(p)
Does anyone know of any significant effort to collect 'cute conceptual questions' in one place?
comment by Chris_Leong · 2021-03-10T03:07:40.697Z · LW(p) · GW(p)
I thought you made some excellent points about how many of these ideas are already in the philosophical memespace, but just haven't gained dominance.
In Newcomb's Problem and Regret of Rationality [LW · GW], Eliezer's argument is pretty much "I can't provide a fully satisfactory solution, so let's just forget about the theoretical argument, which we could never be certain about anyway, and use common sense". While I agree that this is a good principle, philosophers who discuss the problem generally aren't trying to figure out what they'd do if they were actually in the situation, but to discover what this problem tells us about the principles of decision theory. The pragmatic solution wouldn't meet this aim. Further, the pragmatic principle would suggest not paying in Counterfactual Mugging.
I guess I have a somewhat interesting perspective on this, given that I don't find the standard LW solutions to Newcomb's or Counterfactual Mugging very satisfying, and I've proposed my own approaches, which haven't gained much traction but which I consider far more satisfying. Should I take the outside view and assume that I'm way too overconfident about being correct (since I have definitely been in the past, and overconfidence is very common among people who propose theories in general)? Or should I take the inside view and downgrade my assessment of how good LW is as a community for philosophy discussion?
↑ comment by Vaniver · 2021-03-10T23:37:49.724Z · LW(p) · GW(p)
Also note that Eliezer's "I haven't written this out yet" was in 2008, and by 2021 I think we have some decent things written on FDT, like Cheating Death in Damascus and Functional Decision Theory: A New Theory of Instrumental Rationality.
You can see some responses here and here [LW · GW]. I find them uncompelling.
↑ comment by Gordon Seidoh Worley (gworley) · 2021-03-10T16:09:41.985Z · LW(p) · GW(p)
I think there's something like: LessWrong sometimes tends too hard towards pragmatism and jumps past things that are deserving of closer consideration.
To be fair, though, I think LessWrong does a better job than academic philosophy of being pragmatic enough to be useful for having an impact on the world. I just note that, like with anything, sometimes the balance goes too far, and things that deserve closer consideration get passed over out of a desire to get on with things and say something actionable.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T18:11:55.203Z · LW(p) · GW(p)
I think there's something like: LessWrong sometimes tends too hard towards pragmatism and jumps past things that are deserving of closer consideration.
I agree with this. I especially agree that LWers (on average) are too prone to do things like:
- Hear Eliezer's anti-zombie [LW · GW] argument and conclude "oh good, there's no longer anything confusing about the Hard Problem of Consciousness!".
- Hear about Tegmark's Mathematical Universe Hypothesis and conclude "oh good, there's no longer anything confusing about why there's something rather than nothing!".
On average, I think LWers are more likely to make important errors in the direction of 'prematurely dismissing things that sound un-sciencey' than to make important errors in the direction of 'prematurely embracing un-sciencey things'.
But 'tendency to dismiss things that sound un-sciencey' isn't exactly the dimension I want LW to change on, so I'm wary of optimizing LW in that direction; I'd much rather optimize it in more specific directions that are closer to the specific things I think are true and good.
↑ comment by TAG · 2021-03-10T20:00:59.943Z · LW(p) · GW(p)
Hear Eliezer’s anti-zombie argument and conclude “oh good, there’s no longer anything confusing about the Hard Problem of Consciousness!”.
The fact that so many rationalists have made that mistake is evidence against the claim that rationalists are superior philosophers.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T20:07:18.578Z · LW(p) · GW(p)
Yep!
↑ comment by Chris_Leong · 2021-03-10T03:10:06.083Z · LW(p) · GW(p)
In short, my position on Newcomb's is as follows: Perfect predictors require determinism, which means that strictly there's only one decision that you can make. To talk about choosing between options requires us to construct a counterfactual to compare against. If we construct a counterfactual where you make a different choice, and we want it to be temporally consistent, then given determinism we have to edit the past. Consistency may force us to also edit Omega's prediction and hence the money in the box, but all this is fine since it is a counterfactual. CDTers may deny the need for consistency, but then they'd have to justify ignoring changes in past brain state *despite* the presence of a perfect predictor, which may have a way of reading this state.
As far as I'm concerned, the Counterfactual Prisoner's Dilemma [LW · GW] provides the most satisfying argument for taking the Counterfactual Mugging seriously.
comment by TAG · 2021-03-10T02:39:59.798Z · LW(p) · GW(p)
(b) … there’s a culture of being relaxed, or something to that effect, in philosophy
That is possibly a result of mainstream philosophy being better at metaphilosophy... in the sense of being more skeptical. Once you have rejected the idea that you can converge on The One True Epistemology, you have to give up on the "missionary work" of telling people that they are wrong according to TOTE, and that's your "relaxation".
Philosophers are good at coming up with distinctions. They are not good at saying, “the debate about the true meaning of knowledge is inherently silly; let’s collaboratively map out concept space instead.”
If that means giving up on traditional epistemology, it's not going to help. The thing about traditional terms like "truth" and "knowledge" is that they connect to traditional social moves, like persuasion and agreement. If you can't put down the table stakes of truth and proof, you can't expect the payoff of agreement.
comment by Tyrrell_McAllister · 2021-03-10T01:48:17.570Z · LW(p) · GW(p)
LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.
LW should not be comparing itself to Plato. It's trying to do something different. The best of what Plato did is, for the most part, orthogonal to what LW does.
You can take the LW worldview totally onboard and still learn a lot from Plato that will not in any way conflict with that worldview.
Or you may find Plato totally useless. But it won't be your adoption of the LW memeplex alone that determines which way you go.
comment by TAG · 2021-03-10T02:48:03.360Z · LW(p) · GW(p)
...a background ‘all knowledge requires thermodynamic work’ model ...
Assumes physicalism, which epiphenomenalists don't.
If you want to talk them out of epiphenomenalism, you need to talk them into physicalism, and you can do that by supplying a reductive explanation of consciousness. But you, the rationalists, don't have one ... it's not among your achievements.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T17:50:48.297Z · LW(p) · GW(p)
Assumes physicalism, which epiphenomenalists don't.
Philosophy papers presumably obey thermodynamics, so it should be possible to speak of the physical processes that produce different sentences in philosophy papers, and why we should think of those processes as more or less truth-tracking.
Actual epiphenomenalism would mean that you can't have any causal influence on philosophy papers; so I assume we're not going for anything that crazy.
But if the view is something more complicated, like panprotopsychism, then I'd want to hear the story of how the non-thermodynamic stuff interacts with the thermodynamic stuff to produce true sentences in philosophy papers.
But you, the rationalists, don't have one ... it's not among your achievements.
You don't need a full theory of consciousness, personal identity, or quantum gravity in order to say with confidence that ghosts aren't real. Similarly, uncertainty about how consciousness works shouldn't actually translate into uncertainty about epiphenomenalism. Compare [LW · GW]:
An oft-encountered mode of privilege is to try to make uncertainty within a space, slop outside of that space onto the privileged hypothesis. For example, a creationist seizes on some (allegedly) debated aspect of contemporary theory, argues that scientists are uncertain about evolution, and then says, “We don’t really know which theory is right, so maybe intelligent design is right.” But the uncertainty is uncertainty within the realm of naturalistic theories of evolution—we have no reason to believe that we’ll need to leave that realm to deal with our uncertainty, still less that we would jump out of the realm of standard science and land on Jehovah in particular. That is privileging the hypothesis—taking doubt within a normal space, and trying to slop doubt out of the normal space, onto a privileged (and usually discredited) extremely abnormal target.
↑ comment by TAG · 2021-03-10T19:58:35.837Z · LW(p) · GW(p)
philosophy papers presumably obey thermodynamics, so it should be possible to speak of the physical processes that produce different sentences in philosophy papers, and why we should think of those processes as more or less truth-tracking.
Actual epiphenomenalism would mean that you can’t have any causal influence on philosophy papers; so I assume we’re not going for anything that crazy.
I don't know why you keep bringing that up. Epiphenomenalists believe they are making true statements, and they believe their statements aren't caused by consciousness, so they have to believe that their statements are caused physically by a mechanism that is truth-seeking. And they have to believe that the truth of their statements about consciousness is brought about by some kind of parallelism with consciousness. Which is weird.
But you don't refute them by telling them "there is a physical explanation for you writing that paper". They already know that.
↑ comment by TAG · 2021-03-10T18:22:34.995Z · LW(p) · GW(p)
Are you willing to say that illusionism is as obviously wrong as epiphenomenalism?
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T18:24:39.074Z · LW(p) · GW(p)
No, though I'm willing to say that illusionism is incredibly weird and paradoxical-seeming and it makes sense to start with a strong prior against it.
↑ comment by TAG · 2021-03-10T18:29:03.967Z · LW(p) · GW(p)
Why should "my consciousness doesn't exist" be less crazy than "my consciousness exists but has no causal powers"?
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T19:07:12.213Z · LW(p) · GW(p)
In lieu of recapitulating the full argument, I'll give an intuition pump: 'reality doesn't exist' should get a much higher subjective probability than 'leprechauns exist' or 'perpetual motion machines exist', paradoxical though that sounds. The reason being that we have a pretty clear idea of what 'leprechauns' and 'perpetual motion machines' are, so we can be clearer about what it means for them not to exist; we're less likely to be confused on that front, it's more likely to be a well-specified question with a simple factual answer.
Whereas 'reality' is a very abstract and somewhat confusing term, and it seems at least somewhat likelier (even if it's still extremely unlikely!) that we'll realize it was a non-denoting term someday, though it's hard to imagine (and in fact it sounds like nonsense!) from our present perspective.
In this analogy, 'epiphenomenalism is true' is like 'leprechauns exist', while 'illusionism is true' is like 'reality doesn't exist'.
From my perspective, the first seems to be saying something pretty precise and obviously false. The latter is a strange enough claim, and concerns a confusing enough concept ('phenomenal consciousness'), that it's harder to reasonably say with extreme confidence that it's false. Even if our inside view says that we couldn't possibly be wrong about this, we should cautiously hedge (at least a little) in our view outside the argument [LW · GW].
And then we get to the actual arguments for and against illusionism, which I think (collectively) show that illusionism is very likely true. But I'm also claiming that even before investigating illusionism (but after seeing why epiphenomenalism doesn't make sense), it should be possible to see that illusionism is not like 'leprechauns exist'.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T19:10:28.962Z · LW(p) · GW(p)
I do think that a reasonable person can start off with a much higher prior probability on epiphenomenalism than on illusionism (and indeed, many intellectuals have done so), because the problems with epiphenomenalism are less immediately obvious (to many people) than the problems with illusionism. But by the time you've finished reading the Sequences, I don't think you can reasonably hold that position anymore.
↑ comment by TAG · 2021-03-10T19:22:18.229Z · LW(p) · GW(p)
I've read the sequences, and they don't argue for illusionism, and they don't argue for any other positive solution to the HP.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-10T19:41:12.929Z · LW(p) · GW(p)
They argue against epiphenomenalism, and introduce a bunch of other relevant ideas and heuristics.
Including [LW · GW] the aforementioned:
My attitude toward questions of existence and meaning was nicely illustrated in a discussion of the current state of evidence for whether the universe is spatially finite or spatially infinite, in which James D. Miller chided Robin Hanson:
"Robin, you are suffering from overconfidence bias in assuming that the universe exists. Surely there is some chance that the universe is of size zero."
To which I replied:
"James, if the universe doesn’t exist, it would still be nice to know whether it’s an infinite or a finite universe that doesn’t exist."
Ha! You think pulling that old “universe doesn’t exist” trick will stop me? It won’t even slow me down!
It’s not that I’m ruling out the possibility that the universe doesn’t exist. It’s just that, even if nothing exists, I still want to understand the nothing as best I can. My curiosity [LW · GW] doesn’t suddenly go away just because there’s no reality, you know!
The nature of “reality” is something about which I’m still confused, which leaves open the possibility that there isn’t any such thing. But Egan’s Law still applies: “It all adds up to normality.” Apples didn’t stop falling when Einstein disproved Newton’s theory of gravity.
Sure, when the dust settles, it could turn out that apples don’t exist, Earth doesn’t exist, reality doesn’t exist. But the nonexistent apples will still fall toward the nonexistent ground at a meaningless rate of 9.8 m/s².
You say the universe doesn’t exist? Fine, suppose I believe that—though it’s not clear what I’m supposed to believe, aside from repeating the words [LW · GW].
Now, what happens if I press this button?
↑ comment by TAG · 2021-03-10T19:47:36.257Z · LW(p) · GW(p)
By "positive solution" I mean a claim about what is the correct theory, not a claim about what is the wrong theory. I am well aware that he argues against epiphenomenalism.
Of course, it is far from the case that the heuristics you mentioned have led most or many people to illusionism.