What I'd change about different philosophy fields
post by Rob Bensinger (RobbBB) · 2021-03-08T18:25:30.165Z
Contents: metaphysics · decision theory · philosophy of mind (+ phenomenology) · philosophy of religion · ethics + value theory
[epistemic status: speculative conversation-starter]
My guess at the memetic shifts that would do the most to improve these philosophy fields' tendency to converge on truth:
metaphysics
1. Make reductive, 'third-person' models of the brain central to metaphysics discussion.
If you claim that humans can come to know X, then you should be able to provide a story sketch in the third person for how a physical, deterministic, evolved organism could end up learning X.
You don't have to go into exact neuroscientific detail, but it should be clear how a mechanistic cause-and-effect chain could result in a toy agent verifying the truth of X within a physical universe.
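As a toy illustration (my own sketch; the pebble-counting setup is invented purely for concreteness, not taken from any particular metaphysics paper), here is the shape such a third-person story can take for a trivial X like '2 + 2 = 4':

```python
def merge_and_count(pile_a, pile_b):
    """A mechanistic process: physically merge two piles, then count the
    result one item at a time. Each increment is a discrete causal event."""
    count = 0
    for _ in pile_a + pile_b:
        count += 1
    return count

# The toy agent ends up with an accurate belief because the counting
# procedure reliably covaries with the arithmetic fact -- no appeal to a
# first-person faculty of 'mathematical intuition' is required.
agent_believes_two_plus_two_is_four = (
    merge_and_count(["pebble1", "pebble2"], ["pebble3", "pebble4"]) == 4
)
print(agent_believes_two_plus_two_is_four)  # True
```

The demand is just that a story of roughly this shape be tellable, at least in sketch form, for whatever X you claim humans can come to know.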
2. Care less about human intuitions and concepts. Care more about the actual subject matter of metaphysics — ultimate, objective reality. E.g., only care about the concept 'truth' insofar as we have strong reason to think an alien would arrive at the exact same concept, because it's carving nature closer to its joints.
Conduct more tests to see which concepts look more joint-carving than others.
(I think current analytic metaphysics is actually better at 'not caring about human intuitions and concepts' than most other philosophy fields. I just think this is still the field's biggest area for future improvement, partly because it's harder to do this right in metaphysics.)
decision theory
As in metaphysics, it should be more expected that we think of decision theories in third-person terms. Can we build toy models of a hypothetical alien or robot that actually implements a given decision procedure?
In metaphysics, doing this helps us confirm that a claim is coherent and knowable. In decision theory, there's an even larger benefit: a lot of issues that are central to the field (e.g., logical uncertainty and counterlogicals) are easy to miss if you stay in fuzzy-human-intuitions land.
Much more so than in metaphysics, 'adopting a mechanistic, psychological perspective' in decision theory should often involve actual software experiments with different proposed algorithms — not because decision theory is only concerned with algorithms (it's fine for the field to care more about human decision-making than about AI decision-making), but because the algorithms are the gold standard for clarifying and testing claims.
(There have been lots of cases where decision theorists went awry because they under-specified a problem or procedure. E.g., the smoking lesion problem really needs a detailed unpacking of what step-by-step procedure the agent follows, and how 'dispositions to smoke' affect that procedure.)
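For concreteness, here is a minimal sketch of what such a software experiment might look like. Everything below is my own illustration rather than a canonical formalization: the probabilities are made up, and the model assumes the agent's disposition to smoke perfectly tracks the lesion.

```python
import random

random.seed(0)

def sample_world():
    """One possible world: a hidden lesion raises both the urge to smoke
    and the risk of cancer; smoking itself is causally inert."""
    lesion = random.random() < 0.5
    cancer = random.random() < (0.8 if lesion else 0.1)
    return lesion, cancer

def utility(smokes, cancer):
    # Smoking is mildly pleasant; cancer is very bad.
    return (10 if smokes else 0) + (-1000 if cancer else 0)

N = 100_000
worlds = [sample_world() for _ in range(N)]

# CDT-style estimate: treat the action as an intervention. Smoking doesn't
# change the cancer distribution, so average utility over all sampled worlds.
cdt_smoke = sum(utility(True, c) for _, c in worlds) / N
cdt_abstain = sum(utility(False, c) for _, c in worlds) / N

# EDT-style estimate: treat the action as evidence about the lesion.
# Toy assumption: disposition tracks the lesion exactly (lesion -> smokes).
smoker_cancers = [c for l, c in worlds if l]
abstainer_cancers = [c for l, c in worlds if not l]
edt_smoke = sum(utility(True, c) for c in smoker_cancers) / len(smoker_cancers)
edt_abstain = sum(utility(False, c) for c in abstainer_cancers) / len(abstainer_cancers)

print(f"CDT: EU(smoke)={cdt_smoke:.0f}, EU(abstain)={cdt_abstain:.0f}")  # smoking wins
print(f"EDT: EU(smoke)={edt_smoke:.0f}, EU(abstain)={edt_abstain:.0f}")  # abstaining wins
```

Writing even this much code forces the under-specified parts into the open: change how the disposition couples to the lesion, and the two verdicts can shift, which is exactly the ambiguity the prose statement of the problem hides.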
philosophy of mind (+ phenomenology)
1. Be very explicit about the fact that 'we have immediate epistemic access to things we know for certain' is a contentious, confusing hypothesis. Note the obvious difficulties with making this claim make sense in any physical reasoning system. Try to make sense of it in third-person models.
Investigate the claim thoroughly, and try to figure out how a hypothetical physical agent could update toward or away from it, if the agent was initially uncertain or mistaken about whether it possesses infallible direct epistemic access to things.
Be explicit about which other claims rely on the 'we have infallible immediate epistemic access' claim.
2. More generally, make philosophy of mind heavily the same field as epistemology.
The most important questions in these two fields overlap quite a bit, and it's hard to make sense of philosophy of mind without spending half (or more) of your time on developing a background account of how we come to know things. Additionally, I'd expect the field of epistemology to be much healthier if it spent less time developing theory, and more time applying theories and reporting back about how they perform in practice.
philosophy of religion
1. Shift the model from 'scholasticism' to 'missionary work'. The key thing isn't to converge with people who already 99% agree with you. Almost all effort should instead go into debating people with wildly different religious views (e.g., Christianity vs. Buddhism) and debating with the nonreligious. Optimize for departments' intellectual diversity and interdisciplinary bridge-building.
Divide philosophy of religion into 'universal consensus-seeking' (which is about debating the most important foundational assumptions of various religions with people of other faiths, with a large focus on adversarial collaborations and 101-level arguments) and 'non-universal-consensus studies' (which includes everything else, and is mostly marginalized and not given focus in the field).
2. Discourage talking about 'religions' or 'faiths'; instead, talk about specific claims/hypotheses. Rename the field 'philosophy of religious claims', if that helps.
When we say 'religion', (a) it creates the false impression that claims must be a package deal, so we can't incrementally update toward one specific claim without swallowing the entire package; and (b) it encourages people to think of claims like theism in community-ish or institution-ish terms, rather than in hypothesis-ish terms.
Christianity is not a default it's fine to assume; Christianity is a controversial hypothesis which most religious and secular authorities in the world reject. Christian philosophers need to move fast, as if their hair's on fire. The rival camps need to fight it out now and converge on which hypothesis is right, exactly like if there were a massive scientific controversy about which of twenty competing models of photosynthesis were true.
Consider popularizing this thought experiment:
"Imagine that we'd all suddenly been plopped on Earth with no memories, and had all these holy texts to evaluate. We only have three months to figure out which, if any, is correct. What would you spend the next three months doing?"
This creates some urgency, and also discourages complacency of the 'well, this has been debated for millennia, surely little old me can't possibly resolve all of this overnight' variety.
Eternal souls are at stake! People are dying every day! Until very recently, religious scholarship was almost uniformly shit! Assuming you can't possibly crack this open is lunacy.
ethics + value theory
1. Accept as a foundational conclusion of the field, 'human values seem incredibly complicated and messy; they're a giant evolved stew of competing preferences, attitudes, and feelings, not the kind of thing that can be captured in any short simple ruleset (though different rulesets can certainly perform better or worse as simplified idealizations).'
2. Stop thinking of the project of ethics as 'figure out which simple theory is True'.
Start instead thinking of ethics as a project of trying to piece together psychological models of this insanely complicated and messy thing, 'human morality'.
Binding exceptionless commitments matter to understanding this complicated thing; folk concepts like courage and honesty and generosity matter; taboo tradeoffs and difficult attempts to quantify, aggregate, and weigh relative well-being matter.
Stop picking a 'side' and then losing all interest in the parts of human morality that aren't associated with your 'side': these are all just parts of the stew, and we need to work hard to understand them and reconcile them just right, not sort ourselves into Team Virtue vs. Team Utility vs. Team Duty.
(At least, stop picking a side at that level of granularity! Biologists have long-standing controversies, but they don't look like 'Which of these three kinds of animal exists: birds, amphibians, or mammals?')
3. Once again, apply the reductive third-person lens to everything. 'If it's true that X is moral, how could a mechanistic robot learn that truth? What would "X is moral" have to mean in order for a cause-and-effect process to result in the robot discovering that this claim is true?'
4. Care less about the distinction between 'moral values' and other human values. There are certainly some distinguishing features, but these mostly aren't incredibly important or deep or joint-carving. In practice, it works better to freely bring in insights from the study of beauty, humor, self-interest, etc. rather than lopping off one slightly-arbitrary chunk of a larger natural phenomenon.
52 comments
comment by Vaniver · 2021-03-08T21:49:14.387Z
Discourage talking about 'religions' or 'faiths'; instead, talk about specific claims/hypotheses. Rename the field 'philosophy of religious claims', if that helps.
When we say 'religion', (a) it creates the false impression that claims must be a package deal, so we can't incrementally update toward one specific claim without swallowing the entire package; and (b) it encourages people to think of claims like theism in community-ish or institution-ish terms, rather than in hypothesis-ish terms.
IMO this promotes a particular form of religious belief / way of thinking about religion into "what religion is", in a way that seems wrong-headed to me. Not all religions are about how getting into heaven depends on whether or not you believed the right things, such that we should apply intense scrutiny to propositional claims.
Like, not all religions are about holy texts; a number of them would simply evaporate (or lose critical components) if you deleted all human memories and just left behind what they've written. How can you have Zen lineages or similar things (that rely on the intense transmission from human to human outside of texts) without memories? How can you tell whether or not someone is the rightful caliph without knowing their genealogy, which isn't part of the holy text?
comment by Shmi (shminux) · 2021-03-08T19:07:15.138Z
You and I rarely agree on much, but this looks like a great post! It highlights what an outsider like me would find befuddling about philosophical discourse, and your prescriptions, usually the weakest part of any argument, actually make sense. Huh.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T19:15:46.613Z
Woah, that's genuinely weird.
comment by Joe Collman (Joe_Collman) · 2021-03-11T19:58:21.932Z
A nit-pick:
Binding exceptionless commitments matter to understanding this complicated thing
I don't think "exceptionless commitments" is a useful category:
Any commitment is exceptionless within some domain.
No commitment is exceptionless over all domains (likely it's not even well-defined).
"exceptionless commitment" just seems like confusion/tautology to me:
So this commitment applies everywhere it applies? Ok.
Saying that it applies "without exception (over some set)" is no simpler than saying it applies "over some set". Either way, the set is almost certainly messy.
In practical terms, claiming a commitment is exceptionless usually means falling victim to an illusion of transparency: thinking that the domain is clear when it isn't (usually even to yourself).
E.g. "I commit never to X". Does this apply:
When I'm sleep deprived? Dreaming? Sleep-walking? Hallucinating? Drunk? How drunk? On drugs? Which drugs? Voluntarily? In pain? How much? Coerced? To what extent? Role-playing? When activity in areas of my brain is suppressed by electric fields? Which areas? To what degree? During/after brain surgery? What kind? After brain injury/disease? What kinds? Under hypnosis? When I've forgotten the commitment, believe I never made it, or honestly don't believe it applies? When my understanding of X changes? When possessed by a demon? When I believe I'm possessed by a demon? When replaced by a clone who didn't make the commitment? When I believe I'm such a clone?... (most of these may impact both whether I X, and whether I believe I am violating my commitment not to X)
For almost all human undertakings X, there are clear violations of "I commit to X", there are clear non-violations, and there's a complex boundary somewhere between the two. Adding "without exception" does not communicate the boundary.
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-03-08T20:20:35.695Z
I'm surprised that you discuss philosophy of religion. Why does it matter? Shouldn't the entire field just be trashed? (Aside from studying religion as a historical and social phenomenon, of course.)
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T20:29:21.412Z
I included it mostly for fun. Also, as the odd man out, including it maybe illustrates some things about the thinking/heuristics that generated these suggestions.
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-03-08T20:28:11.973Z
Start instead thinking of ethics as a project of trying to piece together psychological models of this insanely complicated and messy thing, 'human morality'... Care less about the distinction between 'moral values' and other human values. There are certainly some distinguishing features, but these mostly aren't incredibly important or deep or joint-carving.
I think we also need sociological models, and, relatedly, the distinction between "morality" and "human values" is joint-carving. IMO "morality" is all about game theory masquerading as values.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T20:41:57.122Z
I think we also need sociological models
Sounds reasonable!
IMO "morality" is all about game [LW · GW] theory [LW · GW] masquerading [LW · GW] as values.
That may be true for some conceptions of "morality", but I wouldn't want to bake it into the field at the outset regardless, since a lot of the things people normally associate with "morality" are related to things like purity, sanctity, vibe, dignity of the living and dead, refusal to think about taboo tradeoffs...
comment by Rob Bensinger (RobbBB) · 2021-03-08T18:57:05.149Z
(Yes, it's possible that after subjective centuries of research, reflection, and self-improvement, the ideal equilibrium of our values would end up looking a lot simpler than our values do today. Humans aesthetically value simplicity, and we care about things like fairness and symmetry that can cause us to reflectively endorse simpler versions of some other things we care about.
But at the current development stage of the field of ethics, fixating on that possibility is probably mostly bad, because we're so far away from that ideal, and because it's so hard to predict which simpler values we'd end up with, if our values did end up any simpler. And it's entirely possible that our values will end up more complex on the axes we care about. Fixating on this idea could be hazardous for people who are too attached to finding the One True Simple Theory of Value.)
comment by Joe Collman (Joe_Collman) · 2021-03-11T18:00:51.354Z
Stop picking a 'side' and then losing all interest in the parts of human morality that aren't associated with your 'side': these are all just parts of the stew, and we need to work hard to understand them and reconcile them just right, not sort ourselves into Team Virtue vs. Team Utility vs. Team Duty.
Is anyone serious actually doing this? My sense is that people on a Team believe that all of human morality can be seen from the perspective they've chosen (and that this is correct). This may result in convoluted transformations to fit particular pieces into a given approach. I haven't seen it involve dismissal of anything substantive. (Or when you say "loss of interest", do you only mean focusing elsewhere? Is this a problem? Not everyone can focus on everything.)
E.g. Utility-functions-over-histories can capture any virtue or duty (UFs might look degenerate in some cases, but do exist). The one duty/virtue of "judging according to consequences over histories" captures utility...
For this reason, I don't think "...parts of the stew..." is a good metaphor, or that the biological analogy fits.
Closer might be "Architects have long-standing controversies, but they don't look like 'Which is the right perspective on objects: things that take up a particular space, things that look a particular way, or things with particular structural properties?'."
I don't see it as a problem to focus on using one specific lens - just so long as you don't delude yourself into thinking that a [good simple approximation through your lens] is necessarily a [good simple approximation].
Once the desire for artificial simplicity/elegance is abandoned, I don't think it much matters which lens you're using (they'll tend to converge in the limit). To me, Team Utility seems the most natural: "You need to consider all consequences, in the broadest sense" straightforwardly acknowledges that things are a mess. However, so too would "You need to consider all duties (/virtues), in the broadest sense".
Omit the "...all... in the broadest sense", and you're in trouble on any Team.
↑ comment by TurnTrout · 2021-03-11T18:20:49.111Z
Utility-functions-over-histories can capture any virtue or duty (UFs might look degenerate in some cases, but do exist).
I disagree: I think there are some kinds of preferences we can have about the "anthropic measure" allocated to different realities. Or, consider the kind of reasoning that malign consequentialists might execute within the universal prior: that's about affecting things very much separate from their own universe, and so UFs over histories don't capture it. You might have a "duty" to affect decision-making which uses the universal prior. (Who knows how that's formalized...)
So, maybe you could just say "utility 1 to histories which follow the duty", but... I feel like Team Utility isn't capturing something substantive here, and so I'm therefore left wanting something more, and not agreeing that the utility function 'captures' it. Maybe we just need better formalisms and ways to reason about "reality fluid."
↑ comment by Joe Collman (Joe_Collman) · 2021-03-11T23:05:54.801Z
This is interesting. My initial instinct was to disagree, then to think you're pointing to something real... and now I'm unsure :)
First, I don't think your examples directly disagree with what I'm saying. Saying that our preferences can be represented by a UF over histories is not to say that these preferences only care about the physical history of our universe - they can care about non-physical predictions too (desirable anthropic measures and universal-prior-based manipulations included).
So then I assume we say something like:
"This makes our UF representation identical to that of a set of preferences which does only care about the physical history of our universe. Therefore we've lost that caring-about-other-worlds aspect of our values. The UF might fully determine actions in accordance with our values, but it doesn't fully express the values themselves."
Strictly, this seems true to me - but in practice I think we might be guilty of ignoring much of the content of our UF. For example, our UF contains preferences over histories containing philosophy discussions.
Now I claim that it's logically possible for a philosophy discussion to have no significant consequences outside the discussion (I realise this is hard to imagine, but please try).
Our UF will say something about such discussions. If such a UF is both fully consistent with having particular preferences over [anthropic measures, acausal trade, universal-prior-based influence...], and prefers philosophical statements that argue for precisely these preferences, we seem to have to be pretty obtuse to stick with "this is still perfectly consistent with caring only about [histories of the physical world]".
It's always possible to interpret such a UF as encoding only preferences directly about histories of the physical world. It's also possible to think that this post is in Russian, but contains many typos. I submit that это маловероятно ("that's unlikely").
If we say that the [preferences 'of' a UF] are the [distribution over preferences we'd ascribe to an agent acting according to that UF (over some large set of environments)], then I think we capture the "something substantive" with substantial probability mass in most cases.
(not always through this kind of arguing-for-itself mechanism; the more general point is that the UF contains huge amounts of information, and it'll be surprising if the expression of particular preferences doesn't show up in a priori unlikely patterns)
If we're still losing something, it feels like an epsilon's worth in most cases.
Perhaps there are important edge cases??
Note that I'm only claiming "almost precisely the information you're talking about is in there somewhere", not that the UF is necessarily a useful/efficient/clear way to present the information.
This is exactly the role I endorse for other perspectives: avoiding offensively impractical encodings of things we care about.
A second note: in practice, we're starting out with an uncertain world. Therefore, the inability of a UF over universe histories to express outside-the-universe-history preferences with certainty may not be of real-world relevance. Outside an idealised model, certainty won't happen for any approach.
↑ comment by TAG · 2021-03-11T21:14:09.579Z
There's an arena where disputes about basic values -- order versus freedom, hierarchy versus equality, etc. -- are fought out, and that is politics, not philosophy. If values naturally converged, politics would not be needed.
↑ comment by Joe Collman (Joe_Collman) · 2021-03-11T23:17:13.251Z
I don't mean that values converge.
I mean that if you take a truth-seeking approach to some fixed set of values, it won't matter whether you start out analysing them through the lens of utility/duty/virtue. In the limit you'll come to the same conclusions.
comment by ChristianKl · 2021-03-10T00:29:09.892Z
2. Care less about human intuitions and concepts. Care more about the actual subject matter of metaphysics — ultimate, objective reality. E.g., only care about the concept 'truth' insofar as we have strong reason to think an alien would arrive at the exact same concept, because it's carving nature closer to its joints.
I think that metaphysics would progress better if it looked at the practical issues that various scientific fields have with the concepts they engage with, rather than trying to take a detached view from real-world concerns.
comment by TAG · 2021-03-08T23:15:35.940Z
Accept as a foundational conclusion of the field, ‘human values seem incredibly complicated and messy; they’re a giant evolved stew of competing preferences, attitudes, and feelings, not the kind of thing that can be captured in any short simple ruleset (though different rulesets can certainly perform better or worse as simplified idealizations).
I have not noticed mainstream ethicists assuming values are simple. "Ethics is value" is a rationalist belief, not a mainstream belief.
Binding exceptionless commitments matter to understanding this complicated thing; folk concepts like courage and honesty and generosity matter;
Ditto. Who ignores, or argues against courage and honesty?
Stop thinking of the project of ethics as ‘figure out which simple theory is True’.
I agree with that, it's time to look at hybridised theories.
Start instead thinking of ethics as a project of trying to piece together psychological models of this insanely complicated and messy thing, ‘human morality
I don't agree with that. For one thing, that's already a branch of psychology. For another, it's purely descriptive, and so gives up on improving ethics.
Ethics should be grounded in an understanding of what humans use it for, but should not be limited to it.
↑ comment by Anthony DiGiovanni (antimonyanthony) · 2021-03-13T23:27:03.569Z
Who ignores, or argues against courage and honesty?
As an intrinsic value? Lots of utilitarians, myself included. I'm unsure if Rob's intent was to suggest these things are values worth respecting intrinsically or just instrumentally.
↑ comment by TAG · 2021-03-14T01:19:55.857Z
Who ignores, or argues against courage and honesty?
Lots of utilitarians, myself included.
But it was supposed to be a comment about mainstream philosophy. It's not a given that a mainstream ethicist will be some sort of utilitarian in the way that a rationalist probably will.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-11T15:16:09.852Z
I don't agree with that. For one thing, that's already a branch of psychology. For another, it's purely descriptive , and so gives up on improving ethics.
I agree with this. I don't really mind if moral philosophy ends up merging into moral psychology, but I do think there are potentially valuable things philosophers can add here, that might not naturally occur to descriptive psychology as important: we can try to tease apart meta-values of varying strengths, and ask what we might value if we knew more, had more ability to self-modify, were wiser and more disciplined, etc.
Ethics is partly a scientific problem of figuring out what we currently value; but it's also an engineering problem of figuring out and implementing what we should value, which will plausibly end up cashing out as something like 'the equilibrium of our values under sufficient reflection and (reflectively endorsed!) self-modification'.
↑ comment by TAG · 2021-03-11T15:23:02.692Z
Again, it has not been proven that ethics is just about value.
↑ comment by TAG · 2021-03-11T21:31:13.968Z
If you think ethics should be less narrow, why focus only on values? If you think that the only function of ethics is to maximise value, you will be led to the narrow conclusion that consequentialism is the only metaethics. But if you recognise that ethics also has the functions of shaping individual behaviour, enabling coordination, and avoiding conflict, then you can take a broad, multifaceted view of metaethics.
comment by TAG · 2021-03-08T22:53:13.660Z
Christianity is not a default
Ok. But the idea that a religion is a set of claims is itself a generalisation from Christianity.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T23:02:52.120Z
There's a spectrum, where Christianity and Buddhism are quite doctrine-y and Judaism and ancient-Roman-religion-ism are much less so. Doesn't seem important, since we care about what claims are being made regardless of how we define 'religion' or define the boundaries of religious membership.
comment by TAG · 2021-03-08T22:36:25.337Z
the field of epistemology to be much healthier if it spent less time developing theory, and more time applying theories and reporting back about how they perform in practice.
Only certain aspects of a theory can be tested that way.
You can objectively show that a theory succeeds or fails at predicting observations. It is less clear whether an explanation succeeds in explaining, and less clear still whether a model succeeds in corresponding to the territory. The lack of a test for correspondence per se, i.e. the lack of an independent "standpoint" from which the map and the territory can be compared, is the major problem in scientific epistemology. It's the main thing that keeps non-materialist ontology going.
The thing scientific realists care about is having an accurate model of reality, knowing what things are. If you want that, then instrumentalism is giving up something of value to you, so long as such a model is possible. If realistic reference is impossible, then there's no loss of value, but proving realistic reference impossible isn't easy either.
comment by TAG · 2021-03-08T20:41:31.712Z
'we have immediate epistemic access to things we know for certain' is a contentious, confusing hypothesis
Everyone knows that. Adopting it doesn't give you an easy dissolution of the Hard Problem or anything like that. You can have a problem of qualia without the incorrigibility of qualia.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T20:42:52.293Z
My advice for philosophy of mind isn't aimed at insta-dissolving the Hard Problem. The Hard Problem is hard!
comment by Slider · 2021-03-09T01:36:28.788Z
So you would turn metaphysics into physics. That field already exists; go to that department if that is more your forte.
Sure, it is important to be strict and explicit about the epistemologically special character of the object of study. But contentious? Are you seriously saying that "You cannot be sure how the world seems to you" has significant plausibility? The clues that there exists an outside world are imprinted in such impressions; the existence of a third-person viewpoint can't be the foundation for the existence of the first-person viewpoint.
↑ comment by Signer · 2021-03-09T14:34:32.159Z
Are you seriously saying that “You cannot be sure how the world seems to you” has significant plausibility?
How sure are you that this sentence seems to you the same as it seemed to you 1 ms ago? If you can't precisely quantify the difference between experiences, you can't have perfect certainty in your beliefs about experience. And it gets worse when you leave the zone that the brain's reflective capabilities were optimized for.
↑ comment by Slider · 2021-03-09T17:53:36.507Z
Past experiences do not directly seem to me, and indeed I can't make such cross-temporal comparisons. However, the memory image I have of the past is a seeming. This is often much less than what direct current experiencing is.
As compared to belief-in-belief, one can have belief-in-experience, but it can't be all belief; there needs to be some actual experience in there.
That is, sentence-m1 and sentence-m2 might be mistakenly believed to be two slices of a cross-temporal object, sentence-eternal. But what you actually perceive is two separate objects (in two separate perceptions).
Whatever kind of things we "read into" our perception (such as sentence-eternal), there is something we read "with". Those kinds of things can't fail to exist. And even with the kinds of things we build, there is the fact of whether or not they get built: when faced with some black on white, whether a hallucination of sentence-eternal takes place or not.
↑ comment by Signer · 2021-03-09T22:01:38.128Z
Yes, there are experiences, not only beliefs about them. But as with beliefs about external reality, beliefs can be imprecise.
It is possible to create a more precise description of how something seems to you and for which your internal representation with integer count of built things is just approximation. And you can even define some measure of the difference between experiences, instead of just talking about separate objects.
It is not an extremely bad approximation to say "it seems like two sentences to me", so it is not as if being sure in the absence of experience is the right way.
The only thing you can be sure of is that something exists, because otherwise nothing could produce any approximations. But if you can't precisely specify the temporal or spatial or whatever characteristics of your experience, there is no sense in which you can be sure how something seems to you.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-09T23:19:58.620Z
Oh jeez, Signer and Slider are two different user names.
↑ comment by Slider · 2021-03-12T16:56:01.103Z
Even with beliefs about internal events, there is the direct evidence and then there is the pattern seen in it. On the neuronal level this means that a neuron is either on or off. Whatever it signifies or tells about is secondary, but the firing event itself is the world here-now rather than "out there". Now, you could have more abstract parts of the brain that do not have direct access to what happens in the subconscious parts. There is the eye, there is the visual cortex, and there is the neocortex. The neocortex might separately build a model for itself of what happens in the visual cortex. This is inherently guesswork and is subject to uncertainty. However, the concrete objects that the visual cortex passes up are "concrete firings"; it would not make sense for the brain to model those, and it need not.
I get that you are gesturing at a model where there is some nebulous truth, and the more sophisticated the ways one can measure it, the more faithful a representation can be given. Yes, if your measuring apparatus has more LED lights in it to go off, it will extract more bits from the thing measured. But if one installs additional lights, the trigger conditions of the old lights are just retained rather than improving in some way. Sure, you can be uncertain whether a light goes off because a photon was caught or because an earthquake tripped it. But the fact that the light did trip, i.e. the data itself, is not subject to this kind of speculation.
In principle I could just have a list of LED firings without a good model of how such triggering could have come about. I would still have a seeming without knowing how to build anything from it.
↑ comment by Signer · 2021-03-13T08:02:41.393Z
The LEDs are physical objects, and so your list of firings could be wrong about the physical fact of actual firing if you had a hallucination when making that list. Same with the neurons: it's either indirect knowledge about them, or no one actually knows whether some neuron is on or off.
Well, except you can say that neurons or LEDs themselves know about themselves. But first, it's just renaming "knowledge and reality" to "knowledge and direct knowledge", and second, it still leaves almost all seemings (except "the left half of a rock seems like the left half of a rock to the left half of a rock") as uncertain - even if your sensations can be certain about themselves, you can't be certain that you are having them.
Or you could have an explicitly Cartesian model where some part of the chain "photons -> eye -> visual cortex -> neocortex -> expressed words" is arbitrarily defined as always-true knowledge. Like if the visual cortex says "there is an edge at (123, 123) of visual space", you interpret it as true or as an input. But now you have the problem of determining "true about what?". It can't be certain knowledge about the eye, because the visual cortex could be wrong about the eye, and it can't be about the visual cortex for any receiver of that knowledge, because it could be spoofed in transit. I guess implementing a Cartesian agent would be easier, or maybe some part of any reasonable agent is even required to be Cartesian, but I don't see how certainty in inputs can be justified.
↑ comment by Slider · 2021-03-25T23:53:18.319Z
There are some forms of synesthesia where certain letters get colored as certain colors. If a "u" is supposed to be red, producing that data construct to give to the next layer doesn't need to conform to the outside world. "U"s are not inherently red, but seeing letters in colors can make a brain perform better or more easily on certain tasks.
Phenomenology is concerned with what kind of entities these representations that are passed around are. There it makes sense to say that in synesthesia a letter concept invokes the qualia of color.
I was forming a rather complex view where each subsystem has direct knowledge about the interfaces it has, but indirect knowledge of what goes on in other systems. This makes it so that a given representation is direct, infallible knowledge to some system and fallible knowledge to other systems (seeing a red dot doesn't mean one has seen a red photon, just the fact that you need a bunch of, like, 10 or so photons for the signal to carry forward from the eye).
Even if most of the interesting stuff is indirect knowledge, the top level always needs its interface to the nearby bit. For the system to do the subcalculation/experience that it is doing, it needs to be based on solid signals. The part that sees words from letters might be at the mercy, and error rate, of the letter-seeing part. That is, the word part can function one way if "u" is seen and "f" is not seen, and another way if "u" is unseen and "f" is not seen; but should it try to produce words without hints or help from the letter-seeing part, it cannot be sensitive to the wider universe.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-09T03:05:34.605Z
So you would turn metaphysics into physics.
Nope! This is very different from what I proposed. Physics departments are not generally places you go to debate what a "property" is, whether properties are fundamental, how (and whether) brains could know what properties are and whether they're fundamental, etc. To take one example of a metaphysics-y topic.
Are you seriously saying that "You cannot be sure how the world seems to you" has significant plausibility?
Yep!
The clues that there exists an outside world are imprinted in such impressions; the existence of a third-person viewpoint can't be the foundation for the existence of the first-person viewpoint.
I suggest reading https://www.lesswrong.com/rationality to get a better picture of my view here. I'm a Bayesian, and I don't think all knowledge can or should be based on infallible foundations.
↑ comment by Slider · 2021-03-09T10:30:37.844Z
How would one operationalise questions about what a property is?
It would be hard to update your probabilities if you are unsure what the evidence is and whether you have seen it. In a straightforward formulation of Bayesian updating, what the evidence is is unproblematic. In order for Bayesian updating to be relevant, evidence and hypotheses need to have different epistemological characters. You cannot make the system work with only hypotheses; you need evidence-type things as well, so you cannot do without infallibility.
Negation of "All knowledge can be infallibly founded" is "There exists knowledge that can't not be infallibly founded". Negation of "There exists a bit of infallible knowledge" is "All knowledge is incapable of being founded infallibly". Sure thinking that most interesting types of knowledge is fallible is a workable direction. But one can not have a map-territority mismatch if there is no map. While reading random scribbles on a map tells nothing of the outside world it does tell of the existence of the map itself. If one would need to guess what is written on the map, there would be a need to represent a map. But why wouldn't you need a map to read such a meta-map? (see also wittgenstein about requiring instruction books how to read signs). One can avoid an infinite regression if there is a level of map that just gets used (a map that doesn't require a map to use).
The suggested reading is so wide that it is not a practical method of addressing disagreement. And I am already familiar with a lot of it. Which parts do you think are relevant here?
comment by TAG · 2021-03-08T20:27:12.925Z
Care less about human intuitions and concepts
- Can you demonstrate a form of epistemology that works with no intuitions (unfounded assumptions) at all?
- If not, can you show that philosophy is actually using more than the unavoidable minimum of intuitions?
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T20:31:41.461Z
"Care less about human intuitions and concepts" here means "care less about intuitions and concepts to the extent they're human-specific" (e.g., care more about bottom quarks, less about beauty). It of course doesn't mean "don't use intuitions or concepts".
↑ comment by TAG · 2021-03-08T20:37:30.559Z
Would you consider rephrasing it, then?
And why should people stop caring about what they care about? What meta-value is that based on?
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T20:44:30.802Z
My advice is based on my observing the track record of different people thinking about metaphysics, and generalizing habits that I think explain why some metaphysicians do better than others.
Before accepting my advice, I'd expect philosophers to first want to hash out a bunch of object-level disputes with me to establish why I think this vs. that philosopher is performing better in the first place.
↑ comment by TAG · 2021-03-08T23:29:30.703Z
You seem to be saying that humanistic philosophy fails... but how are you judging that?
↑ comment by supposedlyfun · 2021-03-09T02:29:40.807Z
For your readers' benefit, maybe just say what you mean, or actively look for a crux, instead of fencing/sparring? I'm having a very hard time figuring out what you are thinking.
comment by TAG · 2021-03-08T20:15:56.342Z
Make reductive, ‘third-person’ models of the brain central to metaphysics discussion.
The mind-body problem is key to much of metaphysics.
If we had satisfactory reductive models of all prima facie mental phenomena, there would be no mind-body problem in the first place, any more than there is a heat-atom problem. So what does this piece of advice mean?
- Lower the bar on what counts as satisfactory explanation?
- Or dismiss the existence of what you can't explain?
There's plenty of evidence of both in the rationalsphere. There's a reluctance to treat reductive explanation as something that is capable of failing, a tendency to treat reductionism as something you believe by faith rather than something that produces predictions that can be confirmed or not.
And also a level of popularity for illusionism.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T20:34:02.488Z
You're quoting from my section on metaphysics, not my section on philosophy of mind.
↑ comment by TAG · 2021-03-08T20:38:12.781Z
I know. "The mind body problem is key to much of metaphysics".
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T20:46:36.650Z
I don't see why. Even if I thought panpsychism were true, I don't think it would help resolve most of the things analytic philosopher metaphysicians are debating.
↑ comment by TAG · 2021-03-08T21:16:57.013Z
Things like idealism versus dualism versus materialism are topics in metaphysics that are downstream of the MBP -- there's no motivation to reject materialism other than accounting for subjectivity/consciousness. Other topics, like realism versus conceptualism versus nominalism, are not particularly downstream of the MBP.
comment by TAG · 2021-03-08T20:33:52.311Z
Care more about the actual subject matter of metaphysics — ultimate, objective reality
Whether reality is entirely objective is a valid question in metaphysics. You seem to be in the grip of a strong intuition.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-08T20:35:55.235Z
Analytic philosophers aren't generally very interested in the question of whether all of reality is socially or mentally constructed, and will freely say that it's not, if that's what you're alluding to. (Which I'm already happy about, so I don't feel a need to encourage it in my advice.)