Looking for ideas about epistemology-related topics
post by Onemorenickname · 2017-07-19T18:56:05.007Z · LW · GW · Legacy · 9 comments
Notes:
- "Epistemy" refers to the second sense of "epistemology": a particular theory of knowledge.
- I'm more interested in ideas that push the thoughts presented here further than in the presentation itself.
Good Experiments
The point of "Priors are useless" is that if you update after enough experiments, you tend to the truth distribution regardless of your initial prior distribution (assuming its codomain excludes 0 and 1, or at least that it assigns neither 1 to a non-truth nor 0 to a truth); a small simulation after this list makes the convergence concrete. However, "enough experiments" is magic:
- The purely quantitative aspect: you might not have time to run these experiments in your lifetime.
- "Independent experiments" is not well defined. Knowing which experiments are pairwise independent embeds higher-level knowledge that could easily be used to derive truths directly. If we try to prove a mathematical theorem, comparing the pairwise correlations between the success probabilities of different approaches would give far more insight and results than trying to prove it the usual way.
- We don't even need pairwise independence. For instance, if we assume P ≠ NP because we couldn't prove it, we do so because we expect the techniques used so far not to be all correlated with one another. However, this expectation is either wrong (see the small list of fairly widely accepted conjectures that were later disproved), or it stems from higher-order knowledge (knowledge about knowledge). Infinite regress.
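Here is the simulation: a minimal sketch (a toy Beta-Bernoulli setup of my own, not anything taken from the "Priors are useless" discussion) in which two agents start from very different priors over a coin's bias and update on the same flips.

```python
import random

random.seed(0)
true_bias = 0.7  # assumed ground truth for this toy world

# Beta priors as (alpha, beta) pseudo-counts: one agent leans "fair coin",
# the other leans "almost always tails"; neither puts probability 0 on the truth.
priors = {"fair-leaning": [50.0, 50.0], "tails-leaning": [1.0, 20.0]}

for n in range(1, 5001):
    flip = 1 if random.random() < true_bias else 0
    for ab in priors.values():
        ab[0] += flip        # conjugate Beta-Bernoulli update
        ab[1] += 1 - flip
    if n in (10, 100, 1000, 5000):
        means = {name: a / (a + b) for name, (a, b) in priors.items()}
        print(n, {name: round(m, 3) for name, m in means.items()})
```

Both posterior means end up near 0.7, but after 10 or even 100 flips the two agents still disagree noticeably; the convergence is real, yet "enough" is doing exactly the work the list above complains about.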
Good Priors
However, conversely, having a good prior distribution is magic too. You could have a prior distribution that assigns 1 to truths and 0 to non-truths. So you might want the additional requirement that the prior distribution be computable. But then there are two problems:
- There aren't many known computable prior distributions. Occam's razor (in terms of Kolmogorov complexity in a given language) is one, but it fails miserably in most interesting situations. Think of poker, or a simplified version of it played with only A, K, and Q: if someone bets, the simplest explanation is that they have good cards. Most of the interesting situations where we want to apply Bayesianism come from human interactions (we managed to do hard science before Bayesianism, and we still have trouble with the social sciences). As such, failing to account for bluffing is a serious epistemic fault in a prior distribution (a toy sketch after this list makes this concrete).
- Evaluating the efficiency of a given prior distribution will be done over the course of several experiments, and hence requires a higher-order prior distribution (a prior distribution over prior distributions). Infinite regress.
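Here is the toy sketch for the poker point (entirely my own construction: the "descriptions", the encoding language, and the 40% bluffing frequency are all made up for illustration). A description-length prior gives almost all of its mass to the shorter explanation, regardless of how people actually play.

```python
# Hypothetical encodings of two explanations of a bet in some fixed language.
descriptions = {
    "has a strong hand": "bet(strong)",
    "is bluffing":       "bet(weak) and expects(fold(opponent))",
}

# Occam-style prior: P(h) proportional to 2^(-description length).
weights = {h: 2.0 ** -len(code) for h, code in descriptions.items()}
total = sum(weights.values())
occam_prior = {h: w / total for h, w in weights.items()}

# An opponent who, we assume, bluffs 40% of the time when betting.
actual_frequency = {"has a strong hand": 0.6, "is bluffing": 0.4}

for h in descriptions:
    print(f"{h}: Occam prior = {occam_prior[h]:.2e}, actual frequency = {actual_frequency[h]:.2f}")
```

The exact numbers depend entirely on the chosen encoding language, which is part of the problem: such a prior all but rules out bluffing and needs a mountain of evidence to catch up with what any decent player already takes into account.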
Epistemies
In real life, we don't encounter these infinite regresses: we use epistemies. An epistemy is usually a set of axioms plus a methodology for deriving truths from those axioms. Together they form a trusted core that we can rely on, provided we understand the limits of the underlying meta-assumptions and methodology.
Epistemies are good because, instead of thinking about the infinite chain of higher-order priors every time we want to prove a simple statement, we can rely on an epistemy. But they are regularly left undefined, not properly followed, or not even understood, which leads to epistemic faults.
Questions
As such, I'm interested in the following:
- When and how do we define new epistemies? E.g., "Should we define an epistemy for evaluating the utility of actions for EA?", "How should we define an epistemy for building new models of human psychology?", etc.
- How do we account for epistemic changes in Bayesianism? (This requires self-reference, which Bayesianism lacks.)
- How do we make sense of Scott Alexander's yearly predictions? Are they only a black box telling us to bet more on future predictions, or do we have a better analysis?
- What prior distributions are interesting for studying human behavior? (For a given restricted class of situations, of course.)
- Are answers to the previous questions useful? Are the previous questions meaningful?
I'm looking for ideas and pointers/links.
Even if your thought seems obvious, if I didn't explicitly mention it, it's worth commenting; I'll add it to this post.
Even if you only have an idea for one of the questions, or a specific criticism of a point made in the post, go ahead.
Thank you for reading this far.
9 comments
comment by JenniferRM · 2017-07-25T06:54:24.553Z · LW(p) · GW(p)
I suspect that you are leaping to the idea of "infinite regress" much too quickly, and also failing to look past it or try to simply "patch" the regress in a practical way when you say:
Evaluating the efficiency of a given prior distribution will be done over the course of several experiments, and hence requires a higher-order prior distribution (a prior distribution over prior distributions). Infinite regress.
Consider the uses that the Dirichlet distribution is classically put to...
Basically, if you stack your distributions two or three (or heaven forbid four) layers deep, you will get a LOT of expressiveness and yet the number of steps up the abstraction hierarchy still can be counted with the fingers of one hand. Within only a few thousand experiments even the topmost of your distributions will probably start acquiring a bit of shape that usefully informs subsequent experiments.
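Here is a rough sketch of the kind of thing I mean (my own toy setup, crude empirical Bayes rather than a full Dirichlet treatment): each "experiment type" has its own success rate, and after a few thousand observations the top layer has acquired enough shape to inform the prior for a brand-new type.

```python
import random

random.seed(1)

# Assumed world: most experiment types succeed ~20% of the time, a few ~80%.
true_rates = [0.2] * 8 + [0.8] * 2
counts = [[0, 0] for _ in true_rates]   # [successes, failures] per type

for _ in range(3000):                   # a few thousand experiments in total
    i = random.randrange(len(true_rates))
    success = random.random() < true_rates[i]
    counts[i][0 if success else 1] += 1

# Layer one: per-type rate estimates (with a weak Beta(1, 1) smoothing prior).
rates = [(s + 1) / (s + f + 2) for s, f in counts]

# Layer two: the distribution of those rates, i.e. the shape the top-level
# distribution has acquired; matching a Beta (or a mixture) to it by moments
# gives a concrete prior for the next new experiment type.
mean = sum(rates) / len(rates)
var = sum((r - mean) ** 2 for r in rates) / len(rates)
print(f"top-level mean ~ {mean:.2f}, variance ~ {var:.3f}")
```

The regress gets patched with data after a couple of layers rather than continuing forever.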
Probably part of the reason you seem to give up at the first layer of recursion and just assume that it will recurse unproductively forever is that you're thinking in terms of some small number of slogans (axioms?) that can be culturally transmitted in language by relatively normal people engaging in typical speech patterns, perhaps reporting high church Experiments that took weeks or months or years to perform and get reported in a peer-reviewed journal, and so on.
Rather than conceptually center this academic practice, perhaps it would make more sense to think of "beliefs" as huge catalogues of microfacts, often subverbal, and "experiments" as being performed by even normal humans on the time scales of milliseconds to minutes?
The remarkable magical thing about humans is not that we can construct epistemies, the remarkable thing is that humans can walk, make eye contact and learn things from it, feed ourselves, and pick up sticks to wave around in a semi-coordinated fashion. This requires enormous amounts of experimentation, and once you start trying to build them from scratch yourself you realize the models involved here are astonishing feats of cognitive engineering.
Formal academic science is hilariously slow by comparison to babies.
The problem formal intellectual processes solve is not the problem of figuring things out quickly and solidly, but rather (among other things) the problem of lots of people independently figuring out many of the same things in different orders, with different terminology, and ending up with the problem of Babel.
Praise be to Azathoth, for evolution already solved "being able to learn stuff pretty good" on its own and delivered this gift to each of us as a birthright. The thing left to us is to solve something like the "political economy of science". Credit assignment. Re-work. Economies of scale... (In light of social dynamics, Yvain's yearly predictions start to make a lot more sense.)
A useful keyword here is "social epistemology" and a good corpus of material is the early work of Kevin Zollman, including this overview defending the conceptual utility of social epistemology as a field.
comment by Onemorenickname · 2017-07-28T18:49:21.434Z · LW(p) · GW(p)
I suspect that you are leaping to the idea of "infinite regress" much too quickly, and also failing to look past it or try to simply "patch" the regress in a practical way when you say
No. I mention the practical patch right after: epistemies.
The remarkable magical thing about humans is not that we can construct epistemies, the remarkable thing is that humans can walk, make eye contact and learn things from it, feed ourselves, and pick up sticks to wave around in a semi-coordinated fashion.
Formal academic science is hilariously slow by comparison to babies.
Those are two different fields, with different problems. My answer to your point is that we have embedded epistemological/ontological knowledge when we are born. From a different comment thread:
However, let's say we consider naive observation and innate reasoning as being part of a proto-epistemy. Then we have to acknowledge too that we have a fair share of embedded ontological knowledge that we don't gain through experience, but that we have when we are born (time, space, multiplicity, weight, etc.). This is paramount, as without that, we would actually be trapped in infinite regress.
The problem formal intellectual processes solve is not the problem of figuring things out quickly and solidly
Well, formal verification, proof systems, NLP and AGI are a thing. So I disagree.
The thing left to us is to solve something like the "political economy of science".
No, there are plenty of other things, including the aforementioned one. But more fundamental is fixing the "gift as a birthright". That's the point of rationalism: our innate epistemy is a bad one. It lets us walk, gather sticks, and talk with people, but it makes for bad science most of the time.
A useful keyword here is "social epistemology"
Thanks for the pointer. Checking it.
comment by Gordon Seidoh Worley (gworley) · 2017-07-19T21:41:29.934Z · LW(p) · GW(p)
I guess it's somewhat unclear to me just what work "epistemy" is doing, given how you try to use it in your first question. Certainly a person's epistemology affects their understanding of many things, and weaknesses in epistemology may be exposed by pursuing particular fields of inquiry, but then you ask to "define an epistemy to build new models of human psychology", and that seems like a teleological approach to epistemology which, if I'm honest, seems entirely backwards from the rationalist approach (but maybe that's what you're going for?).
I guess I'm also somewhat unclear on what binds these ideas/questions together. I think you know but it's not immediately obvious to me beyond saying it's very broadly all about knowing, but then so is everything.
comment by Onemorenickname · 2017-07-20T14:02:43.286Z · LW(p) · GW(p)
Certainly a person's epistemology affects their understanding of many things
I think having one epistemy to deal with everything is a mistake. It follows from the post that the strength of an epistemy lies in its specialization.
I guess it's somewhat unclear to me just what work "epistemy" is doing
I don't understand "what work is [X] doing" means in this context.
that seems like a teleological approach to epistemology
It's more that different fields of inquiry lead to different epistemies. If you want to study different fields, you have no a priori reason to use the same epistemy for both.
I guess I'm also somewhat unclear on what binds these ideas/questions together.
I don't know of a Bayesianist account of epistemies. As such, I'm shotgunning questions aimed at revealing one. The questions are spread across different fields and different positions on the abstract-concrete spectrum.
comment by Gordon Seidoh Worley (gworley) · 2017-07-20T20:22:10.538Z · LW(p) · GW(p)
It's more that different fields of inquiry lead to different epistemies. If you want to study different fields, you have no a priori reason to use the same epistemy for both.
But you do because fields are just an after-the-fact construction to make understanding reality more manageable. There's just one reality (for a phenomenologically useful sense of "reality" as the thing which you experience), fields just pick a part of it to focus on, and as such there is much overlap between how we know things in fields.
To be concrete about it, there are many fields we consider part of science and they all use the shared epistemological methods of science to explore particular topics. We don't reinvent science for physics, biology, etc. each time because each field is really just choosing to focus on a particular part of the questions science is designed to answer.
I think there's also a deeper confusion here where you seem to be thinking as if ontology comes first. That is, you are taking a transcendental stance. Otherwise you would see an a priori reason to use the same epistemology in multiple fields, because epistemology would be prior to ontology. However, the only way the transcendental stance is defensible is if it's unnecessary: that is, ontology comes first, but we have to experience it, so there's no way for us to know that where epistemology isn't prior. Failing the test of parsimony, we should then reject transcendentalism anyway within our understanding.
comment by Onemorenickname · 2017-07-21T02:38:16.329Z · LW(p) · GW(p)
But you do because fields are just an after-the-fact construction to make understanding reality more manageable. There's just one reality (for a phenomenologically useful sense of "reality" as the thing which you experience), fields just pick a part of it to focus on, and as such there is much overlap between how we know things in fields.
I disagree thoroughly with that paragraph.
Science is not about "understanding reality". Or at least, not "reality" in the sense of "the thing which you experience". The impact of science on "the thing which we experience" can only be seen through pragmatism. Quantum physics is good not because it gives some of us a more manageable understanding of reality, but because it gives all of us tools relying on quantum effects.
If we talk about science as "understanding reality", then it's not "the thing which we experience". And in that case, science understands many different, sometimes independent realities.
"as such there is much overlap between how we know things in fields". There are only small overlaps between NLP, linguistics and cognitive psychology, all three studying natural languages. There are strong differences between logic from a philosophical point of view, logic from a mathematical foundations point of view and logic from a CS point of view., all three studying logic. A science is defined by its object and by its method.
To be concrete about it, there are many fields we consider part of science and they all use the shared epistemological methods of science to explore particular topics.
Well, if you put all the methods used in different sciences into the common set of "shared epistemological methods of science", then I have to tautologically agree. But as well as concrete differences (a chemistry experimental protocol is very different from a physics one), there are abstract differences (controlled experiments, natural experiments, historical inquiry, formal proof, naked human reasoning). So I don't understand your point.
We don't reinvent science for physics, biology, etc. each time because each field is really just choosing to focus on a particular part of the questions science is designed to answer.
Well, if a field is solely an "after-the-fact construction", there is no intention or design in fields.
Putting that aside, my explanation for the fact that we don't reinvent science every time is more down-to-earth: tragedy of the commons, and chronology. Focusing on epistemology is hard and time-consuming, and it doesn't benefit individuals but everyone at the same time. Except in particular instances (foundational crises), researchers won't take the burden on themselves. Also, epistemological advances came after these fields were established.
I think there's also a deeper confusion here where you seem to be thinking as if ontology comes first. That is, you are taking a transcendental stance.
I take ontology and epistemology as separate. A science is defined by its object (ontos) and by its method (epistemy). Given that I can make both arguments for different sciences (some where the ontos comes before the epistemy, and some where the epistemy comes before the ontos), I see them as separate.
When you say "that is ontology comes first but we have to experience it so there's no way for us to know that where epistemology isn't prior", you beg the question : you assume there is no higher order ontological knowledge, but that there is higher order epistemological knowledge (without which we couldn't have relevant experiments). And I can derive epistemological knowledge from observation and ontological knowledge as much as I can derive ontological knowledge from experiments and epistemological knowledge.
comment by Gordon Seidoh Worley (gworley) · 2017-07-21T21:38:48.235Z · LW(p) · GW(p)
I take ontology and epistemology as separate. A science is defined by its object (ontos) and by its method (epistemy). Given that I can make both arguments for different sciences (some where the ontos comes before the epistemy, and some where the epistemy comes before the ontos), I see them as separate.
Great! My misunderstanding.
When you say "that is ontology comes first but we have to experience it so there's no way for us to know that where epistemology isn't prior", you beg the question : you assume there is no higher order ontological knowledge, but that there is higher order epistemological knowledge (without which we couldn't have relevant experiments). And I can derive epistemological knowledge from observation and ontological knowledge as much as I can derive ontological knowledge from experiments and epistemological knowledge.
I suppose I do insofar as the very act of experiencing experience is experience and thus by at all noticing your experience you know a way of knowing. And although you may infer things about epistemology from ontology, you cannot derive them because ontology must be constructed from knowledge gained through experience (at least if we demand a phenomenological account of knowledge), and thus all ontology is tainted by the epistemological methods of experience used to gain such knowledge.
comment by Onemorenickname · 2017-07-22T05:35:07.100Z · LW(p) · GW(p)
I suppose I do insofar as the very act of experiencing experience is experience and thus by at all noticing your experience you know a way of knowing. And although you may infer things about epistemology from ontology, you cannot derive them because ontology must be constructed from knowledge gained through experience (at least if we demand a phenomenological account of knowledge), and thus all ontology is tainted by the epistemological methods of experience used to gain such knowledge.
Naive observation precedes any epistemic method of gaining knowledge. However, let's say we consider naive observation and innate reasoning as being part of a proto-epistemy. Then we have to acknowledge too that we have a fair share of embedded ontological knowledge that we don't gain through experience, but that we have when we are born (time, space, multiplicity, weight, etc.).
This is paramount, as without that, we would actually be trapped in infinite regress.
comment by ImmortalRationalist · 2017-07-30T10:13:09.457Z · LW(p) · GW(p)
Eliezer Yudkowsky wrote this article a while ago, which basically states that all knowledge boils down to two premises: that "induction works" has a sufficiently large prior probability, and that there exists some single large ordinal that is well-ordered.