Epistemology vs Critical Thinking

post by Onemorenickname · 2017-06-10T02:41:39.684Z · LW · GW · Legacy · 12 comments

Contents

  Epistemies work.
    General approaches don't work.
    Specific approaches work.
  Implications.
    Stopping redundancy.
    Epistemic effort.
  Misc.
    "Bayesian probabilities"
    Bayesian framework
    Not even wrong
  Questions

Short vocabulary points:


Epistemic ...:
  • Effort. There is much more reasoning behind this post than is shown here; I'm mostly trying to see whether people are interested. If they are, much more writing will follow.
  • Status. Field: rationalist epistemology. Phase: pre-epistemy.

Epistemies work.

General approaches don't work.

Model checking, validity checking and proof search can be hard: NP-hard, PSPACE-hard, non-elementary, or even undecidable. In particular, validity of propositions in first-order logic is undecidable.

Our propositions about the world are more complex than anything first-order logic describes, which makes it impossible to prove their validity in the general case. As such, trying to find a general logic for dealing with the world (i.e., critical thinking) is energy badly spent.
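To make the blow-up concrete, here is a minimal sketch (my own illustration, not part of the original argument): even in the propositional fragment, where validity is decidable, the obvious general procedure enumerates every truth assignment, so its cost doubles with each extra variable; for full first-order logic no such procedure exists at all.

```python
# A minimal sketch (illustration only): brute-force validity checking for
# propositional formulas. The formula is passed as a boolean function.
from itertools import product

def is_valid(formula, n_vars):
    """True iff `formula` holds under every assignment of its n_vars variables.
    Cost: 2 ** n_vars evaluations, which is why the general approach does not
    scale even in this decidable fragment."""
    return all(formula(*assignment)
               for assignment in product([False, True], repeat=n_vars))

# Example: (p -> q) or (q -> p) is a tautology.
print(is_valid(lambda p, q: (not p or q) or (not q or p), 2))  # True
# Example: p and q is not.
print(is_valid(lambda p, q: p and q, 2))                       # False
# With 30 variables the same check would already need about 10^9 evaluations.
```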

Specific approaches work.

Fields that rely on logic have already answered this problem, for instance model checking, type theory, and non-statistical computational linguistics. The standard answer is to find specialized, efficiently computable logics.
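As a concrete illustration of such a specialized logic (mine, not the post's): type checking in the simply typed lambda calculus is decidable and cheap, in contrast to validity in full first-order logic. A minimal sketch:

```python
# A minimal sketch (illustration only): a type checker for the simply typed
# lambda calculus, a restricted logic where checking is decidable and fast.
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:            # a base type, e.g. A
    name: str

@dataclass(frozen=True)
class Arrow:           # a function type T1 -> T2
    arg: object
    res: object

@dataclass(frozen=True)
class Var:             # a variable
    name: str

@dataclass(frozen=True)
class Lam:             # a lambda abstraction with an annotated parameter type
    param: str
    param_type: object
    body: object

@dataclass(frozen=True)
class App:             # an application (fn arg)
    fn: object
    arg: object

def type_of(term, env=None):
    """Return the type of `term` in environment `env`, or raise TypeError.
    A single structural pass over the term: a specialized, efficiently
    computable logic rather than a general proof search."""
    env = env or {}
    if isinstance(term, Var):
        if term.name not in env:
            raise TypeError(f"unbound variable {term.name}")
        return env[term.name]
    if isinstance(term, Lam):
        body_type = type_of(term.body, {**env, term.param: term.param_type})
        return Arrow(term.param_type, body_type)
    if isinstance(term, App):
        fn_type = type_of(term.fn, env)
        arg_type = type_of(term.arg, env)
        if not isinstance(fn_type, Arrow) or fn_type.arg != arg_type:
            raise TypeError("ill-typed application")
        return fn_type.res
    raise TypeError("unknown term")

# Example: the identity function on base type A has type A -> A.
A = Base("A")
identity = Lam("x", A, Var("x"))
print(type_of(identity))                           # Arrow(arg=Base('A'), res=Base('A'))
print(type_of(App(identity, Var("y")), {"y": A}))  # Base('A')
```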

However, not every field can afford a full formalization; as humans studying the world, we certainly can't. Epistemies can be seen as detailed, informal, efficient logics. Like logics, they give us a particular way to study a particular thing. They don't provide mathematical guarantees, but they can still offer guarantees.

Science, as the study of the world by humans, faced this problem. Critical thinking wasn't enough; that is partly why we moved from philosophy to the sciences. Science's solution was to subdivide the world into several overlapping but independently experimentable parts.

Thus a science is defined not by its object of study alone, but by the combination of its object of study and its epistemy. This explains why three different articles studying logic can still be assigned to different sciences: philosophy, math, and theoretical computer science.

Implications.

Stopping redundancy.

Valuing critical thought led to a high amount of redundancy. Anyone can dump their ideas and have them judged by the community, provided a bit of critical thinking has been done. The core insight is that critical thinking should filter out most of the bad ideas.

However, if the subject of the idea relies even a little on a technical field without an epistemy, it becomes very easy to obscure a lack of consistency or thorough thinking. Community time is then spent finding obvious flaws in a piece of reasoning that the author could have found alone, had an appropriate epistemy existed.

Epistemic effort.

As such, before suggesting a new model, one should confront it with the standard epistemy of the fields the model belongs to. That epistemy can be as simple as some sanity checks, e.g.: "Does this model lead to contradiction X, Y, or Z? If it does, it's a bad one; otherwise it's a good one." If there is no standard epistemy in the given field, working on one is critical.
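As a toy sketch (my own illustration; the check names are hypothetical), such a minimal epistemy can be thought of as an explicit, shareable checklist that a proposed model has to clear before being offered to the community:

```python
# A toy sketch (illustration only; check names are hypothetical): an epistemy
# treated as an explicit checklist of sanity checks applied to a draft model.
# Real epistemies are informal; this only illustrates the idea above.

def no_free_lunch_check(model):
    # Hypothetical check: does the model claim benefits with no stated cost?
    return not (model["claimed_benefits"] and not model["stated_costs"])

def falsifiability_check(model):
    # Hypothetical check: does the model say what observation would refute it?
    return bool(model["refuting_observations"])

EPISTEMY = [no_free_lunch_check, falsifiability_check]

def failed_checks(model, checks=EPISTEMY):
    """Return the names of the checks the model fails; an empty list means the
    model clears the field's minimal sanity checks."""
    return [check.__name__ for check in checks if not check(model)]

draft_model = {
    "claimed_benefits": ["explains everything"],
    "stated_costs": [],
    "refuting_observations": [],
}
print(failed_checks(draft_model))  # ['no_free_lunch_check', 'falsifiability_check']
```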

I agree Raemon's post about using "epistemic effort" instead of "epistemic status". Following the previous line of thought, I think "epistemic status" should refer to an epistemic status proper (and the field relative to which it is defined) rather than to the epistemic effort. I see three kinds of epistemic status, which could be refined further:

1. Pre-epistemy: thoughts meant to gather comments; models probing whether modelling a particular subject is worthwhile or works well.

2. Epistemy building: defining the epistemy's meta-assumptions, its logic, and its facts (e.g., which sources are relevant in that field? which meta-facts are relevant?).

3. Post-epistemy: once the epistemy is defined, anything that benefits the science's episteme: facts, models, or questioning the epistemy itself (which might lead to forks, e.g., math and computer science).

Misc.

"Bayesian probabilities"

Initially, I thought that putting a probability in front of a belief had an objective meaning. I asked around for an epistemy and was told that it is only a way of expressing a subjective feeling more precisely.

However, when I see things like bet-to-update, there seems to be a confusion between the map and the territory. When I read "Bayesian rational agent", it sounds as though we are supposed to be Bayesian rational agents in the general case, which I think is an AGI-complete problem.

Bayesian framework

Bayes' rule and its derivatives define the "proof rules" part of an agent's epistemy. But axioms are still required: a world, a way to gather facts, and so on. It also relies on meta-assumptions for efficiency and relevance. Bayes' rule is not enough to define an epistemy.
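A minimal sketch of this point (my own illustration, with made-up numbers): Bayes' rule itself is just the arithmetic below, and everything doing the real epistemic work, namely the hypothesis space, the prior, and the likelihood model, has to be supplied from outside the rule.

```python
# Bayes' rule as plain arithmetic. The rule says nothing about where the
# prior or the likelihoods come from; those are the "axioms" it needs.

def bayes_update(prior, likelihood):
    """Posterior P(H | E) for each hypothesis H.

    prior:      dict hypothesis -> P(H)        (assumed, not derived)
    likelihood: dict hypothesis -> P(E | H)    (an assumed world model)
    """
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical example: is a coin fair, or biased towards heads? We observe one head.
prior = {"fair": 0.5, "biased": 0.5}             # why 0.5? the rule cannot say
likelihood_heads = {"fair": 0.5, "biased": 0.9}  # a modelling choice, not a theorem
print(bayes_update(prior, likelihood_heads))     # {'fair': 0.357..., 'biased': 0.642...}
```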

Therefore, I am strongly prejudiced against someone self-describing as a Bayesianist, not only because of the "I apply the same epistemy everywhere" approach, but also because it isn't a proper epistemy.

There are better ways to say "I know Bayes' rule and how it might apply to real-life situations" than "I'm a Bayesianist".

Maybe "bayesianist" solely means "I update my beliefs based on evidence", but I think "open-minded" is the right word for that.

Not even wrong

Showing not-even-wrong-ness is possible in sciences with an epistemy. (Well, it's possible to show it to people who know that epistemy; showing someone who doesn't know maths that their "informal" maths isn't even maths is hard.)

In other fields, we are subject to too much not-even-wrong-ness. I'd like to link some LW posts to exemplify my point, but I think it might violate the local BNBR policy.

Questions

Do you think defining a meta-epistemy (i.e., an epistemy for the rationalist epistemology) is important?

Do you think defining sub-epistemies is important?

If you don't, why?

agree: The direct transitivity is intentional. "To agree something" and "to agree with/to something" have different connotations.

12 comments


comment by denimalpaca · 2017-06-10T18:43:40.598Z · LW(p) · GW(p)

I think you wrote some interesting stuff. As for your question on a meta-epistemy, I think what you said about general approaches mostly holds in this case. Maybe there's a specific way to classify sub-epistemies, but it's probably better to have some general rules of thumb that weed out the definitely wrong candidates, and let other ideas get debated on. To save community time, if that's really a concern, a group could employ a back-off scheme where ideas that have solid rebuttals get less and less time in the debate space.

I don't know that defining sub-epistemies is so important. You give a distinction between math and theoretical computer science, but unless you're in those fields the distinction is near meaningless. So maybe it's more important to define these sub-epistemies as your relation to them increases.

Replies from: Onemorenickname
comment by Onemorenickname · 2017-06-11T01:06:25.933Z · LW(p) · GW(p)

I think you wrote some interesting stuff.

Thanks

As for your question on a meta-epistemy, I think what you said about general approaches mostly holds in this case. Maybe there's a specific way to classify sub-epistemies, but it's probably better to have some general rules of thumb that weed out the definitely wrong candidates, and let other ideas get debated on.

I agree. I don't expect a full-fledged meta-epistemy. Again, "That epistemy can be as simple as some sanity checks".

I don't know that defining sub-epistemies is so important. You give a distinction between math and theoretical computer science, but unless you're in those fields the distinction is near meaningless. So maybe it's more important to define these sub-epistemies as your relation to them increases.

I agree. I picked that distinction because I assumed many rationalists are in CS or have strong mathematical foundations. It might not have been the most precise example.

But there are two answers to your remark:

  • That people who aren't in math or theoretical CS, and thus can't distinguish them, should not post related ideas is not a bug, it's a feature. I have seen theoretical-CS and math aberrations on LW that made the community lose time.
  • That we shouldn't lose time defining epistemies for new ideas: I agree, and that's what the "pre-epistemy" phase, and the phase status more generally, are meant to convey. But once a group of related ideas gets enough traction (rationalism, utilitarianism), defining an epistemy becomes more and more important.
comment by whpearson · 2017-06-10T15:58:33.254Z · LW(p) · GW(p)

I agree Raemon's post

"I agree" has a broken link.

By meta-epistemy do you mean something that can explain how the current rationalist epistemology came about or do you want something that can explain how one should make it better in the future?

Understanding intelligence would give you the first. Or at least it would give you an explanation of the development of epistemes, of the sort evolution gives you for the development of creatures. Intelligence might be contingent on history in the same way evolutionary fitness is.

I'm not sure if the second is at all tractable.

Can you clarify what sub-epistemies are in this framework?

Replies from: Onemorenickname
comment by Onemorenickname · 2017-06-10T16:48:48.341Z · LW(p) · GW(p)

"I agree" has a broken link.

I don't know LW editing (first post). How do internal links work? Edit: simple HTML internal links; I had to add "#".

By meta-epistemy do you mean something that can explain how the current rationalist epistemology came about or do you want something that can explain how one should make it better in the future?

By meta-epistemy, I meant an epistemy that we should follow to define and evaluate new sub-epistemies.

Can you clarify what sub-epistemies are in this framework?

Basically, instead of just coming up with a new thought, try to see which more general field that thought belongs to, and whether there are basic associated rules that could help check its validity. It would be easier with some examples, but those could be taken negatively by the source material's authors. It's late where I am; if you want an example, I can produce an artificial one tomorrow.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-06-12T09:57:24.411Z · LW(p) · GW(p)

By meta-epistemy, I meant an epistemy that we should follow to define and evaluate new sub-epistemies.

You need an epistemology to decide an epistemology, and when you really understand that, you stop being a rationalist in the local sense.

comment by ChristianKl · 2017-06-14T21:15:29.699Z · LW(p) · GW(p)

Every paradigm of knowledge that's used in practice has explicit and implicit parts. Definitions can be useful, but they don't fully describe how the people who subscribe to those definitions reason. The reasoning processes of real humans are more complex than the models we build. Historians of science like Thomas Kuhn generally assume that most scientists don't have a good explicit model of what they are doing when they are doing science. The explicit model for the scientific process isn't required.

On LW we do have interesting discussions about what it means to be a Bayesian, with posts like http://lesswrong.com/lw/iat/what_bayesianism_taught_me/.

That post isn't simply about applying Bayes' rule to real-life situations. It doesn't provide a clear definition of Bayesianism, but it provides information about what Bayesianism is in the eyes of the author. I think what the author is talking about is also what the larger LW community means when they speak of being a Bayesian.

You can argue that the informal math in the post isn't even real math, but that's beside the point. The author thinks differently because of his engagement with Bayesianism.

It's easy to say that someone is not even wrong when you don't understand their position.

Replies from: Onemorenickname
comment by Onemorenickname · 2017-06-25T15:13:54.280Z · LW(p) · GW(p)

Historians of science like Thomas Kuhn generally assume that most scientists don't have a good explicit model of what they are doing when they are doing science. The explicit model for the scientific process isn't required.

History is indeed descriptive, while my article is prescriptive:

  • Description. The explicit model isn't required; science worked without it, up to a point. And even that has to be nuanced: pre-experimental science and pre-Popper science are very different from current sciences. The foundational crisis wasn't purely philosophical.

  • Prescription. Putting more of the epistemology into the explicit part is better. People abuse the implicit part: the replication crisis in the social sciences, epistemic failures in bioinformatics, etc.

About the second part of your post, I can't find a clear thesis. I'll try to answer you nevertheless.

It's easy to say that someone is not even wrong when you don't understand their position.

The "not even wrong" section is not the section "bayesianism".

That post isn't simply about applying Bayes' rule to real-life situations. It doesn't provide a clear definition of Bayesianism, but it provides information about what Bayesianism is in the eyes of the author.

I already read the post you linked, and I don't see what I wrote that contradicts what you are saying. The author states lessons learnt from Bayesianism, and is interested in lessons others learnt.

Funny, in the post you linked you can read things like:

Many bits of "common sense" rationality can be precisely stated and easily proved within the austere framework of Bayesian probability.

Not being well-defined makes Bayesianism easier to use in various different contexts, and easier to get wrong. Those bits aren't proved in an "austere framework", but with strong axioms.

Replies from: ChristianKl
comment by ChristianKl · 2017-06-25T21:45:12.028Z · LW(p) · GW(p)

And even that has to be nuanced: pre-experimental science and pre-Popper science are very different from current sciences.

Most scientists haven't read Popper, and the people in the history of science who analyze what scientists actually do don't find that scientists follow Popper's maxims.

I agree that for psychologists and many people in biology there isn't enough explicit attention paid to epistemology. On the other hand, it's still important to be aware that you will never get 100% explicit.

Given LessWrong base rates, I'm also not sure whether it makes sense to encourage this crowd to focus more on being explicit about the meta-level.

The "not even wrong" section is not the section "bayesianism".

It seems I haven't fully understood your criticism of what you perceive to be bayesianism.

Replies from: Onemorenickname
comment by Onemorenickname · 2017-06-26T01:12:06.867Z · LW(p) · GW(p)

Most scientists haven't read Popper, and the people in the history of science who analyze what scientists actually do don't find that scientists follow Popper's maxims.

As far as I know, this is still a subject of debate; cf. https://en.wikipedia.org/wiki/Falsifiability

I agree that for psychologists and many people in biology there isn't enough explicit attention paid to epistemology. On the other hand, it's still important to be aware that you will never get 100% explicit.

I don't see what your criterion is for agreeing with my point for a given field. Also, my point isn't about being "100% explicit". My point is that if a field of study is interesting enough, defining an epistemy becomes a primary task; otherwise, too much time will be wasted. Similarly, in some existing fields, an epistemy that is too implicit or loosely specified leads to noise production at best, and to counter-productive efforts at worst.

Considering the opportunity cost of having very smart people working on useless things, this is bad.

It seems I haven't fully understood your criticism of what you perceive to be bayesianism.

Indeed, I think I was too brief; it could have been an article in itself. I might write one if you are interested. If you aren't, the short version is: Bayesianism as the core of an approach to the world is too loosely specified. It isn't a complete epistemy, nor even a complete logic.

In the article you sent, the author tried to find uses of Bayesianism. However, substituting religion for Bayesianism leads to the same epistemic problems: "I did this, that, and the other because of religion. And religion even proves some bits of rationalist common sense!"

Replies from: ChristianKl
comment by ChristianKl · 2017-06-26T12:57:31.368Z · LW(p) · GW(p)

There's little attempt to falsify most core positions. Most core positions aren't falsifiable. Physicists generally don't reject their belief in string theory because a particular experiment didn't produce the results they hoped for.

In Evidence-Based Medicine, nobody cares about falsifying the core tenets of Evidence-Based Medicine. The paper that proposes the term Evidence-Based Medicine doesn't discuss the Rand study.

The project of writing the DSM-V didn't include running experiments to try to falsify the DSM.

The Wikipedia article towards which you linked doesn't point to a single instance where someone tried to falsify Popper's theory unsuccessfully.

It isn't a complete epistemy, nor even a complete logic.

What do you mean with "complete" if you don't mean "100% explicit"?

comment by ChristianKl · 2017-06-14T20:33:26.761Z · LW(p) · GW(p)

I'd like to link some LW posts to exemplify my point, but I think it might violate the local BNBR policy.

I don't think we have a BNBR policy that would say that you shouldn't link to examples that you criticize. As long as you are arguing in good faith and don't strawman, everything is fine.

comment by TheAncientGeek · 2017-06-13T07:54:22.759Z · LW(p) · GW(p)

Valuing critical thought led to a high amount of redundancy.

On LW specifically?

Anyone can dump their ideas and have them judged by the community, provided a bit of critical thinking has been done. The core insight is that critical thinking should filter out most of the bad ideas.

You are phrasing your objection in terms of critical thinking somehow being bad, but the problem seems to be expecting other people to do all the criticism...?