My Skepticism

post by G0W51 · 2015-01-31T02:00:52.864Z · LW · GW · Legacy · 100 comments

Standard methods of inferring knowledge about the world are based on premises that I don’t know the justifications for. Any justification (or a link to an article or book with one) for why these premises are true or should be assumed to be true would be appreciated.


Here are the premises:

1. One has knowledge of one's own percepts.
2. One's reasoning is trustworthy.
3. One's memories are true.
Edit: Why was this downvoted? Should it have been put in the weekly open thread instead?

comment by gjm · 2015-01-31T02:28:56.504Z · LW(p) · GW(p)

With the (important) proviso that "knowledge", "trustworthy", and "are true" need to be qualified with something like "approximately, kinda-sorta, most of the time", I think these premises should be assumed as working hypotheses on the grounds that if they are badly wrong then we're completely screwed anyway, we have no hope of engaging in any sort of rational thought, and all bets are off.

Having adopted those working hypotheses, we look at the available evidence and it seems like it fits them pretty well (note: this isn't a trivial consequence of having taken those things as working hypotheses; e.g., if we'd assumed that our perception, reasoning and memory are perfectly accurate then we'd have arrived quickly at a contradiction). Those propositions now play two roles in our thinking: as underlying working assumptions that are mostly assumed implicitly, and as conclusions based on thinking about things as clearly as we know how.

There's some circularity here, but how could there not be? One can always keep asking "why?", like a small child, and sooner or later one must either refuse to answer or repeat oneself.

Somewhat-relevant old LW post: The lens that sees its flaws. My memory was of it being more relevant than it appears on rereading; I wonder whether there's another post from the "Sequences" that I was thinking of instead.

Replies from: G0W51
comment by G0W51 · 2015-01-31T03:08:20.582Z · LW(p) · GW(p)

There's some circularity here, but how could there not be? One can always keep asking "why?", like a small child, and sooner or later one must either refuse to answer or repeat oneself.

I see that circularity seems inevitable in order to believe anything, but would that really make circularity okay?

Somewhat-relevant old LW post: The lens that sees its flaws. My memory was of it being more relevant than it appears on rereading; I wonder whether there's another post from the "Sequences" that I was thinking of instead.

I recall a post that Yudkowsky made that seems like what you're talking about, but I can't find it. I think it was in Highly Advanced Epistemology 101 for Beginners.

Replies from: gjm, TheAncientGeek
comment by gjm · 2015-01-31T09:51:55.367Z · LW(p) · GW(p)

Well, what does "OK" mean?

Suppose, e.g., that something like the following is true: those assumptions are the minimal assumptions you need to make to get rational thinking off the ground, and making them does suffice to support everything else you need to do, and they are internally consistent in the sense that when you make those assumptions and investigate further you don't turn up any reason to think those assumptions were wrong.

What more would it take to make assuming them "OK"? If "OK" means that they're provable from first principles without any assumptions at all and suffice to ground rational empirical investigation of the world then no, they're not OK -- but there's good reason to think nothing else is or could be.

If what you're aiming for is a rational empiricism with minimal assumptions then I think something like this is optimal. I'm happy calling that "OK".

Replies from: G0W51
comment by G0W51 · 2015-01-31T12:57:46.186Z · LW(p) · GW(p)

By okay, I mean an at least somewhat accurate method of determining reality (i.e. the generator of percepts). Given that I don't know how to tell what percepts I've perceived, I don't see how standard philosophy reflects reality.

Replies from: gjm
comment by gjm · 2015-01-31T13:08:23.826Z · LW(p) · GW(p)

It sure seems (to me) as if those assumptions give an at least somewhat accurate method of determining reality. Are you saying you don't think they do -- or is your actual objection that we don't know that with whatever degree of certainty you're looking for?

Replies from: G0W51
comment by G0W51 · 2015-01-31T13:18:46.735Z · LW(p) · GW(p)

I don't see how we have any evidence at all that those assumptions give at least a somewhat accurate method of determining reality. The only way I know of justifying those axioms is by using those axioms.

Replies from: gjm
comment by gjm · 2015-01-31T18:30:24.175Z · LW(p) · GW(p)

The other ways would be (1) because they seem obviously true, (2) because we don't actually have the option of not adopting them, and (3) because in practice it turns out that assuming them gives what seem like good results. #1 and #3 are pretty much the usual reasons for adopting any given set of axioms. #2 also seems very compelling. Again, what further OK-ness could it possibly be reasonable to look for?

Replies from: G0W51
comment by G0W51 · 2015-01-31T21:58:13.361Z · LW(p) · GW(p)

(1) because they seem obviously true

I don't see how this is evidence.

(2) because we don't actually have the option of not adopting them

Why can't we? Can't we simply have no beliefs at all?

(3) because in practice it turns out that assuming them gives what seem like good results.

What makes you think it has good results? Don't you need to accept the axioms in order to show that they have good results? E.g. you see that you follow the axioms and have a good life, but concluding this assumes that you know your percepts, that your memory of using the axioms and being fine is true, and that your current reasoning about this being a good reason to believe these axioms is valid.

Replies from: gjm
comment by gjm · 2015-01-31T22:59:32.802Z · LW(p) · GW(p)

Can't we simply have no beliefs at all?

I dare say you can in principle have no conscious beliefs at all. Presumably that's roughly the situation of an ant, for instance. But your actions will still embody various things we might as well call beliefs (the term "alief" is commonly used around here for a similar idea) and you will do better if those match the world better. I'm betting that I can do this better by actually having beliefs, because then I get to use this nice big brain evolution has given me.

Don't you need to accept the axioms in order to [...]

Yes. (I've said this from the outset.) Note that this doesn't make the evidence for them disappear because it's possible (in principle) for the evidence to point the other way -- as we can see from the closely parallel case where instead we assume as a working hypothesis that our perception and reasoning and memory are perfect, engage in scientific investigation of them, and find lots of evidence that they aren't perfect after all.

It seems that you want a set of axioms from which we can derive everything -- but then you want justification for adopting those axioms (so they aren't really serving as axioms after all), and "they seem obvious" won't do for you (even though that is pretty much the standard ground for adopting any axioms) and neither will the other considerations mentioned in this discussion. So, I repeat: What possibly conceivable outcome from this discussion would count as "OK" for you? It seems to me that you're asking for something provably impossible.

(That is: If even axioms that seem obvious, can't be avoided, and appear to work out well in practice aren't good enough for you to treat them as axioms, then it looks like your strategy is to keep asking "why?" in response to any proposed set of axioms. But in that case, as you keep doing this, one of two things must necessarily happen: either you will return to axioms already considered, in which case you have blatant circular reasoning of the sort you are already objecting to, only worse; or else the axioms under consideration will get unboundedly longer, and eventually you'll get ones too complicated to fit into your brain or indeed the collective understanding of the human race. In neither case are you going to be satisfied.)

Replies from: G0W51
comment by G0W51 · 2015-02-01T02:12:34.377Z · LW(p) · GW(p)

I dare say you can in principle have no conscious beliefs at all. Presumably that's roughly the situation of an ant, for instance. But your actions will still embody various things we might as well call beliefs (the term "alief" is commonly used around here for a similar idea) and you will do better if those match the world better. I'm betting that I can do this better by actually having beliefs, because then I get to use this nice big brain evolution has given me.

I don't know why you think ants presumably have no conscious beliefs, but I suppose that's irrelevant. Anyways, I don't disagree with what you said, but I don't see how it entails that one is incapable of having no beliefs. You just suggest that having beliefs is beneficial.

What possibly conceivable outcome from this discussion would count as "OK" for you? It seems to me that you're asking for something provably impossible.

(That is: If even axioms that seem obvious, can't be avoided, and appear to work out well in practice aren't good enough for you to treat them as axioms, then it looks like your strategy is to keep asking "why?" in response to any proposed set of axioms. But in that case, as you keep doing this, one of two things must necessarily happen: either you will return to axioms already considered, in which case you have blatant circular reasoning of the sort you are already objecting to, only worse; or else the axioms under consideration will get unboundedly longer, and eventually you'll get ones too complicated to fit into your brain or indeed the collective understanding of the human race. In neither case are you going to be satisfied.)

"Okay," as I have said before, means to have a reasonable chance of being true. Anyways, I see your point; I really do seem to be asking for an impossible answer.

comment by TheAncientGeek · 2015-01-31T21:06:47.363Z · LW(p) · GW(p)

would this make circularity OK?

Coherentist epistemology can be seen as an attempt to make circularity OK.

comment by Richard_Kennaway · 2015-02-01T09:55:17.601Z · LW(p) · GW(p)

With such skepticism, how are you even able to write anything, or understand the replies? Or do anything at all?

Replies from: G0W51, TheAncientGeek
comment by G0W51 · 2015-02-01T17:48:27.267Z · LW(p) · GW(p)

Because I act as if I'm not skeptical. (Of course, I can't actually know that the last sentence is true, or this statement, or this, and so on.)

comment by TheAncientGeek · 2015-02-01T15:44:31.433Z · LW(p) · GW(p)

That's one of the standard responses to scepticism: tu quoque, or performative contradiction.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-01T19:44:43.910Z · LW(p) · GW(p)

I wasn't intending the question rhetorically. If G0W51 is so concerned with universal scepticism, how does he manage to act as if he weren't, as he observes he does?

comment by knb · 2015-01-31T07:24:46.124Z · LW(p) · GW(p)

I think you misspelled "skepticism" in the title.

Replies from: G0W51
comment by G0W51 · 2015-01-31T12:53:17.890Z · LW(p) · GW(p)

Thanks. Edited. I proofread the article multiple times, but I suppose I forgot about the title.

comment by Unknowns · 2015-02-01T18:35:00.117Z · LW(p) · GW(p)

Also, your argument (including what you have said in the comments) is something like this:

Every argument is based on premises. There may be additional arguments for the premises, but those arguments will themselves have premises. Therefore either 1) you have an infinite regress of premises; or 2) you have premises that you do not have arguments for; or 3) your arguments are circular.

Assuming (as you seem to) that we do not have an infinite regress of premises, that means either that some premises do not have arguments for them, or that the arguments are circular. Either way, you say, that means we have unjustified beliefs which are not known to be true.

This may be true, given a particular arbitrary definition of knowledge that there is no reason for anyone to accept. But if knowledge is defined in such a way as to be a contradiction, who would want it anyway? It would be like wanting a round square.

comment by Ishaan · 2015-02-01T04:41:26.417Z · LW(p) · GW(p)

Memories can be collapsed under percepts.

In answer to your broader question - yup: you've hit upon epistemic nihilism, and there is no real way around it. Reason is Dead, and we have killed it. Despair.

...Or, just shrug and decide that you are probably right but you can't prove it. There's plenty of academic philosophy addressing this (see: the Problem of the Criterion) and Less Wrong covers it fairly extensively as well.

http://lesswrong.com/lw/t9/no_license_to_be_human/ and related posts.

http://lesswrong.com/lw/iza/no_universally_compelling_arguments_in_math_or/

Rather than going on a reading binge, I recommend just continuing to mull it over until it clicks into place, because, similar to the whole "dissolve free will" thing, it feels clear in hindsight yet is not easy to explain, or to understand from explanations others provide.

I'll give it a shot anyway: the essential point is that ultimately you are a brain, and you're gonna do things the way your brain is designed to do them. Assuming you've satisfactorily resolved the whole moral nihilism thing (even though there is no divine justification for morality, we can still talk about what is moral and what isn't, because morality is inside us), resolving epistemic nihilism follows an analogous chain of thought: there is not and cannot be any justification for human methods of inference [morality], but it still is our method of inference [morality] and we're gonna use it regardless.

Replies from: Fivehundred, TheAncientGeek
comment by Fivehundred · 2015-02-01T05:23:01.205Z · LW(p) · GW(p)

Why does everyone refer to it as "epistemic nihilism"? Philosophical skepticism ('global' skepticism) was always the term I read and used.

Replies from: gjm, Ishaan, Richard_Kennaway
comment by gjm · 2015-02-01T09:33:45.635Z · LW(p) · GW(p)

Everyone? In this discussion right here, the only occurrences of the word "nihilism" are in Ishaan's comment and your reply?

Replies from: Fivehundred
comment by Fivehundred · 2015-02-01T11:21:17.976Z · LW(p) · GW(p)

In general. I hear the word used but I haven't ever encountered it in literature (which isn't very surprising since I haven't read much literature). Seriously, Google 'epistemic nihilism' right now and all you get are some cursory references and blogs.

Replies from: gjm
comment by gjm · 2015-02-01T12:13:29.921Z · LW(p) · GW(p)

Maybe I wasn't clear: I'm questioning whether the premise of your question

Why does everyone refer to it as "epistemic nihilism"?

is correct. I don't think everyone does refer to it that way, whether "everyone" means "everyone globally", "everyone on LW", "everyone in the comments to this post", or in fact anything beyond "one or two people who are making terminology up on the fly or who happen to want to draw a parallel with some other kind of nihilism".

Replies from: Fivehundred
comment by Fivehundred · 2015-02-02T05:01:31.761Z · LW(p) · GW(p)

I've heard it from various people on the internet. Perhaps I don't have a large sample size, but it seems to consistently pop up when global skepticism is discussed.

comment by Ishaan · 2015-02-01T15:33:59.395Z · LW(p) · GW(p)

At first I just made it up, feeling that it was an appropriate name due to the many parallels with moral nihilism; then I googled it, and the description that came up roughly matched what I was talking about, so I just went on using it after that. I'm guessing everyone goes through roughly that process. Normally I add a little disclaimer about not being sure whether it is the correct term, but I didn't this time.

I didn't know the term "philosophical skepticism", thanks for giving me the correct one. In philosophy I feel there is a general problem where figuring out the names that other people (who separately came up with your concept before you did) use to describe it ends up involving more work and reading than just re-doing everything... and at the end of the day, others who read your text (as if anyone is reading that closely!) won't understand what you meant unless they too go back and read the citations. So I think it's often better to just throw aside the clutter and start fresh, doing your best with plain English, and it's okay if you redundantly rederive things (many strongly disagree with me here).

I feel that the definition of "epistemic nihilism" is self-evident as long as one knows the words "epistemic" and "nihilism". The term "skepticism" implies that one is asking "how do you know?", whereas nihilism implies that one is claiming there is no fundamental justification for the chosen principles. If indeed I'm describing the same thing, I kinda think "epistemic nihilism" is the more descriptive term from a "plain English" perspective overall.

(Also, re: everyone - I haven't actually seen that term used in the wild by people who are not me unless explicitly googling it. Maybe your impression results from reading my comments somewhere else?)

Replies from: Fivehundred
comment by Fivehundred · 2015-02-02T05:05:11.717Z · LW(p) · GW(p)

I didn't know the term "philosophical skepticism", thanks for giving me the correct one.

'Global skepticism' is really the correct one. 'Philosophical skepticism' is just a broad term for the doubting of normative justifications or knowledge.

(Also, re: everyone - I haven't actually seen that term used in the wild by people who are not me unless explicitly googling it. Maybe your impression results from reading my comments somewhere else?)

I doubt it very much. But some of the comments gave me the impression that it is in literature somewhere.

comment by Richard_Kennaway · 2015-02-01T13:51:06.318Z · LW(p) · GW(p)

"Epistemic nihilism" is not a name, but a description. Philosophical skepticism covers a range of things, of which this is one.

comment by TheAncientGeek · 2015-02-01T15:58:00.195Z · LW(p) · GW(p)

However, our concepts of truth and goodness allow us to pose the questions: I'm persuaded of it, but is it really true? I approve of it, but is it really good?

The simple version of internalising truth and goodness by rubber-stamping prevailing attitudes is not satisfactory; the complex version... is complex.

Replies from: Ishaan
comment by Ishaan · 2015-02-01T16:30:13.666Z · LW(p) · GW(p)

I'm persuaded of it, but is it really true? I approve of it, but is it really good?

Yes, that is precisely the relevant question - and my answer is that there's no non-circular justification for a mind's thoughts and preferences (moral or otherwise), and so for both practical and theoretical purposes we must operate on a sort of "faith" that there is some validity to at least some of our faculties, admitting that it is ultimately just simple faith that lies at the bottom of it all. (Not faith as in "believing without evidence", but faith as in "there shall never be any further evidence or justification for this, yet I find that I am persuaded of it / do believe it is really good".)

The simple version of internalising truth and goodness by rubber-stamping prevailing attitudes is not satisfactory; the complex version... is complex.

It's not really that bad or complex - all you have to believe in is the concept of justification and evidence itself. To attempt to go deeper is to ask justification for the concept of justification, and evidence for what is or is not evidence, and that's nonsense.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-02T15:46:16.325Z · LW(p) · GW(p)

To attempt to go deeper is to ask justification for the concept of justification, and evidence for what is or is not evidence, and that's nonsense.

Ostensibly, those are meaningful questions. It would be convenient if any question you couldn't answer were nonsense, but...

Replies from: Ishaan
comment by Ishaan · 2015-02-02T18:28:41.059Z · LW(p) · GW(p)

Agreed, I don't think the question is meaningless. I do, however, think that it's "provably" unanswerable (assuming you provisionally accept all the premises that go into proving things)

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-02T18:55:48.820Z · LW(p) · GW(p)

But that isn't the same thing at all. If you have a foundationalist epistemic structure resting on provably unprovable foundations, you are in big trouble.

comment by LawrenceC (LawChan) · 2015-02-01T19:23:27.738Z · LW(p) · GW(p)

gjm has mentioned most of what I think is relevant to the discussion. However, see also the discussion on Boltzmann brains.

comment by Unknowns · 2015-02-01T18:23:52.760Z · LW(p) · GW(p)

Obviously, if you say you are absolutely certain that everything we think is either false or unknown, including your own certainty of this, no one will ever be able to "prove" anything to you, since you just said you would not admit any premise that might be used in such a proof.

But in the first place, such a certainty is not useful for living, and you do not use it, but rather assume that many things are true, and in the second place, this is not really relevant to Less Wrong, since someone with this certainty already supposes that he knows that he can never be less wrong, and therefore will not try.

Replies from: G0W51
comment by G0W51 · 2015-02-03T01:47:49.028Z · LW(p) · GW(p)

I never said I was absolutely certain that everything we think is either false or unknown. I'm saying that I have no way of knowing whether it is false or unknown -- I am absolutely uncertain.

comment by DanielLC · 2015-01-31T03:37:09.358Z · LW(p) · GW(p)

Assume they're approximately true because if you don't you won't be able to function. If you notice flaws, by all means fix them, but you're not going to be able to prove modus ponens without using modus ponens.

Replies from: G0W51
comment by G0W51 · 2015-01-31T13:01:59.729Z · LW(p) · GW(p)

Given that I don't know of a justification for the premises, why should I believe that they are needed to function?

A side note: One can prove modus ponens with truth tables.
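For concreteness, here is a sketch of the standard truth-table check (my illustration, not part of the original exchange), which shows that (p ∧ (p → q)) → q comes out true in every row:

```latex
% Truth table for modus ponens: the compound (p \land (p \to q)) \to q
% evaluates to T in all four rows, i.e. it is a tautology.
\[
\begin{array}{cc|c|c}
p & q & p \to q & (p \land (p \to q)) \to q \\
\hline
T & T & T & T \\
T & F & F & T \\
F & T & T & T \\
F & F & T & T \\
\end{array}
\]
```

Since the last column is true in every row, the inference never takes you from true premises to a false conclusion -- though, as the reply below points out, reading the table as a proof already presupposes the truth-table method itself.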

Replies from: DanielLC
comment by DanielLC · 2015-01-31T17:56:29.342Z · LW(p) · GW(p)

If you assume truth tables, you can prove modus ponens. If you assume modus ponens, you can prove truth tables. But you can't prove anything from nothing.

If you're looking for an argument that lets you get somewhere without assumptions, you won't find anything. There is no argument so convincing you can convince a rock.

Replies from: G0W51
comment by G0W51 · 2015-01-31T18:05:26.452Z · LW(p) · GW(p)

I suppose the question then is why we should make the necessary assumptions.

Replies from: dxu
comment by dxu · 2015-01-31T19:24:32.201Z · LW(p) · GW(p)

There is no "why". If there was, then the assumptions wouldn't be called "assumptions". If you want to have a basis for believing anything, you have start from your foundations and build up. If those foundations are supported, then by definition they are not foundations, and the "real" foundations must necessarily be further down the chain. Your only choice is to pick suitable axioms on which to base your epistemology or to become trapped in a cycle of infinite regression, moving further and further down the chain of implications to try and find where it stops, which in practice means you'll sit there and think forever, becoming like unto a rock.

The chain won't stop. Not unless you artificially terminate it.

Replies from: TheAncientGeek, G0W51
comment by TheAncientGeek · 2015-01-31T21:09:03.524Z · LW(p) · GW(p)

So it's Ok to use non rationalist assumptions?

Replies from: dxu
comment by dxu · 2015-01-31T23:02:40.820Z · LW(p) · GW(p)

I haven't the slightest idea what you mean by "non rationalist" (or "Ok" for that matter), but I'm going to tentatively go with "yes", if we're taking "non rationalist" to mean "not in accordance with the approach generally advocated on LessWrong and related blogs" and "Ok" to mean "technically allowed". If you mean something different by "non rationalist" you're going to have to specify it, and if by "Ok" you mean "advisable to do so in everyday life", then heck no. All in all, I'm not really sure what your point is, here.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-01T11:46:49.038Z · LW(p) · GW(p)

Your guesses are about right.

The significance is that if rationalists respond to sceptical challenges by assuming what they can't prove, then they are in the same position as reformed epistemology (RE). That is, they can't say why their axioms are rational, and can't say why theists are irrational, because theists who follow RE are likewise taking the existence of God as something they assume because they can't prove it: rationalism becomes a label with little meaning.

Replies from: dxu
comment by dxu · 2015-02-01T17:44:44.086Z · LW(p) · GW(p)

So you're saying that taking a few background axioms that are pretty much required to reason... is equivalent to theism.

I think you may benefit from reading The Fallacy of Grey, as well as The Relativity of Wrong.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-01T19:49:15.988Z · LW(p) · GW(p)

The axioms of rationality are required to reason towards positive conclusions about a real world. They are not a minimal set, because sceptics have a smaller set, which can do less.

Replies from: dxu
comment by dxu · 2015-02-01T19:50:30.136Z · LW(p) · GW(p)

which can do less.

Most people probably aren't satisfied with the sort of "less" that universal skepticism can do.

Also, some axioms are required to reason, period. Let's say I refuse to take ~(A ∧ ~A) as an axiom. What now? (And don't bring up paraconsistent logic, please--it's silly.)

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-01T19:53:48.542Z · LW(p) · GW(p)

Rational axioms do less than theistic axioms, and a lot of people aren't happy with that "less" either.

Replies from: dxu
comment by dxu · 2015-02-01T19:55:10.752Z · LW(p) · GW(p)

Rational axioms do less than theistic axioms

Not in terms of reasoning "towards positive conclusions about a real world", they don't.

a lot of people aren't happy with that "less" either.

Most of whom are theists trying to advance an agenda. "Rational" axioms, on the other hand, are required to have an agenda.

Replies from: TheAncientGeek, TheAncientGeek
comment by TheAncientGeek · 2015-02-01T20:18:14.486Z · LW(p) · GW(p)

From the sceptics' perspective, rationalists are advancing the agenda that there is a knowable external world.

comment by TheAncientGeek · 2015-02-01T20:02:54.373Z · LW(p) · GW(p)

No. They do less in terms of the soul and things like that, which theists care about, and rationalists don't.

Meanwhile, sceptics don't care about the external world.

So everything comes down to epistemology, and epistemology comes down to values. Is that a problem?

Replies from: dxu
comment by dxu · 2015-02-01T20:07:28.498Z · LW(p) · GW(p)

Meanwhile, sceptics don't care about the external world.

And yet strangely enough, I have yet to see a self-proclaimed "skeptic" die of starvation due to not eating.

EDIT: Actually, now that I think about it, this could very easily be a selection effect. We observe no minds that behave this way, not because such minds can't exist, but because such minds very quickly cease to exist.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-01T20:12:40.438Z · LW(p) · GW(p)

They have answers to that objection, just as rationalists have answers to theists' objections.

comment by G0W51 · 2015-01-31T22:02:52.927Z · LW(p) · GW(p)

If there is no why, is any set of axioms better than any other? Could one be just as justified believing that, say, what actually happened is the opposite of what one's memories say?

Replies from: dxu
comment by dxu · 2015-01-31T23:03:17.328Z · LW(p) · GW(p)

(Note: I'm going to address your questions in reverse order, as the second one is easier to answer by far. I'll go into more detail on why the first one is so hard to answer below.)

Could one be just as justified believing that, say, what actually happened is the opposite of what one's memories say?

Certainly, if you decide to ignore probability theory, Occam's Razor, and a whole host of other things. It's not advisable, but it's possible if you choose your axioms that way. If you decide to live your life under such an assumption, be sure to tell me how it turns out.

If there is no why, is any set of axioms better than any other?

At this point, I'd say you're maybe a bit confused about the meaning of the word "better". For something to be "better" requires a criterion by which to judge that something; you can't just use the word "better" in a vacuum and expect the other person to be able to immediately answer you. In most contexts, this isn't a problem because both participants generally understand and have a single accepted definition of "better", but since you're advocating throwing out pretty much everything, you're going to need to define (or better yet, Taboo) "better" before I can answer your main question about a certain set of axioms being better than any other.

Replies from: G0W51
comment by G0W51 · 2015-02-01T02:17:25.583Z · LW(p) · GW(p)

Certainly, if you decide to ignore probability theory, Occam's Razor, and a whole host of other things. It's not advisable, but it's possible if you choose your axioms that way. If you decide to live your life under such an assumption, be sure to tell me how it turns out.

Why would one need to ignore probability theory and Occam's Razor? Believing that the world is stagnant, that the memories one is currently thinking of are false, and that the memory of having more memories is false seems to be a simple explanation of the universe.

At this point, I'd say you're maybe a bit confused about the meaning of the word "better". For something to be "better" requires a criterion by which to judge that something; you can't just use the word "better" in a vacuum and expect the other person to be able to immediately answer you. In most contexts, this isn't a problem because both participants generally understand and have a single accepted definition of "better", but since you're advocating throwing out pretty much everything, you're going to need to define (or better yet, Taboo) "better" before I can answer your main question about a certain set of axioms being better than any other.

By better, I mean "more likely to result in true beliefs." Or if you want to taboo true, "more likely to result in beliefs that accurately predict percepts."

Replies from: ike, dxu
comment by ike · 2015-02-01T02:56:05.773Z · LW(p) · GW(p)

Or if you want to taboo true, "more likely to result in beliefs that accurately predict percepts."

If I were to point out that my memories say that making some assumptions tend to lead to better perception predictions (and presumably yours also), would you accept that?

Are you actually proposing a new paradigm that you think results in systematically "better" (using your definition) beliefs? Or are you just saying that you don't see that the paradigm of accepting these assumptions is better at a glance, and would like a more rigorous take on it? (Either is fine, I'd just respond differently depending on what you're actually saying.)

Replies from: G0W51
comment by G0W51 · 2015-02-01T04:20:33.501Z · LW(p) · GW(p)

If I were to point out that my memories say that making some assumptions tend to lead to better perception predictions (and presumably yours also), would you accept that?

I'd only believe it if you gave evidence to support it.

Are you actually proposing a new paradigm that you think results in systematically "better" (using your definition) beliefs? Or are you just saying that you don't see that the paradigm of accepting these assumptions is better at a glance, and would like a more rigorous take on it? (Either is fine, I'd just respond differently depending on what you're actually saying.)

The latter. What gave you the suggestion that I was proposing an improved paradigm?

Replies from: ike
comment by ike · 2015-02-01T04:30:21.650Z · LW(p) · GW(p)

What gave you the suggestion that I was proposing an improved paradigm?

You seemed to think that not taking some assumptions could lead to better beliefs, and it wasn't clear to me how strong your "could" was.

You seem to accept induction, so I'll refer you to http://lesswrong.com/lw/gyf/you_only_need_faith_in_two_things/

Replies from: G0W51
comment by G0W51 · 2015-02-01T17:38:02.581Z · LW(p) · GW(p)

The linked article stated that one only needs to believe that induction has a non-super-exponentially-small chance of working and that a single large ordinal is well-ordered, but it didn't really justify this. It also said nothing about why belief in one's percepts and reasoning skills is needed.

comment by dxu · 2015-02-01T06:24:13.823Z · LW(p) · GW(p)

Believing that the world is stagnant, that the memories one is currently thinking of are false, and that the memory of having more memories is false seems to be a simple explanation of the universe.

Not in the sense that I have in mind.

"more likely to result in true beliefs."

Unfortunately, this still doesn't solve the problem. You're trying to doubt everything, even logic itself. What makes you think the concept of "truth" is even meaningful?

comment by is4junk · 2015-01-31T02:41:19.845Z · LW(p) · GW(p)

I would agree that if you can't trust your reasoning then you are in a bad spot. Even Descartes' 'Cogito ergo sum' doesn't get you anywhere if you think the 'therefore' is using reasoning. Even that small assumption won't get you too far, but I would start with him.

Replies from: None, G0W51
comment by [deleted] · 2015-01-31T15:54:53.066Z · LW(p) · GW(p)

A better translation (maybe -- I don't speak French) would be "I think, I am". Or so said my philosophy teacher.

Replies from: Kindly
comment by Kindly · 2015-01-31T17:58:42.286Z · LW(p) · GW(p)

That seems false if taken at face value: "ergo" means "therefore", ergo, "Cogito ergo sum" means "I think, therefore I am". Also, I have no clue how to parse "I think, I am". Does it mean "I think and I am"?

There's probably a story behind that translation and how it corresponds to Descartes's other beliefs, but I don't think that "I think, I am" makes sense without that story.

(A side note: it's Latin, not French. I originally added here that Descartes wrote in Latin, but apparently he originally made the statement in French as "Je pense donc je suis.")

comment by G0W51 · 2015-01-31T03:09:35.070Z · LW(p) · GW(p)

The problem with that is that I don't see how "Cogito ergo sum" is reasonable.

Replies from: is4junk, g_pepper
comment by is4junk · 2015-01-31T05:24:13.607Z · LW(p) · GW(p)

I don't think there is a way out. Basically, you have to continue to add some beliefs in order to get somewhere interesting. For instance, with just the belief that you can reason (to some extent), you get a proof of your own existence, but you still don't have any proof that others exist.

Like axioms in math: you have to start with enough of them to get anywhere, but once you have a reasonable set you can prove many things.

comment by g_pepper · 2015-01-31T03:46:41.312Z · LW(p) · GW(p)

You obviously could not be thinking if you do not exist, right?

Replies from: G0W51
comment by G0W51 · 2015-01-31T13:04:08.900Z · LW(p) · GW(p)

I don't know, as I don't truly know if I am thinking. Even if you proved I was thinking, I still don't see why I would believe I existed, as I don't know why I should trust my reasoning.

Replies from: g_pepper
comment by g_pepper · 2015-01-31T15:57:03.077Z · LW(p) · GW(p)

You may not know you are thinking, but you think you are thinking. Therefore, you are thinking.

Replies from: G0W51
comment by G0W51 · 2015-01-31T18:09:57.117Z · LW(p) · GW(p)

I don't actually think I am thinking. I am instead acting as if I thought I was thinking. Of course, I don't actually believe that last statement, I just said it because I act as if I believed it, and I just said the previous sentence for the same reasoning, and so on.

Replies from: g_pepper, dxu
comment by g_pepper · 2015-02-01T16:52:07.406Z · LW(p) · GW(p)

I am instead acting as if I thought I was thinking

It seems to me that this statement implies your existence; after all, the first two words are an existential declaration.

Furthermore, just as (per Descartes) cognition implies existence, so it would seem that action implies existence, so the fact that you are acting in a certain way implies your existence. Actio, ergo sum.

Replies from: G0W51
comment by G0W51 · 2015-02-01T17:44:46.290Z · LW(p) · GW(p)

But how can I know that I'm acting?

Replies from: g_pepper
comment by g_pepper · 2015-02-01T18:58:15.999Z · LW(p) · GW(p)

You stated that you were acting:

I am instead acting as if I thought I was thinking....

I took you at your word on that :).

Anyway, it seems to me that you either are thinking, think you are thinking, are acting, or think you are acting. Any of these things implies your existence. Therefore, you exist.

Replies from: G0W51
comment by G0W51 · 2015-02-03T01:54:44.498Z · LW(p) · GW(p)

I think we're not on the same page. I'll try to be more clear. I don't really believe anything, nor do I believe the previous statement, nor do I believe the statement before this, nor the one before this, and so on. Essentially, I don't believe anything I say. That doesn't mean what I say is wrong, of course; it just means that it can't really be used to convince me of anything. Similarly, I say that I'm acting as if I accepted the premises, but I don't believe in this either.

Also, I'm getting many dislikes. Do you happen to know why that is? I want to do better.

comment by dxu · 2015-01-31T19:29:21.156Z · LW(p) · GW(p)

It seems to me that at this point, your skepticism is of the Cartesian variety, only even more extreme. There's a reason that Descartes' "rationalism" was rejected, and the same counterargument applies here, with even greater force.

Replies from: G0W51
comment by G0W51 · 2015-01-31T22:04:08.638Z · LW(p) · GW(p)

What's the counterargument? Googling it didn't find it.

Replies from: dxu
comment by dxu · 2015-01-31T23:11:05.621Z · LW(p) · GW(p)

Basically, Cartesian rationalism doesn't really allow you to believe anything other than "I think" and "I am", which is not the way to go if you want to hold more than two beliefs at a time. Your version is, if anything, even less defensible (but interestingly, more coherent--Descartes didn't do a good job defining either "think" or "am"), because it brings down the number of allowable beliefs from two--already an extremely small number--to zero. Prescriptively speaking, this is a Very Bad Idea, and descriptively speaking, it's not at all representative of the way human psychology actually works. If an idea fails on both counts--both descriptively and prescriptively--you should probably discard it.

Replies from: G0W51
comment by G0W51 · 2015-02-01T02:27:54.234Z · LW(p) · GW(p)

In order to create an accurate model of psychology, which is needed to show the beliefs are wrong, you need to accept the very axioms I'm disagreeing with. You also need to accept them in order to show that not accepting them is a bad idea.

I don't see any way to justify anything that isn't either based on unfounded premises or circular reasoning. After all, I can respond to any argument, no matter how convincing, and say, "Everything you said makes sense, but I have no reason to believe my reasoning's trustworthy, so I'll ignore what you say." My question really does seem to have no answer.

I question how important justifying the axioms is, though. Even though I don't believe any of the axioms are justified, I'm still acting as if I did believe them.

Replies from: dxu
comment by dxu · 2015-02-01T06:13:27.070Z · LW(p) · GW(p)

You keep on using the word "justified". I don't think you realize that when discussing axioms, this just plain doesn't make sense. Axioms are, by definition, unjustifiable. Requesting justification for a set of axioms makes about as much sense as asking what the color of the number 3 is. It just doesn't work that way.

Replies from: G0W51, TheAncientGeek
comment by G0W51 · 2015-02-01T17:55:02.698Z · LW(p) · GW(p)

I used incorrect terminology. I should have asked why I should have axioms.

comment by TheAncientGeek · 2015-02-01T15:00:51.689Z · LW(p) · GW(p)

It may be unacceptable to ask for justification of axioms, but that does not make it acceptable to assume axioms without justification.

Replies from: dxu
comment by dxu · 2015-02-01T17:41:34.732Z · LW(p) · GW(p)

In what meaningful sense are those two phrasings different?

comment by ChristianKl · 2015-02-01T13:36:34.217Z · LW(p) · GW(p)

Standard methods of inferring knowledge about the world are based on premises that I don’t know the justifications for.

How do you come to that conclusion?

Replies from: G0W51
comment by G0W51 · 2015-02-01T17:45:29.577Z · LW(p) · GW(p)

By realizing that the aforementioned premises seem necessary to prove anything.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-01T21:25:00.257Z · LW(p) · GW(p)

Why do you believe "inferring knowledge" is about "proving"?

Replies from: G0W51
comment by G0W51 · 2015-02-03T02:11:41.359Z · LW(p) · GW(p)

Because to infer means to conclude knowledge from evidence, and proving means to show something is true by using evidence. They are essentially synonyms.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-03T09:55:47.300Z · LW(p) · GW(p)

There are many cases of knowledge that aren't about whether X is true. When it comes to the knowledge required to tie the shoelaces of a shoe, there isn't a single thing that has to be shown to be true by evidence.

Basically, you lack skepticism about your assumption that you know what knowledge is about.

Replies from: G0W51
comment by G0W51 · 2015-02-04T22:34:10.099Z · LW(p) · GW(p)

There are many cases of knowledge that aren't about whether X is true. When it comes to the knowledge required to tie the shoelaces of a shoe, there isn't a single thing that has to be shown to be true by evidence.

There are multiple things that must be shown true by evidence in order to tie shoelaces successfully, including:

  • One's shoes are untied.
  • Having untied shoes generally decreases utility.
  • Performing a series of muscle movements that is commonly known as "tying your shoes" typically results in one's shoelaces being tied.

Edit: fixed grammar.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-05T00:13:27.596Z · LW(p) · GW(p)

You are making assumptions that are quite strong for someone claiming to be a skeptic.

To go through them: 1) Tied shoelaces also allow you to tie them again; untiedness is no requirement for tying. 2) If you are in a social environment where untied shoes are really cool, then tying them might decrease your utility. At the same time, tying them still makes them tied. 3) It's quite possible to tie your shoes through muscle movements that are not commonly used for tying your shoes.

Replies from: G0W51
comment by G0W51 · 2015-02-05T00:56:14.598Z · LW(p) · GW(p)

You are making assumptions that are quite strong for someone claiming to be a skeptic.

To go through them: 1) Tied shoelaces also allow you to tie them again; untiedness is no requirement for tying. 2) If you are in a social environment where untied shoes are really cool, then tying them might decrease your utility. At the same time, tying them still makes them tied.

Okay, I really shouldn't have stated those specifics. Instead, in order to tie shoe-laces successfully, all one really needs to know is that performing a series of actions that are commonly known as "tying your shoes" typically results in one's shoelaces being tied.

3) It's quite possible to tie your shoes through muscle movements that are not commonly used for tying your shoes.

I never said that the muscle movements were common, just that they typically resulted in tied shoes.

That said, I'm not really sure how this is relevant. Could you explain?

comment by Richard_Kennaway · 2015-02-01T10:27:33.384Z · LW(p) · GW(p)

What the Tortoise Said to Achilles.

Replies from: None
comment by [deleted] · 2015-02-01T19:56:15.460Z · LW(p) · GW(p)

I think the way Eliezer deals with this is a fine example of appeal to consequences.

comment by [deleted] · 2015-01-31T15:46:11.130Z · LW(p) · GW(p)

Any introduction to or reader on contemporary epistemology that you'd find on Amazon would address these three points.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-01T11:56:48.563Z · LW(p) · GW(p)

Addresses rather than resolves. There are many responses, but no universally satisfactory ones.

comment by Slider · 2015-02-01T20:05:17.351Z · LW(p) · GW(p)

You can think of what the points mean in the technical sense and try not to read anything more into them.

1) You sense something: your brain state is conditional on at least some part of the universe. Do not make assumptions about whether it's a "fair" or "true" representation. At the most extreme you could have a single bit of information and, for example, no insight into how that bit is generated (i.e. by default, and on epistemological first grounds, our own behaviour is opaque to us).

2) We move from one computational state to another based, non-vanishingly, on what state we were previously in. I.e. our thoughts go in trains; what we thought before matters for what we will think next. Do not make any assumptions that these trains are a "fair" or "true" representation of anything. We simply work in a non-random way. We don't know how sane we are, but we know that there is a method to our functioning; it is not adequately modeled as dice throws.

3) Our current computational state is the result of our past computations. Again, no guarantees of "representing" or being "faithful" to the past states are given.

Now, as for your reservation about 1): does a photoreceptor need any basis to fire other than being triggered? Is it possible for it to fire in "error"? For what we see is precisely how it fires. Compared to a tactile sensor, does the photoreceptor behave more "rightly" than the other? From the inside we might not be able to distinguish which is which. So from the inside it seems that mystery sense X fires sometimes and mystery sense Y fires sometimes. What would it even mean for these senses to be "misleading" or "false"?

The gripes about 2) can be framed like this: if you are able to grasp a situation in one way, then you can work with that. Having no grasp of anything at all would be equivalent to being a die. There need not be an all-encompassing competency; even if you have only a basic ability to imagine alternatives and then pick one, you can work forward. What would be bad would be the inability to try a solution, or being unable to be mentally stimulated by one's experience. Even if you begin with a flavour of madness, you can come to know that flavour of madness and take it into account going forward. I.e. it could be that when you think you see green you actually see red; if you then install a colour reversal, you can be more compatible/successful with your surroundings/qualia-jungle. There is no "innate colour ability", but being able to contextualise your thoughts with other experiences gives them a certain class of practicality (such as such-and-such a perception being associated with a noun with the letters R, E and D).

About 3): your memories are not given by some outside actor. They were crafted by you, so you know their format and the circumstances in which you would create them. If you call a hitpoint a smeerph, that doesn't make memories about smeerphs "false". Granted, memories are also one possibly confusing and distracting part of the qualia-jungle; maybe you are experiencing your feelings of anxiety when you remember an object as big. But how can such a memory be said to be wrong when there is no strict demand that memories be interpreted as being only about physical size? I.e., given a proper relation to the mental mechanisms that generate them, memories can only be correct, because mental notekeeping strategies are freely choosable.

comment by [deleted] · 2015-02-01T19:54:57.928Z · LW(p) · GW(p)

Sure, everything you said makes sense within your frame of reference, but there are no privileged frames of reference. Indeed, proving that there are privileged frames of reference requires a privileged frame of reference, and is thus an impossible philosophical act. I can't prove anything I just said, which proves my point, depending on whether you think it did or not.

Also, I've asked my philosophy professor pretty much the same question, and he referred me to Hegel. I'm probably terribly misinterpreting what he said, but here it is: the awareness of the impossibility of knowledge gives us some kind of meta-knowledge, which somehow gets nicely dealt with and apparently allows for at least some knowledge.

Related discussion on SSC.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-07T21:59:36.237Z · LW(p) · GW(p)

Referring someone to Hegel seems to me like saying: "Screw you. Here's a book that contains the answer to your question, but you won't finish it anyway."

comment by Fivehundred · 2015-02-01T04:57:08.066Z · LW(p) · GW(p)

“One has knowledge of one’s own percepts.” Percepts are often given epistemic privileges, meaning that they need no justification to be known, but I see no justification for giving them epistemic privileges. It seems like the dark side of epistemology to me.

Why? I realize that Yudkowsky isn't the most coherent writer in the universe, but how the heck did you get from here to there?

A simple qualia-based argument against skepticism (i.e. percepts are simply there and can't be argued with) is problematic -- even if you conceded direct knowledge of percepts, you couldn't really know that you had such knowledge. Percepts do not deal with rationality, and there aren't any premises you could create from them. It seems less a foundational tree of justification than a collection of meaningless smells, sounds and colors.

This doesn't mean that there are no qualia-based arguments that are worth looking at; in fact I think it is the most fruitful path to epistemic justification. I'm just trying to explain (what I think is) your objection more properly.

Replies from: G0W51
comment by G0W51 · 2015-02-01T17:51:34.062Z · LW(p) · GW(p)

Why? I realize that Yudkowsky isn't the most coherent writer in the universe, but how the heck did you get from here to there?

I'm afraid we're not on the same page. From where to where?

A simple qualia-based argument against skepticism (i.e. percepts are simply there and can't be argued with) is problematic -- even if you conceded direct knowledge of percepts, you couldn't really know that you had such knowledge. Percepts do not deal with rationality, and there aren't any premises you could create from them. It seems less a foundational tree of justification than a collection of meaningless smells, sounds and colors.

I understand that believing in qualia is not sufficient to form sizable beliefs, but it is necessary, is it not?

Replies from: Fivehundred
comment by Fivehundred · 2015-02-02T04:55:39.662Z · LW(p) · GW(p)

I'm afraid we're not on the same page. From where to where?

What does 'dark side epistemology' have to do with an argument that seems like a non-sequitur to you?

I understand that believing in qualia is not sufficient to form sizable beliefs, but it is necessary, is it not?

Hell if I know. There certainly are arguments that don't involve qualia and are taken seriously by philosophy; I'm not going to be the one to tackle them all! This website might have some resources, if you're interested.

Replies from: G0W51
comment by G0W51 · 2015-02-03T02:19:17.391Z · LW(p) · GW(p)

What does 'dark side epistemology' have to do with an argument that seems like a non-sequitur to you?

The premises in the OP don't seem like non-sequiturs, as they are assumed without evidence, not reached by faulty reasoning from premises. Believing one doesn't need evidence for beliefs is what dark side epistemology is all about.