Skepticism about Probability
post by Carinthium · 2014-01-27T09:49:08.814Z · LW · GW · Legacy · 129 comments
I've raised arguments for philosophical scepticism before, which have mostly been argued against in a Popper-esque manner, arguing that even if we don't know anything with certainty, we can have legitimate knowledge on probabilities.
The problem with this, however, is how you answer a sceptic about the notion of probability having a correlation with reality. Probability depends upon axioms of probability- how are said axioms to be justified? They can't be justified by definition, or probability has no correlation to reality.
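For concreteness, the axioms usually at issue are Kolmogorov's; a minimal statement is included here purely as a reference point:

```latex
% Kolmogorov's axioms for a probability measure P on a sample space \Omega
% (standard statement, included only for reference)
\begin{align*}
&\text{1. Non-negativity:} && P(A) \ge 0 \quad \text{for every event } A \subseteq \Omega \\
&\text{2. Normalisation:}  && P(\Omega) = 1 \\
&\text{3. Additivity:}     && P(A \cup B) = P(A) + P(B) \quad \text{whenever } A \cap B = \emptyset
\end{align*}
```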
129 comments
Comments sorted by top scores.
comment by gjm · 2014-01-27T11:57:08.949Z · LW(p) · GW(p)
A sufficiently skeptical position is completely immune to criticism, or to any other form of argument. I don't see what anyone could hope to do about that, beyond not bothering arguing with people who profess such extreme skepticism.
(I am reminded of a little fable I think I saw in an old OB post. Human space travellers encounter an alien planet whose inhabitants have adopted an anti-inductive principle, with the unsurprising result that pretty much everything they do is miserably unsuccessful. The humans ask them "So why do you keep on doing this?" and they say "Well, it's never worked for us before...")
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T12:00:20.001Z · LW(p) · GW(p)
Which means that anti-scepticism is a position taken on faith in the religious sense. It is, after all, the anti-sceptic who claims something can be known.
What I'm looking for is an argument that starts from no assumptions whatsoever but the self-evident, that gets to a justifiable probability theory. That would get around arguments such as the Evil Demon argument.
Replies from: Richard_Kennaway, ChristianKl, pragmatist, Wes_W, HoverHell↑ comment by Richard_Kennaway · 2014-01-27T12:27:09.263Z · LW(p) · GW(p)
It is, after all, the anti-sceptic who claims something can be known.
The sceptic also claims that something can be known: that nothing can be known.
What I'm looking for is an argument that starts from no assumptions whatsoever but the self-evident
Any conclusion can be reached by choosing the right "self-evident" assumptions; any conclusion can be denied by denying the self-evidence. It is self-evident to John C. Wright that God exists and His angels spoke to him in a vision. It had previously been self-evident to him that no such thing was possible. His rational mind pursues the implications of his latter beliefs as relentlessly as it did his previous beliefs. (His blog makes a fascinating case study in taking ideas seriously.)
What will be self-evident to you? Only you can decide that. You would be better off reading diverse books on the foundations of probability (Jaynes, Feller, Savage, de Finetti, Kolmogorov, ...) and seeing whether their arguments convince you, than asking someone to come up with an argument that you will find compelling.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T12:42:29.018Z · LW(p) · GW(p)
I highly doubt those authors address the idea of scepticism in the sense of showing from first principles that probability is legitimate, in a way that addresses such things as Descartes' Evil Demon Argument. They are discussing how to implement probability, not whether the entire concept is legitimate.
You are denying the validity of self-evidence here, and admittedly there is a problem with properly establishing self-evident ideas. That's part of my problem, admittedly, and what I am trying to bypass somehow.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-01-27T13:05:50.919Z · LW(p) · GW(p)
I highly doubt those authors address the idea of scepticism in the sense of showing from first principles that probability is legitimate, in a way that addresses such things as Descartes' Evil Demon Argument.
Nothing can address the Evil Demon Argument. Descartes thought it was self-evident that he thought, and therefore that he existed, but you can find various modern philosophers, and ancient ones of the Buddhist traditions, who declare that the self does not exist and claim to have none.
You are denying the validity of self-evidence here
As a foundation for knowledge, yes, I am.
and admittedly there is a problem with properly establishing self-evident ideas.
Doesn't "self-evident" mean that they don't need establishing?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T13:14:41.724Z · LW(p) · GW(p)
"Self-evident" in the sense that they don't need any starting assumptions whatsoever. The point I am making repeatedly because others don't seem to get it is that if there is no way to justify the premise that the world exists without resort to assumptions, then we're no better than the people who believe in God on faith.
I am searching for a way to deal with the Evil Demon Argument etc. for that reason. As for said philosophers, they have a different concept of a self from Descartes and so to an extent are talking about different things.
Replies from: gjm, Richard_Kennaway↑ comment by gjm · 2014-01-27T14:00:49.226Z · LW(p) · GW(p)
if there is no way to justify the premise that the world exists without resort to assumptions, then we're no better than the people who believe in God on faith.
Let us imagine two people. One believes "on faith" that (1) what their senses tell them has some correlation with how things really are, (2) their memory has some correlation with the past, and (3) their reasoning isn't completely random and broken. The other believes "on faith" those three things, and also that every statement in a certain collection of ancient documents is true, that a certain person who lived 2000 years ago was really a god in human form, that our true selves are immortal immaterial entities, and that after our deaths we will be judged and consigned to eternal bliss or eternal torment.
I'm quite happy saying that the first of those people is doing better than the second. S/he needs to assume far less; the things s/he assumes are more obviously true and more obviously unavoidable assumptions; there is less arbitrariness to them.
Both of them, indeed, fail if you judge them according to the following principle: "Everything you believe should be derived from absolutely incontestable axioms with which no one could possibly disagree". But why on earth should we do that?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T02:46:23.899Z · LW(p) · GW(p)
You really should add a fourth- the principle of induction. You also misinterpret my premise- it is not that nobody could possibly disagree, but that the ideas could not possibly be false, even under an Evil Demon argument.
The problem is the same as the Isolation Objection to Coherentism- that if there is any actual correlation to reality, it is merely by chance rather than through actual evidence. This is because both groups have no basis for their assumptions.
↑ comment by Richard_Kennaway · 2014-01-27T13:51:16.746Z · LW(p) · GW(p)
"Self-evident" in the sense that they don't need any starting assumptions whatsoever.
Have you ever seen such a proposition? I don't think that I have. Not a single sage of recorded history has been able to come up with something whose self-evidence convinced everyone. And if someone is unconvinced, how shall you convince them, if it's "self-evident"?
The point I am making repeatedly because others don't seem to get it is that if there is no way to justify the premise that the world exists without resort to assumptions, then we're no better than the people who believe in God on faith.
What is this? If you have any unjustified belief, you are identical with someone who pays no heed to rationality at all?
And what does this have to do with probability in particular? You originally asked about probability, so I recommended works on the foundations. Even if none of them persuade you that they are a sound basis, at least you will be informed about the arguments and conceptual structures that people have created, at which point you may be able to productively search for something better.
But now you have broadened this to a requirement for a refutation of the Evil Demon/Matrix scenario. I see no possibility of any such refutation, because sufficient powers can always be attributed to the Demon/Skynet/Lizard Overlords/NSA to explain away any putative refutation. If there is a refutation, you will have to find it yourself.
I mentioned John C. Wright earlier, and there is more to say. He finds the ultimate foundation in the uncaused cause that is the Originator of all causation, the Good that needs no justification because it is the Originator of all that is good, proves their existence by the argument against infinite regress, and recognises them in the world as the Christian God, specifically as preached by the Roman Catholic Church. You could work out from that what his self-evident truths might be, for him to build these arguments on, but his actual self-evident truths are the religious visions that he had. He was never argued into any of this by the arguments that he presents (and neither am I, an atheist).
Self-evidence is a subjective property of a belief. The experience of self-evidence is the absence of experience of justification for the thing believed.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T14:00:44.241Z · LW(p) · GW(p)
I explained my context was the refutation of philosophical scepticism in general- what I was after should have been clear.
1- You assume that the criterion of self-evidence should be based on being universally convincing. Why should this necessarily be so? Self-evidence comes when the contrary proposition simply doesn't make sense, as it were (simplistic example: free will). The question is how to deal with that with regards to demonstrating the validity of probability/induction. 2- Because the fundamental starting assumption is unjustified, we are no more justified in believing we know the truth than the people who believe in God on faith.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-01-27T14:08:46.759Z · LW(p) · GW(p)
Self-evidence comes when the contrary proposition simply doesn't make sense, as it were (simplistic example: free will).
"Free will" is a concept, not a proposition. What is the proposition about free will that you are claiming to be self-evident, and its opposite "not making sense"?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T14:44:19.399Z · LW(p) · GW(p)
The concept of free will doesn't make sense, so the proposition of its truth turns out to be self-evidently wrong after a certain amount of thought. It's still self-evident because it requires no assumptions.
Replies from: asr, Richard_Kennaway↑ comment by asr · 2014-01-28T03:02:51.042Z · LW(p) · GW(p)
I don't think "the concept of free will" refers to any particular concept. Different people use that phrase to mean different things, some of which are coherent, some of which are not. I don't think it's useful to discuss without a precise definition, and I suspect given a definition, it won't be necessary.
↑ comment by Richard_Kennaway · 2014-01-27T14:52:16.943Z · LW(p) · GW(p)
The concept of free will doesn't make sense, so the proposition of its truth turns out to be self-evidently wrong after a certain amount of thought.
It does make sense to Daniel Dennett and Sam Harris, who have both given it more than "a certain amount of thought".
I think that's all that need be said here.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T14:56:56.775Z · LW(p) · GW(p)
Not true- Sam Harris concludes it's incoherent. That's at the VERY START of what you linked to.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-01-27T19:50:12.013Z · LW(p) · GW(p)
So he does (though not at the very start of what I linked to). My mistake. Nevertheless, Dennett does take it seriously, as do various other philosophers, as indeed does Eliezer right here. So on what grounds do you dismiss it as "self-evidently" wrong? Merely an inability to say why it seems wrong to you?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T02:56:54.847Z · LW(p) · GW(p)
Sam Harris, for a start, gives very good reasons. Maybe you should read him- he puts it better.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-01-28T12:24:24.653Z · LW(p) · GW(p)
Not so self-evident, then? Sam Harris puts out a bunch of arguments, and Dennett puts out a bunch taking a different view, and lots of other philosophers argue these and other points of view, and you agree with Harris, but what role is "self-evidence" playing here? It looks like any other sort of argument, where people put forth evidence of the ordinary sort, and deductions, and intuition pumps, and so on, and because it's philosophy, no-one is persuaded by anything (except for graduate students adopting the dominant views of wherever they're studying).
You've read Harris, and it seems have had some sort of conversion experience. That is, you have acquired a belief without being able to access the reasons that you hold it, and take this to be a fact about the belief, its "self-evidence". But lots of other people -- Dennett, for example -- have read Harris and had a completely different experience, and find his view not self-evident at all.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T12:59:53.014Z · LW(p) · GW(p)
It is self-evident in that it follows logically without any sort of assumptions whatsoever, merely by examining the concept of free will. Perhaps you mean something different.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-01-28T13:07:45.715Z · LW(p) · GW(p)
The concept and the thing conceived of are two different things. Sunrises did not cease when heliocentrism began. That someone conceptualises something in a way that can easily be knocked down does not mean that there was nothing there. Dennett makes this point in his review.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-30T03:25:24.457Z · LW(p) · GW(p)
Then you mean a different thing by "free will" than me- I was referring to free will in the popular conception.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-01-30T07:21:49.942Z · LW(p) · GW(p)
Then Sam Harris has written an entire book to demonstrate that when a tree falls in the forest and no-one is around to hear it, it doesn't make any sound.
↑ comment by ChristianKl · 2014-01-27T13:31:29.286Z · LW(p) · GW(p)
What I'm looking for is an argument that starts from no assumptions whatsoever but the self-evident, that gets to a justifiable probability theory.
Such an experiment doesn't exist. It's not self-evident that we don't live in a simulation in which strange things can happen.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T13:33:08.115Z · LW(p) · GW(p)
I know it's not self-evident that we certainly don't (and I said argument, not experiment), but I'm not trying to get to the conclusion we don't live in such a simulation- only that it is improbable.
Replies from: Gurkenglas↑ comment by Gurkenglas · 2014-01-27T17:15:05.742Z · LW(p) · GW(p)
Simple: The fact that we don't see strange things happening is Bayesian evidence that we don't live in a world where that is possible.
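A minimal sketch of that update, with made-up numbers (the prior and the likelihoods are assumptions for illustration, not anything measured):

```python
# Two hypotheses: a lawful world, and a "strange" world (demon/simulation) in
# which strange events can happen. Observing quiet day after quiet day shifts
# belief toward the lawful world. All numbers are illustrative assumptions.
prior_lawful = 0.5
prior_strange = 0.5

p_quiet_given_lawful = 1.0    # a lawful world never shows demonic glitches
p_quiet_given_strange = 0.9   # assumed: a strange world shows a glitch on 10% of days

days = 30  # thirty quiet days in a row

posterior_lawful = prior_lawful * p_quiet_given_lawful ** days
posterior_strange = prior_strange * p_quiet_given_strange ** days
total = posterior_lawful + posterior_strange

print(round(posterior_lawful / total, 2))   # ~0.96: quiet observations favour the lawful world
print(round(posterior_strange / total, 2))  # ~0.04
```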
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T02:25:32.330Z · LW(p) · GW(p)
As I already mentioned, it is probability itself which must be justified in the first place. How do you do that?
Replies from: Gurkenglas↑ comment by Gurkenglas · 2014-01-29T00:56:37.553Z · LW(p) · GW(p)
What assumptions am I granted? Can't argue anythin' from nuthin'. Even "I think" is an assumption if logic is.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-29T02:16:28.884Z · LW(p) · GW(p)
This is the problem which must be dealt with. Rather than assume an assumption must be correct, you must somehow show it will work even if you start from no assumptions.
Replies from: Gurkenglas↑ comment by Gurkenglas · 2014-01-29T08:24:03.945Z · LW(p) · GW(p)
Your universal propositional calculus might not be able to generate that proposition, but my calculus can easily prove: Yours won't generate any propositions if it has no axioms.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-30T03:24:21.739Z · LW(p) · GW(p)
This is precisely the problem. I was posting in the hopes of finding some clever solution to this problem- a self-proving axiom, as it were.
↑ comment by pragmatist · 2014-01-29T06:08:37.087Z · LW(p) · GW(p)
You apparently don't think it's self evident that induction works. Do you think that it's self evident that deduction works (like Descartes did)? If you do, why? If you met someone who was willing to accept the premises of a deductive argument but not the conclusion, like the tortoise in this parable, how would you convince the person they were wrong? The only way to do it would be using deductive arguments, but that's circular! So it seems that deductive arguments are just as "unjustifiable" as inductive arguments.
But if you reject deductive arguments as well, then you can't do anything. Even if you start with self-evident premises, you won't be able to conclude anything from them. Perhaps this is a hint that your standards for justification are so high that they're effectively useless.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-29T07:42:12.481Z · LW(p) · GW(p)
A premise isn't self-evident because anybody whatsoever would accept it, but because it must be true in any possible universe.
Deductive arguments aren't self-evident, but for a different reason than you think- the Evil Demon Argument, which shows that even if it looks completely solid it could easily be mistaken. There may be some way to deal with it, but I can't think of any. That's why I came here for ideas.
You claim my standards of justification are too high because you want to rule skepticism out- you are implicitly appealing to the fact skepticism results as a reason for me to lower my standards. Isn't that bias against skepticism, lowering standards specifically so it does not result?
Replies from: pragmatist↑ comment by pragmatist · 2014-01-31T04:04:42.017Z · LW(p) · GW(p)
There are all kinds of things that are true in every possible universe that aren't self-evident. Look up "necessary a posteriori" for examples. So no, self-evident is not the same as necessary, at least not according to a very popular philosophical approach to possible worlds (Kripke's). More generally, "necessity" is a metaphysical property, and "self-evidence" is an epistemic property. Just because a proposition has to be true does not mean it is going to be obvious to me that it has to be true. Even Descartes makes this distinction. He doesn't regard all the truths of mathematics to be self-evident (he says he may be mistaken about their derivation), but presumably he does not disagree that they are necessarily true. (Come to think of it, he may disagree that they are necessarily true, given his extreme theological voluntarism, but that's an orthogonal debate.)
As for your question about standards: I think it is a very plausible principle that "ought" implies "can". If I (or anyone else) have an obligation to do something, then it must at least be possible for me to do it. So, in so far as I have a rational obligation to have justified beliefs, it must be possible for me to justify my beliefs. If you're using the word "justification" in a way that renders it impossible for me to justify any belief, then I cannot have any obligation to justify my beliefs in that sense. And if that's the case, then skepticism regarding that kind of justification has no bite. Sure, my beliefs aren't justified in that rigorous sense, but if I have no rational obligation to justify them, why should I care?
So either you're using "justification" in a sense that I should care about, in which case your standards for justification shouldn't be so high as to render it impossible, or you're using "justification" in Descartes's highly rigorous sense, in which case I don't see why I should be worried, since rationality cannot require that impossible standard of justification. Either way, I don't see a skeptical problem.
Replies from: Carinthium↑ comment by Carinthium · 2014-02-01T14:55:22.760Z · LW(p) · GW(p)
It seems we're using different definitions of words here. Maybe I should clarify a bit.
The definition of rationality I use (and I needed to think about this a bit) is a set of rules that must, by their nature, correlate with reality. Pragmatic considerations do not correlate with reality, no matter how pressing they may seem.
Rather than a rational obligation, it is a fact that if a person is irrational then they have no reason to believe that their beliefs correlate with the truth, as they do not. It is merely an assumption they have.
↑ comment by Wes_W · 2014-01-28T07:05:32.412Z · LW(p) · GW(p)
"Self-evident assumptions" sounds suspiciously like "axioms", yet axioms are apparently not what you seek. What exactly are you hoping to find? What would an acceptable "self-evident assumption" be?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T09:40:59.923Z · LW(p) · GW(p)
Assumptions that can be demonstrated to be true in any possible universe.
↑ comment by HoverHell · 2014-01-27T13:12:12.804Z · LW(p) · GW(p)
-
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T13:15:11.800Z · LW(p) · GW(p)
You have a point. Then how do you justify induction?
Replies from: HoverHell↑ comment by HoverHell · 2014-01-29T20:00:02.541Z · LW(p) · GW(p)
-
Replies from: Carinthium↑ comment by Carinthium · 2014-01-30T01:52:17.317Z · LW(p) · GW(p)
If you have no non-circular basis for believing in induction, surely it is irrational?
Replies from: HoverHell↑ comment by HoverHell · 2014-01-30T19:19:54.161Z · LW(p) · GW(p)
-
Replies from: Carinthium↑ comment by Carinthium · 2014-01-31T01:48:07.497Z · LW(p) · GW(p)
"Better" isn't a function of the real world anyway- I'm appealing to it because most people here want to be rational, not because it is objectively better.
What do you mean by "rational" is not a binary?
Replies from: HoverHell↑ comment by HoverHell · 2014-02-02T09:50:12.321Z · LW(p) · GW(p)
-
Replies from: Carinthium↑ comment by Carinthium · 2014-02-03T08:57:50.905Z · LW(p) · GW(p)
On thought, my response is that no circular argument can possibly be rational, so the question of whether rationality is binary is irrelevant. You are mostly right, though for some purposes rational/irrational is better considered as a binary.
Replies from: HoverHell
comment by fortyeridania · 2014-01-27T17:57:15.497Z · LW(p) · GW(p)
Are you familiar with Sextus Empiricus? If you like intransigent skepticism, you'll love him. And the SEP just published a new entry on him! While you are at it, you might want to look at this entry on a priori justification.
You are trying to answer Descartes' Evil Daemon argument. That is futile, because the whole point of the argument is to be unbeatable. But suppose you did come up with an argument against it; I can always come up with an even stronger "daemon" or whatnot that can defeat the argument. (There's always the classic "How do you know you're not dreaming right now?" also from Descartes.)
Perhaps you are not actually searching for something to defeat radical skepticism, but instead are trying to show everyone else the true nakedness of their epistemic pretensions?
↑ comment by Carinthium · 2014-01-28T02:30:28.707Z · LW(p) · GW(p)
If we can have no a priori knowledge, it means skepticism wins because everything is based on faith. Given this, I try to find a means to make a priori knowledge work despite objections, both of this sort and skeptical.
If this is right, then radical skepticism wins entirely. The point is if it can be shown false on probabilities.
Yes and no. I do believe it hopeless, but I search because I'm looking anyway.
↑ comment by fortyeridania · 2014-01-28T04:47:02.260Z · LW(p) · GW(p)
What does "skepticism wins" mean?
If what is right--that you can't be sure you're not dreaming? Of course that's right; how would you ever tell? Any method of distinguishing you came up with can't possibly be relied upon, because if you are dreaming, then that method only works in your dreamworld. In other words, it can distinguish between meta-dreams and dreams, but not between dreams and reality. (And there's no real reason to think it can even do the former, because hey, it's a dreamworld after all, and no rules apply.)
You search because you're looking? What does that mean?
Here's a question. I assume you are familiar with the probability-theoretic notion of maximum entropy. By "radical skepticism" do you mean the thesis that the only possible rational belief-state is maximum entropy?
↑ comment by Carinthium · 2014-01-28T05:09:05.411Z · LW(p) · GW(p)
It means we cannot be justified in knowing anything, and are isolated from any objective reality. The basic rules of probability from which we assume the reliability of memory, senses etc are taken on religious style faith.
I've been trying to find a way around this, but you are probably right.
I mean I am checking again and again just in case because I don't like the idea that scepticism is right.
I'm not familiar with that notion.
↑ comment by ChristianKl · 2014-01-28T13:54:50.318Z · LW(p) · GW(p)
It means we cannot be justified in knowing anything
Justification depends on a function that tells you whether something is justified. I can easily justify a belief with the fact that a teacher taught it to me.
In what sense do you think it can not be justified and why do you think that framework of justification has some sort of reality to it?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T15:35:35.428Z · LW(p) · GW(p)
Something is epistemically justified if, as you said, it has some sort of reality to it not by coincidence but because the rule reliably shows what is real. I am trying to find a framework with some sort of reality to it, and that requires dealing with scepticism.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-01-28T15:39:01.917Z · LW(p) · GW(p)
If you don't believe in reality in the first place, how could you check whether something has reality?
You need to look at reality to check whether something is real. There is no way around it. Your idea of justification has no solid basis in reality if you don't believe in it in the first place.
You don't get to be certain about justification and a skeptic about reality. There are certain types of Buddhists whom you could call skeptics about reality, but they would also not accept the concept of justification in which you happen to believe.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-30T03:23:27.310Z · LW(p) · GW(p)
I don't believe in the reality around us, not on a rational level- that does not mean I don't believe there are things which are real (there may be, anyway). I just have no idea what they are.
Justification is DEFINED in a certain manner, and I think the best one to use is the definition I have given. That is how I can be certain about justification (or at least what I am calling justification) and a skeptic about reality.
↑ comment by fortyeridania · 2014-01-28T07:05:10.777Z · LW(p) · GW(p)
OK, let's skip to (4), as that might help you formulate your skepticism more precisely. "Maximum entropy" has more than one meaning, but here it basically means a belief-state that assigns an equal probability to all possibilities. In other words, it's the probability distribution you would use if you had zero information. For example, if I ask you whether glappzug is thuxreq or not thuxreq, you can't do better than to just pick an answer randomly. You have no clue to go on, so just get the choice over with and move on.
A thorough-going skeptic, it seems to me, would have to think that all choices are just like that one. Even when we think we have information, we don't really (because we could be dreaming!). Therefore there's no reason to discriminate between any pair of alternatives, or among any set of them.
When you say "skepticism wins," do you mean that for any set of alternative claims, there is never any reason to discriminate among them?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T08:51:00.991Z · LW(p) · GW(p)
Probability itself being somehow valid is something I do not think rationally legitimate. Therefore, in a sense yes but in a sense no.
Replies from: fortyeridania↑ comment by fortyeridania · 2014-01-28T09:25:33.562Z · LW(p) · GW(p)
In that case, I don't know how to proceed until you formulate your skepticism more precisely. What exactly is it that is not justified, if "skepticism wins"?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-30T03:26:40.886Z · LW(p) · GW(p)
Nothing is justified if skepticism wins. Unless we have irrational faith in at least one starting assumption (and it is irrational since we have no basis for making the assumption), it is impossible to determine anything except our lack of knowledge.
So on thought, yes. There is never any valid rational reason to discriminate between possibilities because nothing can demonstrate the Evil Demon Argument false.
Replies from: fortyeridania↑ comment by fortyeridania · 2014-01-30T07:22:03.294Z · LW(p) · GW(p)
OK. I am still not exactly sure what you mean by "justification." Let's put this in more concrete terms. Imagine the following:
Sitting down to dinner you see three items on the table before you: a bowl of rice, a bowl of gasoline, and a coin. Suppose further that you prefer rice over gasoline. You have three choices--eat the rice, drink the gasoline, or flip the coin and let the result determine the contents of which bowl to consume.
What does the Evil Demon Argument (and all in its family) say about the rationality of each choice, compared to the others (assuming it says anything at all)?
What advice would you personally give someone sitting at such a dinner table, and why?
↑ comment by Carinthium · 2014-01-30T10:51:32.111Z · LW(p) · GW(p)
The Evil Demon Argument says that you don't know that it's actually those three things before you. Further, it says that you don't know that eating the rice will actually have the effects you're used to, or that your memories can be used to remember your preferences. Etc etc...
On reason, I would give no advice. On faith, I would say to have the rice.
↑ comment by fortyeridania · 2014-01-30T19:36:39.571Z · LW(p) · GW(p)
On reason... On faith...
So, which advice would you give?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-31T01:49:04.510Z · LW(p) · GW(p)
In the real world, it depends. With most people in practice, assuming they have enough of an understanding of me to know I am a skeptic on these things and are implicitly asking for one or the other, I give that. Therefore I normally give advice on faith.
Replies from: fortyeridania↑ comment by fortyeridania · 2014-02-01T07:35:35.016Z · LW(p) · GW(p)
I guess it's hard for me to understand what's irrational about advising them to eat the rice (as you indicated you would do). It seems like the only sane choice. I'm not sure exactly what you mean by "faith", but if advising people to eat the rice is based on it, then it must be compatible with rationality, right?
Right--choose the rice, assuming you (or they) want to live. That seems like the only sane choice, doesn't it?
Maybe this is a problem of terminology. You seem to be using the labels "faith" and "reason" in certain ways. Especially, you seem to be using the label "reason" to refer to the following of certain rules, but which you can't see how to justify.
Maybe instead of focusing on those rules (whatever they may happen to be), you should focus on why the rules are valuable in the first place (if they are). Presumably, it's because they reliably lead to success in achieving one's goals. The worth of the rules is contingent on their usefulness; it's not rational to believe only things you can prove with absolute certainty, because that would mean believing nothing, doing nothing, dying early and having no fun, and nobody wants that!
(In case you haven't read it, you might want to check out Newcomb's Problem and Regret of Rationality, from 2008.)
Replies from: Carinthium↑ comment by Carinthium · 2014-02-01T14:50:55.280Z · LW(p) · GW(p)
My conception of reason is based on determining what is true, completely and entirely irrespective of pragmatism. To call skeptical arguments irrational and call an anti-skeptical case rational would mean losing sight of the important fact that ONLY pragmatic considerations lead to the rejection of skepticism.
Rationality, to me, is defined as the hypothetical set of rules which reliably determine truth, not by coincidence, but because they must determine truth by their nature. Anything which does not follow said rules is irrational. Even if skepticism is false, believing in the world is irrational for me (and you, based on what I've heard from you and my definition) because nothing necessarily leads to a correlation between the senses and reality.
One of the rules of my rationality is that pragmatic considerations are not to be taken into account, as what is useful to believe and what is true have no necessary correlation. This applies for anything which has no necessary correlation with what is true.
What you're talking about is pragmatic, not rational. It is important to be aware of the distinction between what one may 'believe' for some reason and what is likely to be actually true, completely independent of such beliefs.
Replies from: fortyeridania↑ comment by fortyeridania · 2014-02-02T06:30:43.626Z · LW(p) · GW(p)
what is useful to believe and what is true have no necessary correlation
You seem to be referring to the distinction between instrumental and epistemic rationality. Yes, they are different things. The case I am trying to make does not depend on a conflation of the two, and works just fine if we confine ourselves to epistemic rationality, as I will attempt to show below.
OK, so I think your labeling system, which is clearly different from the one to which I am accustomed, looks like this:
rationality = a set of rules which reliably and necessarily determine truth
and
X is irrational = X does not follow rationality
If that's how you want to use the labels in this thread, fine. But it seems that an agent that believed only things that were known with infinite certainty would suffer from a severe truth deficiency. Even if such an agent managed to avoid directly accepting any falsehoods, she would fail to accept a vast number of correct beliefs. This is because much of the world is knowable--just not with absolute certainty. She would not have a very accurate picture of the world.
And this is not just because of "pragmatics"; even if the only goal is to maximize true beliefs, it makes no sense to filter out every non-provable proposition, because doing so would block too many true beliefs.
Perhaps an analogy with nutrition would be helpful. Imagine a person who refused to ingest anything that wasn't first totally proven to be nutritious. Whenever she was served anything (even if she had eaten the same thing hundreds of times before!), she had to subject it to a series of time-consuming, expensive, and painstaking tests.
Would this be a good idea, from a nutritional point of view? No. For one thing, it would take way too long--possibly forever. And secondly (and this is the aspect I'm trying to focus on) lots of nutritious things cannot be proven so. Is this bite of pasta going to be nutritious? What about the next one? And the one after that? A person who insisted on such a diet would not get very many nutrients at all, because so many things would not pass the test (and because the person would spend so much time testing and so little time eating).
Now, how about a person's epistemic diet--does it make sense, from a purely epistemic perspective, for an agent to believe only what she can prove with absolute certainty? No. For one thing, it would take way too long--possibly forever. And secondly, lots of true things cannot be proven so, at least not with the kind of transcendent certainty you seem to be talking about. So an agent who insisted on such a filter would end up blocking much truth, thus "learning" a highly distorted map.
If the agent is interested in truth, she should ditch that filter and find a standard that lets her accept more true claims about the world, even if they aren't totally proven.
By the way, have you read many of the Sequences? They are quite helpful and much better written than my comments. I'd say to start here. This one and this one also heavily impinge on our topic.
Replies from: Carinthium↑ comment by Carinthium · 2014-02-02T06:58:48.891Z · LW(p) · GW(p)
This assumes what the entire thread is about- that probability is a legitimate means for discussing reality. This presumes a lot of axioms of probability, such as that if you see X it is more likely real than an illusion, and that induction is valid.
The appeal to absence of many true beliefs is irrelevant, as you have no means to determine truth beyond skepticism.
Replies from: fortyeridania↑ comment by fortyeridania · 2014-02-02T07:06:33.438Z · LW(p) · GW(p)
I do not think anything I wrote above depends on using probability to discuss reality.
The appeal to absence of many true beliefs is irrelevant, as you have no means to determine truth beyond skepticism.
Please elaborate. I believe it is not only relevant, but decisive.
Replies from: Carinthium↑ comment by Carinthium · 2014-02-02T08:04:55.230Z · LW(p) · GW(p)
You believe that the world exists, your memories are reliable, etc. You argue that a system that does not produce those conclusions is not good enough because they are true and a system must show they are true. But how on earth do you know that? Assuming induction, that your memories are reliable etc to judge Epistemic rules is circular.
You must admit it is absurd that you know the world exists with certainty, therefore you must admit you believe it exists on probability. Therefore your entire case depends on the legitimacy of probability.
Before accusing me of contradiction, remember my position all along has a distinction between faith and rational belief.
Replies from: fortyeridania↑ comment by fortyeridania · 2014-02-03T06:59:32.166Z · LW(p) · GW(p)
my position all along has a distinction between faith and rational belief
OK, but you are not using the term "rational" in (what I thought was) the standard way. So the only reason what you're saying seems contentious is because of your terminology.
You have not yet addressed much of what I've written. Automatically rejecting everything that isn't 100% proven is a poor strategy if the agent's goal is to be right as much as possible, yet it seems to be the only one you insist is rational. Is this merely because of how you're using the word "rational," or do you actually recommend "Reject everything that isn't known 100%" as a strategy to such a person? (From the rice-and-gasoline example I think I know your answer already--that you would not recommend the skeptical strategy.)
How should an agent proceed, if she wants to have as accurate picture of reality as possible?
Replies from: Carinthium↑ comment by Carinthium · 2014-02-03T07:56:49.038Z · LW(p) · GW(p)
You are the only one who is making assumptions without evidence and ignoring what I'm saying- that contrary to what you think you do not in fact know the Earth exists, your memories are reliable etc and therefore that your argument, which assumes such, falls apart.
You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability. There is induction (e.g.- Sun risen X times already so it will probably rise again tonight), the Memory assumption (if my memories say I have done X then that is evidence in probabilities I have done X), the Reality assumption (seeing something is evidence in probabilities for its existence) etc. None of these can be demonstrated- they are starting assumptions taken on faith.
In the real world, as I said, it depends on what the person asked for. If I believe they were implicitly asking for a faith-based answer I would give that, if I believe an answer based on pure reason I would say neither.
The truth is that anything an agent believes to be true they have no way of justifying, as any justification ultimately appeals to assumptions that cannot themselves be justified.
Replies from: fortyeridania↑ comment by fortyeridania · 2014-02-03T18:25:12.867Z · LW(p) · GW(p)
You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability.
I do not thus fail, and am aware of the specific assumptions you have in mind. I just deny that their existence implies what you say it implies.
OK. Let me try to restate your argument in terms I can better understand. Tell me if I'm getting this right.
(1) Let A = any agent and P = any proposition
(2) Define "justified belief" such that A justifiably believes P iff the following conditions hold:
a. P is provable from assumptions a, b, c, ... and z.
b. A justifiably believes every a, b, c, ... and z.
c. A believes P because of its proof from a, b, c, ... and z.
(3) The claim "The sun will rise tomorrow" (or insert any other claim you want to talk about instead) is not provable from assumptions that any agent could be justified in believing.
(4) Therefore, for every agent, belief in the claim "The sun will rise tomorrow" is not justified.
Is this a fair characterization of your argument? If so, I'll work from this. If not, please improve it.
Replies from: Carinthium↑ comment by Carinthium · 2014-02-04T01:01:58.724Z · LW(p) · GW(p)
Mostly right. I accept the theoretical possibility of a self-evident belief- before learning of the Evil Demon argument, for example, I considered 1+1=2 to be such a belief.
However, a circular argument is never allowable, no matter how wide the circle. Without ultimately being traceable back to self-evident beliefs (though these can be self-evident axioms of probability, at least in theory), the system doesn't have any justification.
comment by mwengler · 2014-01-28T19:48:06.984Z · LW(p) · GW(p)
If the point is we can't derive the validity of probability from nothing, congratulations. You have rediscovered something significantly less useful than the wheel or fire.
So if you can't derive the validity of probability from nothing, what can you do?
1) Walk around in a self-induced fog of feigned ignorance, having slipped the fact that you have put logical derivation on a throne dictating "truth" without ever having questioned that operation. You certainly can't derive from nothing that logical derivation is the only source of truth.
2) Look out at the parking lot to see all the cars, look at the telephone, the cell phone in your pocket, and the computer on which you are typing. Contemplate the apparent facts that these all arise from groups of humans "knowing" things "well enough" to put, ultimately, millions of pieces together to create things with an astonishing level of actual complexity compared to even the most brilliant philosopher's gedanken experiment. Realize that this is either a show put on for you by a demon trying to trick you into believing that the world is an orderly place whose bits and pieces obey discoverable rules so well that you can discover them and build things that didn't previously exist out of the pieces to do things you want... OR that the world really is like that.
And if you go the skeptical route, at this level, what do you gain? Because if I ignore your skeptical route and instead study engineering and physics and chemistry, I gain the ability to build cars and bombs and phones and to become sufficiently wealthy that I can have children and pass my mind-numbing credulousness down into the next generation.
So is the demon who is deluding us all winning? Or have we, by forcing it to up its game to the level of billion-transistor circuits and the beginnings of AI software that it must fake, called its bluff and forced it to create the world it thought it was faking?
If it turns out there really is/was a demon and at the end we spend the rest of eternity being tortured in flames while you stand there in your pit of flames telling us "I told you so," then I owe you a beer.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-29T02:24:31.726Z · LW(p) · GW(p)
Implicit assumptions- not just the senses, but the reliability of memory and the reliability of rules of induction.
I already mentioned that I believe in the world, not because I think it rational, but as an act of religious-style faith. I think it irrational to believe the world exists because it makes so many assumptions that can't be defended in a rational argument.
comment by wwa · 2014-01-28T12:19:14.080Z · LW(p) · GW(p)
how are said axioms to be justified?
This is how I'd answer a sceptic:
If I put two apples into a bag that previously had two apples, I can take four apples out of the bag. Thus, I believe that axioms on which basic arithmetic is based are "justified". By the same token I believe axioms of probability and I'm pretty sure you see a close approximation of a "fair coin" on a daily basis, not to mention more complex behaviors which probability theory predicts very well. If after that you're still skeptic of the correlation, I expect you to have strong evidence against the correlation. I predict you'd say that this reasoning is circular because the whole notion of "evidence" is sort of dependent on the axioms (Bayes etc.), in which case I can't help you any more than say that the given axioms are what they are precisely because of empirical observations.
In a huge oversimplification that's how math theories are constructed - you add or remove axioms until the stuff it predicts corresponds to stuff we observe. The correlation is the goal.
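A toy version of "the stuff it predicts corresponds to stuff we observe", granting (as a sceptic of course would not) that a pseudo-random number generator is an acceptable stand-in for a physical fair coin:

```python
import random

# Simulate many flips of a fair coin and compare the observed frequency of
# heads with the 0.5 that the probability axioms predict.
flips = 100_000
heads = sum(random.random() < 0.5 for _ in range(flips))
print(heads / flips)  # typically very close to 0.5, e.g. 0.4993
```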
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T13:02:47.003Z · LW(p) · GW(p)
No sceptic familiar with the Evil Demon Argument would agree that 2+2=4, as this assumes the mind remains undistorted which is part of what is under discussion.
My belief is belief on probabilities on faith in the religious sense, rather than on evidence, as I do not believe such evidence exists.
What you have is a giant circular argument, and therefore useless. A skeptic doubts the senses give actual evidence, they doubt math, and they doubt your axioms of probability. It is downright retarded to use one of those to prove the others.
Replies from: wwa↑ comment by wwa · 2014-01-28T14:06:08.473Z · LW(p) · GW(p)
A skeptic doubts the senses give actual evidence, they doubt math ...
How many skeptics walk off the cliff expecting to continue walking? If your skepticism is of the purely theoretical kind "sure, I doubt everything, but God (heh) forbid me act on these doubts" then I cannot help you either.
Besides, that's cherry-picking circularities. Let's go meta: don't you doubt your doubts? If you claim you can't calculate or measure the level of anything real because "that's axioms", what makes doubt in math/physics weaker than doubt in doubt in math/physics? And if neither is weaker than the other, why not walk off the cliff?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T15:30:15.881Z · LW(p) · GW(p)
Part one is ad hominem, and has no relation to the validity of the argument.
As for part 2, the point is not that the world is certainly an illusion but that we don't know either way. Given that, meta-doubts are implied.
For me personally, my position is that rationally there is no way out of skepticism but that I believe it false on religious-style faith.
comment by asr · 2014-01-28T03:26:12.493Z · LW(p) · GW(p)
Why should I take the skeptic seriously?
I cannot picture how I would live my life without coping with uncertainty. And I know that probability follows from various plausible axiomatizations of uncertainty (e.g., Cox's theorem).
This makes me suspect strongly that the skeptic is playing terminological games, since there's no actual substantive thing I could do differently if they convinced me.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T04:04:13.004Z · LW(p) · GW(p)
Can you clarify here? Starting from no assumptions whatsoever, how do things such as Cox's theorem get to the basic axioms of probability from which it can be inferred that the universe probably exists, that memories are probably accurate, and that induction probably works?
Replies from: asr↑ comment by asr · 2014-01-28T04:15:38.824Z · LW(p) · GW(p)
That's not the work Cox's theorem does. Cox's theorem tells you that if you believe in the universe, and you're going to deal with uncertainty, and you believe some (plausible) axioms, you should come out with something mathematically identical to probability.
I've never felt the need to justify my belief in the universe or that the past was roughly as I remember it. I've never seen any viable alternative to those beliefs.
Everybody I've ever met acts as though the universe exists. And you might say "we don't really know it, or even have evidence for it." But this feels like a terminological game. Everybody I've ever met seems to have some mental activity that looks like belief, that's updated in ways that look like induction. The burden is on the skeptic to label these things and construct an ontology that explains how we live our lives. I don't feel any need to justify acting as though the world exists.
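For readers who haven't met Cox's theorem, here is a compressed, informal statement of what it assumes and delivers, roughly following Jaynes' presentation (a sketch only, not the full technical statement):

```latex
% Cox's desiderata, informally (paraphrased from Jaynes' presentation):
% (i)   Degrees of plausibility are represented by real numbers.
% (ii)  They agree qualitatively with common sense (e.g. monotonicity).
% (iii) They are consistent: equivalent routes to a conclusion yield the same plausibility.
%
% The theorem: any such system can be rescaled to a function p obeying
\begin{align*}
  p(A \mid C) + p(\neg A \mid C) &= 1, \\
  p(A \wedge B \mid C) &= p(A \mid C)\, p(B \mid A \wedge C),
\end{align*}
% i.e. the ordinary sum and product rules of probability theory.
```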
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T05:06:41.783Z · LW(p) · GW(p)
Any account which assumes we do live our lives, or prescribes ways to do so, is not sceptical at all.
Besides, your argument is circular as you assume the world's existence. It also involves argument ad populum, appealing to popular belief rather than evidence. Showing that humans are incapable of believing X does not refute X.
Replies from: asr↑ comment by asr · 2014-01-28T05:22:11.502Z · LW(p) · GW(p)
I cheerfully plead guilty on all charges.
I am not a skeptic. I am unbothered by any logical circularity in my belief in objective reality. I see no reason to worry about a belief I am incapable of believing.
Honestly, I can't quite picture what it would be like to worry about such things, let alone believe them. If the universe doesn't exist, there's nothing you can do about it, so why waste energy thinking about the possibility?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T06:04:08.919Z · LW(p) · GW(p)
Circular arguments have no correlation with reality except by chance- you may as well make something up and believe it. It would make about as much sense.
It is correct that if skepticism is correct then there is nothing we can do. Logically speaking, since probability doesn't exist there is a probability of 100%.
Replies from: ChristianKl, asr↑ comment by ChristianKl · 2014-01-28T13:51:49.729Z · LW(p) · GW(p)
For skepticism to be correct you would need to show that it's possible to be a skeptic.
It's certainly possible to pretend to be a skeptic, but pretending to be a skeptic doesn't make you any more of a skeptic than pretending to be a duck makes you a duck.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T15:27:56.708Z · LW(p) · GW(p)
Not so. There is no logical connection between the feasibility of a human believing something and its truth. Something can be true and impossible to believe simultaneously.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-01-28T15:43:10.286Z · LW(p) · GW(p)
Something can be true and impossible to believe simultaneously.
I think that's the category that Wittgenstein summarizes as "things you can't talk about".
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T16:04:51.872Z · LW(p) · GW(p)
But we are talking about scepticism. It's an exception to the Wittgensteinian rule.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-01-28T16:30:41.743Z · LW(p) · GW(p)
I can also talk about weuisfdyhkj. It's a label. In itself not more meaningful than the label you use. You think that you know what the label means but if your brain can't simulate a reality behind the label it has no meaning. According to Wittgenstein we should therefore not speak about it.
Replies from: Carinthium↑ comment by Carinthium · 2014-02-01T15:03:09.233Z · LW(p) · GW(p)
I think I know my answer to this- I've realised my definition of "rational" subtly differs from LessWrong's. When you see mine, you'll see this wasn't my fault.
A set of rules is rational, I would argue, if that set of rules by its very nature must correlate with reality- if one applies those rules to the evidence, they must reveal what is true. Even if skepticism is false, then it is a mere coincidence that our assumptions that the world is not an illusion, our memories are accurate etc happened to be correct, as we had no rational rule that would show us that they were. We do not even have a rule by which we must rationally consider it probable.
One of the rules of such rationality is that pragmatism is explicitly ruled out. Pragmatic considerations have no necessary correlation with what is actually true, therefore they should not be considered in determining what is true. The consideration of whether human beings are or are not capable of believing something is a pragmatic consideration.
You claim that skepticism is incoherent. Firstly, this is circular as you assume things to get to this conclusion. Second, even if you take those assumptions humans are capable of understanding the concept of "I don't know". Applying this concept to absolutely everything is effectively what skepticism is.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-02-01T19:54:52.370Z · LW(p) · GW(p)
Applying this concept to absolutely everything is effectively what skepticism is.
But you are not applying it to everything. You have a strong belief in a platonic ideal of rationality on which you base your concept.
Take the Buddhists, who actually don't attach themselves to mental concepts. They have sayings such as: "If you meet the Buddha on the road, kill him".
You are not willing to accept that you don't know what skepticism happens to be, because you have an attachment to it. This is exactly what Wittgenstein's sentence is about. We shouldn't talk about those concepts.
The Buddhists also don't talk about it in a rational sense. They meditate and have a bunch of koans, but they are mystics. You just don't get to be a Platonic idealist and not a mystic and have skepticism be valid.
Replies from: Carinthium↑ comment by Carinthium · 2014-02-02T02:47:47.586Z · LW(p) · GW(p)
Not exactly Platonic- I have no belief whatsoever, on faith or reason, in ideal forms. As for why rationalism, I believe in it because rationalist arguments in this sense can be inherently self-justifying. This comes from starting from no assumptions.
However, I then show that such rationality fails in the long run to skeptical arguments of its own sort, just as other types of rationality do. I focus on it because it is the only one with a halfway credible answer to skepticism.
I have already shown I know what skepticism is- not knowing anything whatsoever. You haven't refuted this argument, given that "I don't know" is a valid Epistemic state.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-02-02T08:17:44.470Z · LW(p) · GW(p)
I have no belief whatsoever [...] I have already shown I know what skepticism is
Those two positions contradict each other. You can't have both. You claim at the same time to believe that you know what skepticism happens to be and that you know nothing.
Replies from: Carinthium↑ comment by Carinthium · 2014-02-02T08:26:33.255Z · LW(p) · GW(p)
I said earlier that I believe that rationally speaking, skepticism proves itself correct and ordinary ideas of rationalism prove themselves self-refuting. However, I believe on faith (in the religious sense) that skepticism is false, and have beliefs on faith accordingly.
Therefore, I sort of believe in a double truth, but in a coherent fashion.
↑ comment by asr · 2014-01-28T06:26:49.376Z · LW(p) · GW(p)
Circular arguments have no correlation with reality except by chance- you may as well make something up and believe it. It would make about as much sense.
I don't believe this is true. A circular argument is at least internally consistent, and that prunes away a lot of ways to be inconsistent with reality.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T06:35:11.017Z · LW(p) · GW(p)
This assumes the falsity of skepticism to begin with. Even then, it is possible for a circular argument to be internally inconsistent.
comment by gedymin · 2014-01-27T11:26:56.652Z · LW(p) · GW(p)
Belief in the axioms of probability theory is justified by the fact that someone with inconsistent beliefs can be Dutch-booked.
If you're willing to put money on your beliefs (i.e. bet on them), then you ought to accept the axioms in the first place; otherwise your opponent will always be able to come up with a combination of bets that will cause you to lose money no matter what happens.
This fact was proved by Bruno de Finetti in the 1930s. See e.g. AI: A Modern Approach for an easily approachable technical discussion.
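To make the Dutch book concrete, here is a minimal sketch (in Python, with made-up numbers rather than anything from de Finetti) of the simplest case: an agent whose credences in two mutually exclusive, exhaustive outcomes sum to more than 1 pays more for the pair of tickets than the single ticket that can ever pay out.

```python
# Minimal Dutch book sketch (illustrative only).
# The agent prices a ticket paying `stake` if outcome X occurs at credence(X) * stake.
# If credences in two mutually exclusive, exhaustive outcomes sum to more than 1,
# a bookie who sells the agent both tickets profits whichever outcome occurs.

def bookie_profit(credence_a: float, credence_b: float, stake: float = 1.0) -> float:
    """Guaranteed profit for a bookie selling both tickets at the agent's own prices."""
    collected = (credence_a + credence_b) * stake  # what the agent pays for the two tickets
    payout = stake                                 # exactly one of the two outcomes occurs
    return collected - payout

print(bookie_profit(0.9, 0.9))  # 0.8 -- e.g. "90% the sky is green, 90% it is blue"
print(bookie_profit(0.5, 0.5))  # 0.0 -- coherent credences leave no sure loss here
```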
Replies from: cousin_it, Carinthium↑ comment by cousin_it · 2014-01-27T12:33:24.400Z · LW(p) · GW(p)
I think de Finetti's justification is fine as far as it goes, but it doesn't go quite as far as people think it does. Here are a couple of dialogues to illustrate my point.
Dialogue 1
A: I have secretly flipped a fair coin and looked at the result. What's your probability that the coin came up heads?
B: I guess it's 50%.
A: Great! Will you accept a bet against me that the coin came up heads, at 1:1 odds?
B: Hmm, no, that doesn't seem fair because you already know the outcome of the coinflip and chose the bet accordingly.
A: So rational agents shouldn't necessarily accept either side of a bet according to their stated beliefs?
B: I suppose so.
Dialogue 2
A: I believe the sky is green with probability 90% and also blue with probability 90%.
B: Great! I can Dutch book you now. Here's a bet I want to make with you.
A: No, I don't wanna accept that bet. The theory doesn't force me to, as we learned in Dialogue 1.
Replies from: lmm, gedymin, Manfred↑ comment by lmm · 2014-01-27T12:36:19.883Z · LW(p) · GW(p)
In Dialogue 1 I adjust my probability estimate as the bet is offered, no?
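As a rough sketch of that updating step (the likelihoods below are assumed for illustration, not anything stated in the dialogue): if B suspects that A mostly offers this bet after seeing tails, Bayes' theorem pushes B's credence in heads well below 50%, which is why the 1:1 odds look unfair.

```python
# Hedged sketch of B updating on the offer itself, with hypothetical likelihoods.
# Bayes: P(heads | offer) = P(offer | heads) * P(heads) / P(offer)

def posterior_heads(prior_heads: float,
                    p_offer_given_heads: float,
                    p_offer_given_tails: float) -> float:
    """B's credence in heads after A (who saw the coin) offers a bet against heads."""
    numerator = p_offer_given_heads * prior_heads
    denominator = numerator + p_offer_given_tails * (1.0 - prior_heads)
    return numerator / denominator

# If A almost only makes this offer after seeing tails:
print(posterior_heads(0.5, 0.05, 0.95))  # 0.05 -- B should decline 1:1 odds on heads
```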
Replies from: cousin_it↑ comment by cousin_it · 2014-01-27T12:53:37.568Z · LW(p) · GW(p)
That's a reasonable thing to do, but can you obtain something like De Finetti's justification of probability via Dutch books that way?
Replies from: JGWeissman↑ comment by JGWeissman · 2014-01-27T17:19:27.844Z · LW(p) · GW(p)
The obvious steelman of dialogue participant A would keep the coin hidden and unexamined, but ready to inspect later, so that A can offer bets while credibly ignorant of the outcome and B isn't justified in updating on A's offering the bet.
↑ comment by gedymin · 2014-01-27T12:38:32.211Z · LW(p) · GW(p)
Russell & Norvig:
"One common objection to de Finetti’s theorem is that this betting game is rather contrived. For example, what if one refuses to bet? Does that end the argument? The answer is that the betting game is an abstract model for the decision-making situation in which every agent is unavoidably involved at every moment. Every action (including inaction) is a kind of bet, and every outcome can be seen as a payoff of the bet. Refusing to bet is like refusing to allow time to pass."
I think a fair bet presupposes that both opponents will have access to the same amount of information, which is not the case in Dialogue 1. The bets in life are not always fair, but that has nothing to do with belief in probability axioms.
Replies from: pragmatist↑ comment by pragmatist · 2014-01-28T04:44:54.326Z · LW(p) · GW(p)
That Russell & Norvig quote doesn't appear to be a very good response to the objection it's addressing. De Finetti's argument is supposed to be a pragmatic argument for probabilism. In response to someone asking "Why should my beliefs obey the probability calculus?", de Finetti says "If you don't, you'll end up getting screwed (by being susceptible to Dutch books)."
The response to de Finetti that Russell & Norvig are considering is "There are ways to get around susceptibility to dutch books other than accepting probabilism. For instance, I could formulate a policy of refusing to accept bets. Why is probabilism the right way to deal with susceptibility to dutch books?" Russell & Norvig are saying "Well, this is a thought experiment situation in which you are forced to bet."
OK, but that completely ruins the pragmatic appeal of de Finetti's theorem. I can feel the attraction of probabilism if it's the only way I'm protected against being screwed in reality. But not if it's the only way I'm protected against being screwed in an abstract model that doesn't match reality. Why should I care about getting screwed in a thought experiment?
↑ comment by Carinthium · 2014-01-27T11:43:13.213Z · LW(p) · GW(p)
This makes assumptions, such as the existence of the world and the existence of bets, which a global sceptic would not accept.
Replies from: gedymin↑ comment by gedymin · 2014-01-27T11:52:45.747Z · LW(p) · GW(p)
If you don't believe in the existence of the external world, then you shouldn't be worrying about the "notion of probability having a correlation with reality" in the first place. The OP presupposes the existence of the world. Do not shift the goalposts.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T12:09:37.408Z · LW(p) · GW(p)
I never said I disbelieved in it; I'm postulating a position, not accepting one. It is also worth noting that in my postulated position I was neither accepting nor rejecting the existence of said external world.
As I mentioned, this argument started out on the basis of trying to figure out if something could be known without assumptions that could not themselves be justified. If assumptions are necessary to know anything, it means that we effectively believe on religious-style faith.
Replies from: gedymin↑ comment by gedymin · 2014-01-27T12:27:30.083Z · LW(p) · GW(p)
I was arguing against a rhetorical "you", identified with the sceptic, not you personally.
That said, extreme skepticism is altogether a different game from skepticism about probabilities. The latter is reasonable; the former, although it cannot be falsified, is useless. Reality does not go away when one stops believing in it.
Logical skepticism, on the other hand, is self-defeating. To make a logical argument against the possibility of logical arguments, against the value of reasoning, is self-contradictory.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T12:38:52.243Z · LW(p) · GW(p)
The logical sceptic could argue that they are showing that logic is self-defeating: that when logic is taken to its ultimate conclusion it is shown to be false, and therefore, logically, logic should be rejected. This is precisely what I would argue.
As for the matter of reality: if it exists, then of course it doesn't go away when we stop believing in it. But how do we know that?
Replies from: JQuinton, gedymin↑ comment by JQuinton · 2014-01-27T15:34:13.688Z · LW(p) · GW(p)
If logic is pennies, and blue is a sock, then why are cats punctuation marks? Yes, because your mask is Maryland, and WiFi smokes bananas.
(This is what happens when you live in a world without logic, and is the only response you should have to someone who is a logic skeptic).
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T02:55:44.871Z · LW(p) · GW(p)
See my response to gedymin below.
↑ comment by gedymin · 2014-01-27T12:48:43.189Z · LW(p) · GW(p)
Logically, "self-defeating" is not equal to "not self-defeating". If the skeptic rejects logic, then he should accept that "self-defeating" is equal to "not self-defeating". Therefore, if logic is self-defeating, then logic is also not self-defeating.
As for the second point: the epistemic perspective is more important than the ontological one. Seriously, read the conclusion of "The Simple Truth".
This debate is getting silly, I'm out of here.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T12:56:48.482Z · LW(p) · GW(p)
On the first point, if you reach a conclusion within logic that marks logic as "self-defeating", then from a logical perspective logic doesn't work. Non-logic doesn't matter except to those who aren't logical; for a logical person, logic matters.
On the second point, once you start postulating actually viable alternatives, such as the world not existing, and considering the Evil Demon Argument, there is nothing in "The Simple Truth" that actually deals with them.
comment by ChristianKl · 2014-01-27T10:50:23.312Z · LW(p) · GW(p)
I think that depends a lot on the specific skepticism you are dealing with.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-27T10:56:18.877Z · LW(p) · GW(p)
The scepticism I think is most problematic is total scepticism. I've already explained the argument against it, and how scepticism about probability deals with that argument.
comment by Dagon · 2014-01-28T08:21:21.981Z · LW(p) · GW(p)
how you answer a skeptic about ... reality
The standard answer is a punch in the nose. I have yet to meet a claimant to skepticism willing to let me perform this experiment enough times to get a trustworthy result.
Lighter-weight skeptics (those willing to at least tentatively accept some postulates about reality being real, and the validity of predicting future experiences) generally have no problem with "I can't justify these from first principles, but I'm using them until I can think of better".
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T08:49:18.614Z · LW(p) · GW(p)
That idea proves nothing, and you know it.
Replies from: Dagon↑ comment by Dagon · 2014-01-28T20:54:28.214Z · LW(p) · GW(p)
It proves nothing to the skeptic. To the rest of us, it proves that the skeptic actually believes in reality, at least enough to flinch.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-29T02:22:53.373Z · LW(p) · GW(p)
Why are you applying ad hominem selectively? You wouldn't use an ad hominem argument in most things; why is the skeptic an exception?
Replies from: Dagon↑ comment by Dagon · 2014-01-29T07:51:23.629Z · LW(p) · GW(p)
This isn't ad-hominem. I don't care which skeptic it is. I'm simply pointing out a pretty severe inconsistency between stated beliefs and actions. I use a similar tactic on a lot of topics where I don't have the time or skill to do ground-up research (and to help decide where it's worth the time). If the proponents of an idea behave very inconsistently with the idea, I update more strongly on their behavior than their statements.
The skeptic is making a prediction, that there is no probability or causality (they usually say "there is no basis for" rather than "there is no", but a quick recital of Tarski makes them equivalent). Anything can happen! If observed actions are inconsistent with that belief, that's evidence I should update on.
Note that the skeptic can't use this information, because there's no justification for belief that observation has any information about reality.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-29T08:14:11.832Z · LW(p) · GW(p)
Ad hominem means arguing not from the evidence but from the character of the person presenting it. This is bad because it leads people to instinctively ignore arguments from those they dismiss rather than considering them.
In this case it is also circular, as you presume the existence of the skeptic, which you should not be able to know.
Replies from: Dagon↑ comment by Dagon · 2014-01-29T08:52:09.514Z · LW(p) · GW(p)
Wait. How do you justify any belief in what any statement or action will "lead people to" do?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-29T09:18:08.199Z · LW(p) · GW(p)
In reality, I believe in non-skepticism on religious faith while thinking that, rationally speaking, skepticism is true. I slip up from time to time.
I should note, however, that a lot of my argument is that the rules of logic themselves suggest problems with beliefs as they currently stand- namely those surrounding circular arguments.
comment by Kawoomba · 2014-01-27T17:14:06.924Z · LW(p) · GW(p)
"the question isn’t how to arrive at the Truth, but rather how to eliminate error. Which sounds kind of obvious, until I meet yet another person who rails to me about how empirical positivism can’t provide its own ultimate justification, and should therefore be replaced by the person’s favorite brand of cringe-inducing ugh." -- Scott Aaronson
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T02:54:05.999Z · LW(p) · GW(p)
Complete elimination of error would logically imply knowing the truth.
Something like empirical positivism is a castle in the air: it makes assumptions with no basis in reality.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-01-28T14:01:53.691Z · LW(p) · GW(p)
Given that it happens within physical brains, it obviously does have at least some basis in reality.
Genuine deep skepticism doesn't happen in real brains and therefore has no basis in reality.
Replies from: Carinthium↑ comment by Carinthium · 2014-01-28T15:24:05.257Z · LW(p) · GW(p)
Circular argument: you assume a basis in reality, which assumes skepticism is wrong.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-01-28T15:45:26.151Z · LW(p) · GW(p)
Either there is a reality, and then there is a basis, or there is no reality in the first place and it's meaningless to speak about things having a basis in reality.
I mean, do you believe that reality has a basis in reality?
Replies from: Carinthium↑ comment by Carinthium · 2014-01-30T03:32:21.581Z · LW(p) · GW(p)
I think we mean different things by "basis in reality". I use it to refer to something correlating with the real world, together with evidence that demonstrates such a connection to be either probable or certain. Probability, of course, can only do that work if probability were itself somehow demonstrated valid.
Circular arguments do not count as a basis in reality; hence your argument, which assumes the existence of physical brains, does not work.