Popperian Decision Making
post by curi · 2011-04-07T06:42:38.957Z · LW · GW · Legacy · 101 comments
Branching from: http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3uta?context=4
The question is: how do you make decisions without justifying decisions, and without foundations?
If you can do that, I claim the regress problem is solved. Whereas induction, for example, is refuted by the regress problem (no, arbitrary foundations or circular arguments are not solutions).
OK stepping back a bit, and explaining less briefly:
Infinite regresses are nasty problems for epistemologies.
All justificationist epistemologies have an infinite regress.
That means they are false. They don't work. End of story.
There are options, of course. Don't want a regress? No problem. Have an arbitrary foundation. Have an unjustified proposition. Have a circular argument. Or have something else even sillier.
The regress goes like this, and the details of the justification don't matter.
If you want to justify a theory, T0, you have to justify it with another theory, T1. Then T1 needs justifying by T2. Which needs justifying by T3. Forever. And if T25 turns out wrong, then T24 loses its justification. And with T24 unjustified, T23 loses its justification. And it cascades all the way back to the start.
I'll give one more example. Consider probabilistic justification. You assign T0 a probability, say 99.999%. Never mind how or why, the probability people aren't big on explanations like that. Just do your best. It doesn't matter. Moving on, what we have to wonder is whether that 99.999% figure is correct. If it's not correct then it could be anything, such as 90% or 1% or whatever. So it better be correct. So we better justify that it's a good theory. How? Simple. We'll use our whim to assign it a probability of 99.99999%. OK! Now we're getting somewhere. I put a lot of 9s so we're almost certain to be correct! Except, what if I had that figure wrong? If it's wrong it could be anything such as 2% or 0.0001%. Uh oh. I better justify my second probability estimate. How? Well we're trying to defend this probabilistic justification method. Let's not give up yet and do something totally different; instead we'll give it another probability. How about 80%? OK! Next I ask: is that 80% figure correct? If it's not correct, the probability could be anything, such as 5%. So we better justify it. So it goes on and on forever.

Now there are two problems. First, it goes on forever and you can't ever stop: you've got an infinite regress. Second, suppose you stopped after some very large but finite number of steps. Then the probability that the first theory is correct can be made arbitrarily small -- the longer the chain, the smaller it gets. Because remember that at each step we didn't even have a guarantee, only a high probability. And if you roll the dice a lot of times, even with very good odds, eventually you lose. And you only have to lose once for the whole thing to fail.
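To make that last point concrete, here is a small illustrative calculation (my own numbers, assuming each link in the chain holds with probability 0.999 and the conclusion needs every link):

```python
# Illustrative only: a chain of merely-probable justification steps.
# Assume each step is correct with probability p, and the conclusion
# stands only if every step in the chain is correct.
p = 0.999
for steps in (10, 100, 1_000, 10_000):
    print(steps, p ** steps)
# 10      ~0.990
# 100     ~0.905
# 1000    ~0.368
# 10000   ~0.000045
```

Even with very good odds at each step, a long enough chain is almost certainly broken somewhere.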
OK so regresses are a nasty problem. They totally ruin all justificationist epistemologies. That's basically every epistemology anyone cares about except skepticism and Popperian epistemology. And forget about skepticism, that's more of an anti-epistemology than an epistemology: skepticism consists of giving up on knowledge.
Now we'll take a look at Popper and Deutsch's solution. In my words, with minor improvements.
Regresses all go away if we drop justification. Don't justify anything, ever. Simple.
But justification had a purpose.
The purpose of justification is to sort out good ideas from bad ideas. How do we know which ideas are any good? Which should we believe are true? Which should we act on?
BTW that's the same general problem that induction was trying to address. And induction is false. So that's another reason we need a solution to this issue.
The method of addressing this issue has several steps, so try to follow along.
Step 1) You can suggest any ideas you want. There are no rules; suggest anything you have the slightest suspicion might be useful. The source of the ideas, and the method of coming up with them, doesn't matter to anything. This part is easy.
Step 2) You can criticize any idea you want. There are no rules here either. If you don't understand it, that's a criticism -- it should have been easier to understand. If you find it confusing, that's a criticism -- it should have been clearer. If you think you see something wrong with it, that's a criticism -- it shouldn't have been wrong in that way, *or* it should have included an explanation so you wouldn't make a mistaken criticism. This step is easy too.
Step 3) All criticized ideas are rejected. They're flawed. They're not good enough. Let's do better. This is easy too. Only the *exact* ideas criticized are rejected. Any idea with at least one difference is deemed a new idea. It's OK to suggest new ideas which are similar to old ideas (in fact it's a good idea: when you find something wrong with an idea you should try to work out a way to change it so it won't have that flaw anymore).
Step 4) If we have exactly one idea remaining to address some problem or question, and no one wants to revisit the previous steps at this time, then we're done for now (you can always change your mind and go back to the previous steps later if you want to). Use that idea. Why? Because it's the only one. It has no rivals, no known alternatives. It stands alone as the only non-refuted idea. We have sorted out the good ideas from the bad -- as best we know how -- and come to a definite answer, so use that answer. This step is easy too!
Step 5) What if we have a different number of ideas left over which is not exactly one? We'll divide that into two cases:
Case 1) What if we have two or more ideas? This one is easy. There is a particular criticism you can use to refute all the remaining theories. It's the same every time so there's not much to remember. It goes like this: idea A ought to tell me why B and C and D are wrong. If it doesn't, it could be better! So that's a flaw. Bye bye A. On to idea B: if B is so great, why hasn't it explained to me what's wrong with A, C and D? Sorry B, you didn't answer all my questions, you're not good enough. Then we come to idea C and we complain that it should have been more help and it wasn't. And D is gone too since it didn't settle the matter either. And that's it. Each idea should have settled the matter by giving us criticisms of all its rivals. They didn't. So they lose. So whenever there is a stalemate or a tie with two or more ideas then they all fail.
Case 2) What if we have zero ideas? This is crucial because case one always turns into this! The answer comes in two main parts. The first part is: think of more ideas. I know, I know, that sounds hard. What if you get stuck? But the second part makes it easier. And you can use the second part over and over and it keeps making it easier every time. So you just use the second part until it's easy enough, then you think of more ideas when you can. And that's all there is to it.
OK so the second part is this: be less ambitious. You might worry: but what about advanced science with its cutting edge breakthroughs? Well, this part is optional. If you can wait for an answer, don't do it. If there's no hurry, then work on the other steps more. Make more guesses and think of more criticisms and thus learn more and improve your knowledge. It might not be easy, but hey, the problem we were looking at is how to sort out good ideas from bad ideas. If you want to solve hard problems then it's not easy. Sorry. But you've got a method, just keep at it.
But if you have a decision to make then you need an answer now so you can make your decision. So in that case, if you actually want to reach a state of having exactly one theory which you can use now, then the trick, when you get stuck, is to be less ambitious. I think you can see how that would work in general terms. Basically if human knowledge isn't good enough to give you an answer of a certain quality right now, then your choices are either to work on it more and not have an answer now, or accept a lower quality answer. You can see why there isn't really any way around that. There's no magic way to always get a top quality answer now. If you want a cure for cancer, well I can't tell you how to come up with one in the next five minutes, sorry.
This is a bit vague so far. How does lowering your standards address the problem? What you do is propose a new idea like this: "I need to do something, so I will do..." and then you put in whatever you want (idea A, idea B, some combination, whatever).
This new idea is not refuted by any of the existing criticisms. So now you have one idea, it isn't refuted, and you might be done. If you're happy with it, great. But you might not be. Maybe you see something wrong with it, or you have another proposal. That's fine; just go back to the first three steps and do them more. Then you'll get to step 4 or 5 again.
What if we get back here? What do we do the second time? The third time? We simply get less ambitious each time. The harder a time we're having, the less we should expect. And so we can start criticizing any ideas that aim too high.
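Here is a minimal sketch of that whole loop in code (the function names are hypothetical; the creative work of proposing and criticizing is deliberately left as black-box callables):

```python
def popperian_decision(problem, propose, criticize, less_ambitious):
    # Sketch of the conjecture/criticism loop described above.
    # propose(problem) returns candidate ideas; criticize(idea) returns a
    # criticism or None; less_ambitious(problem) lowers the ambition.
    ideas = propose(problem)                                    # Step 1: suggest anything
    while True:
        survivors = [i for i in ideas if criticize(i) is None]  # Steps 2-3: reject criticized ideas
        if len(survivors) == 1:
            return survivors[0]                                 # Step 4: a lone non-refuted idea
        if len(survivors) > 1:
            survivors = []  # Step 5, case 1: rivals that can't refute each other all fail
        # Step 5, case 2: no ideas left -- conjecture more, or lower the ambition
        problem = less_ambitious(problem)
        ideas = propose(problem)
```

Nothing in the sketch decides what counts as a good criticism; that is exactly the part the prose leaves to the thinker's judgment.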
BTW it's explained on my website here, including an example:
http://fallibleideas.com/avoiding-coercion
Read that essay, keeping in mind what I've been saying, and hopefully everything will click. Just bear in mind that when it talks about cooperation between people, and disagreements between people, and coming up with solutions for people -- when it discusses ideas in two or more separate minds -- everything applies exactly the same if the two or more conflicting ideas are all in the same mind.
What if you get really stuck? Well why not do the first thing that pops into your head? You don't want to? Why not? Got a criticism of it? It's better than nothing, right? No? If it's not better than nothing, do nothing! You think it's silly or dumb? Well so what? If it's the best idea you have then it doesn't matter if it's dumb. You can't magically instantly become super smart. You have to use your best idea even if you'd like to have better ideas.
Now you may be wondering whether this approach is truth-seeking. It is, but it doesn't always find the truth immediately. If you want a resolution to a question immediately then its quality cannot exceed today's knowledge (plus whatever you can learn in the time allotted). It can't do better than the best that is known how to do. But as far as long term progress goes, the truth-seeking came in those first three steps. You come up with ideas. You criticize those ideas. Thereby you eliminate flaws. Every time you find a mistake and point it out you are making progress towards the truth. That's how we approach the truth: not by justifying but by identifying mistakes and learning better. This is evolution, it's the solution to Paley's problem, it's discussed in BoI and on my Fallible Ideas website. And it's not too hard to understand: improve stuff, keep at it, and you get closer to the truth. Mistake correcting -- criticism -- is a truth-seeking method. That's where the truth-seeking comes from.
101 comments
Comments sorted by top scores.
comment by Scott Alexander (Yvain) · 2011-04-10T13:49:53.295Z · LW(p) · GW(p)
I am very inexperienced in epistemology, so forgive me if I'm making a simple error.
But it sounds like everything important in your theory is stuck into a black box in the words "criticize the idea".
Suppose we had a computer program designed to print the words "I like this idea" to any idea represented as a string with exactly 5 instances of the letter 'e' in it, and the words "I dislike this idea because it has the wrong number of 'e's in it" to any other idea.
And suppose we had a second computer program designed to print "I like this idea" to any idea printed on blue paper, and "I dislike this idea because it is on the wrong color paper" to any idea printed on any other color of paper.
These two computers could run through your decision making process of generating and criticizing ideas, and eventually would settle on the first idea generated which was written on blue paper and which used the letter 'e' exactly five times.
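(For concreteness, a sketch of the first of those two programs -- hypothetical code, just to show how little machinery can satisfy the letter of "criticize any idea you want":)

```python
def e_counting_critic(idea: str) -> str:
    # Approves any idea containing exactly five of the letter 'e';
    # "criticizes" everything else, as described above.
    if idea.count("e") == 5:
        return "I like this idea"
    return "I dislike this idea because it has the wrong number of 'e's in it"
```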
So it would seem that for this process to capture what we mean by "truth", you have to start out with some reasoners who already have a pretty good set of internal reasoning processes kind of like our own that they use when criticizing an idea.
But everything that's interesting and difficult about epistemology is captured in that idea of "a pretty good set of internal reasoning processes kind of like our own that they use when criticizing an idea", so really this decision-making process only works for entities that are already running a different epistemology that's doing all the work.
It almost seems like a detached lever fallacy, where the lever is the ability to criticize ideas, and the machinery the lever is activating is the actual epistemology the agent is using.
comment by cousin_it · 2011-04-07T08:26:45.451Z · LW(p) · GW(p)
Regarding the technical side of your post, if a Bayesian computer program assigns probability 0.87 to proposition X, then obviously it ought to assign probability 1 to the fact that it assigns probability 0.87 to proposition X. (If you don't trust your own transistors, add error-correction at the level of transistors, don't contaminate the software.) But it's hard to think of a situation where the program will need to make use of the latter probability.
Regarding the substance, I think you disagree with popular opinion on LW because there are two possible meanings of "epistemology":
1) If a human wants to have rational beliefs, what rulebook should they follow?
2) If we want to write a computer program that arrives at rational beliefs, what algorithms should we use?
From your posts and comments it looks like you're promoting Popperianism as an answer to (1). The problem is, it's pretty hard to determine whether a given answer to (1) is right, wrong or meaningless, when it's composed of mere words (cognitive black boxes) and doesn't automatically translate to an answer for (2). So most LWers think that (2) is really the right question to ask, and any non-confused answer to (2) ought to dissolve any leftover confusion about (1).
PS: this might be slightly off topic, but any discussion of anti-Bayesianism ought to contain links to two texts by Cosma Shalizi, the most interesting anti-Bayesian that I know of.
↑ comment by curi · 2011-04-07T08:59:14.503Z · LW(p) · GW(p)
I don't agree with either meaning of epistemology. The traditional meaning of epistemology, which I accept, is the study of knowledge, and in particular questions like What is knowledge? and How do we sort out good ideas from bad ideas? and How is knowledge created?
Both of your definitions of the field have Bayesian ways of thinking already built into them. They are biased.
If you don't want Bayesianism to be an epistemology, that would be OK with me. But for example Yudkowsky claimed that Bayesianism was dethroning Popperism. To do that it has to be an epistemology and deal with the same questions Popper addresses.
Popperian epistemology does not offer any rulebook. It says rulebooks are an authoritarian and foundationalist mistake, which comes out of the attempt to find a source of justification. (Well, the psychological claims are not important and not epistemology. But Popper did occasionally say things like that, and I think it's true)
I will take a look at your links, thanks. I respect that author a lot for this post on why heritability studies are wrong:
http://cscs.umich.edu/~crshalizi/weblog/520.html
(1). The problem is, it's pretty hard to determine whether a given answer to (1) is right, wrong or meaningless, when it's composed of mere words (cognitive black boxes) and doesn't automatically translate to an answer for (2). So most LWers think that (2) is really the right question to ask, and any non-confused answer to (2) ought to dissolve any leftover confusion about (1).
Note that Popperians think there is no algorithm that automatically arrives at rational beliefs. There's no privileged road to truth. AIs will not be more rational than people. OK they usually won't have a few uniquely human flaws (like, umm, caring if they are fat). But there is no particular reason to expect this stuff will be replaced with correct ideas. Whatever AIs think of instead will have its own mistakes. It's the same kind of issue as if some children were left on a deserted island to form their own culture. They'll avoid various mistakes from our culture, but they will also make new ones. The rationality of AIs, just like the rationality of the next generation, depends primarily on the rationality of the educational techniques used (education is closely connected to epistemology in my view, because it's about learning, i.e. creating knowledge. Popperian epistemology has close connections to educational theory which led to the philosophy "Taking Children Seriously" by David Deutsch).
↑ comment by cousin_it · 2011-04-07T09:16:50.412Z · LW(p) · GW(p)
I'm willing to reformulate like this:
1) How can a human sort out good ideas from bad ideas?
2) How can a computer program sort out good ideas from bad ideas?
and the subsequent paragraph can stay unchanged. Whatever recipe you're proposing to improve human understanding, it ought to be "reductionist" and apply to programs too, otherwise it doesn't meet the LW standard. Whether AIs can be more rational than people is beside the point.
↑ comment by curi · 2011-04-07T09:43:32.849Z · LW(p) · GW(p)
I don't think you understood the word "reductionist". Reductionism doesn't mean that things can be reduced to lower levels but that they should be -- it actually objects to high level statements and considers them worse. There's no need for reductionism of that kind for ideas to be applicable to low level issues like being programmable.
Yes Popperian epistemology can be used for an AI with the reformulations (at least: I don't know any argument that it couldn't).
Why aren't we there yet? There aren't a lot of Popperians, Popperian philosophy does not seek to be formal which makes it harder to translate into code, and most effort has been directed at human problems (including criticizing large mistakes plaguing the field of philosophy, which also affect regular people and permeate our culture). The epistemology problems important to humans are not all the same as the ones important to writing an AI. For an AI you need to worry about what information to start it with. Humans are born with information, we don't yet have the science to control that, so there is only limited reason to worry about it. Similarly there is the issue of how to educate a very young child. No one knows the answer to that in words -- they can do it by following cultural traditions but they can't explain it. But for AIs, how to deal with the very young stages is important.
Broadly an AI will need a conjecture generator, a criticism generator, and a criticism evaluator. Humans have these built in. So again the problems for AI are somewhat different than what's important for, e.g., explaining epistemology to human adults.
You may think the details of these things in humans are crucially important. The reason they aren't is that they are universal, so implementation details don't affect anything much about our lives.
It's still interesting to think about. I do sometimes. I'll try to present a few issues. In abstract terms we would be content with a random conjecture generator, and with sorting through infinitely many conjectures. But we can't program it like that -- too slow. You need shortcuts. A big one is you generate new conjectures by taking old conjectures and making random but limited changes to them. How limited is a good idea? I don't know how to quantify that. Moving on, there is an issue of: do you wait until conjectures are created and then criticize them afterwards? Or do you program it in such a way that conjectures which would be refuted by a criticism can sometimes not be generated in the first place, as a kind of optimization? I lean towards the second view, but I don't know how to code it. I'm partial to the notion of using criticisms as filters on the set of possible conjectures. There's no danger of getting stuck, or losing universality if the filters can be disabled as desired, and modified as desired, and they don't prevent conjectures that would want to modify them. That raises another issue which is: can people think themselves into a bad state they can't get out of? I don't know if that's impossible or not. I don't think it happens in practice (yes people can be really dumb, but i don't think they are even close to impossible to get out of). If it was technically possible for an AI to get stuck, would that be a big deal? You can see here perhaps some of the ways I don't care for rulebooks.
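A rough sketch of the "criticisms as filters" idea from the paragraph above (all names are hypothetical; this is one possible shape, not a worked-out design):

```python
import random

def vary(conjectures, mutate, n=100):
    # Generate new conjectures by making random but limited changes to old ones.
    # How limited the changes should be is exactly the open question above.
    return [mutate(random.choice(conjectures)) for _ in range(n)]

def apply_filters(conjectures, filters):
    # Criticisms acting as filters on the pool of candidate conjectures.
    # Filters can be disabled or modified, so universality isn't lost.
    surviving = conjectures
    for f in filters:
        if f.enabled:
            surviving = [c for c in surviving if not f.refutes(c)]
    return surviving
```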
BTW one of the things our theory tells us is you can never build half an AI. It will jump straight from very minimal functionality to universal functionality, just as computer programming languages do. (The "jump to universality" is discussed by David Deutsch in The Beginning of Infinity). One thing this means is there is no way to know how far along we are -- the jump could come at any time with one new insight.
Whether AIs can be more rational than people is beside the point.
Is it? What good are they, then? I have some answers to that, but nothing really huge. If they aren't assumed to be super rational geniuses then they can't be expected to quickly bring about the singularity or that kind of thing.
↑ comment by timtyler · 2011-04-07T13:38:37.315Z · LW(p) · GW(p)
BTW one of the things our theory tells us is you can never build half an AI. It will jump straight from very minimal functionality to universal functionality, just as computer programming languages do. (The "jump to universality" is discussed by David Deutsch in The Beginning of Infinity). One thing this means is there is no way to know how far along we are - the jump could come at any time with one new insight.
That sounds pretty bizarre. So much for the idea of progress via better and better compression and modeling. However, it seems pretty unlikely to me that you actually know what you are talking about here.
↑ comment by curi · 2011-04-07T19:10:30.395Z · LW(p) · GW(p)
Insulting my expertise is not an argument. (And given that you know nothing about my expertise, it's silly too. Concluding that people aren't experts because you disagree with them is biased and closed minded.)
Are you familiar with the topic? Do you want me to give you a lecture on it? Will you read about it?
↑ comment by timtyler · 2011-04-07T16:19:11.350Z · LW(p) · GW(p)
Reductionism doesn't mean that things can be reduced to lower levels but that they should be -- it actually objects to high level statements and considers them worse.
Conventionally, and confusingly, the word reductionism has two meanings:
Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents.
↑ comment by cousin_it · 2011-04-07T09:55:15.995Z · LW(p) · GW(p)
Is it? What good are they, then?
I didn't say it was false, just irrelevant to the current discussion of what we want from a theory of knowledge.
You could use math instead of code. To take a Bayesian example, the Solomonoff prior is uncomputable, but well-defined mathematically and you can write computable approximations to it, so it counts as progress in my book. To take a non-Bayesian example, fuzzy logic is formalized enough to be useful in applications.
Anyway, I think I understand where you're coming from, and maybe it's unfair to demand new LW-style insights from you. But hopefully you also understand why we like Bayesianism, and that we don't even think of it at the level you're discussing.
↑ comment by curi · 2011-04-07T10:33:06.567Z · LW(p) · GW(p)
I understand some. But I think you're mistaken and I don't see a lot to like when judged by the standards of good philosophy. Philosophy is important. Your projects, like inventing an AI, will run into obstacles you did not foresee if your philosophy is mistaken.
Of course I have the same criticism about people in all sorts of other fields. Architects or physicists or economists who don't know philosophy run into problems too. But claiming to have an epistemology, and claiming to replace Popper, those are things most fields don't do. So I try to ask about it. Shrug.
I think I figured out the main idea of Bayesian epistemology. It is: Bayes' theorem is the source of justification (this is intended as the solution to the problem of justification, which is a bad problem).
But when you start doing math, it's ignored, and you get stuff right (at least given the premises, which are often not realistic, following the proud tradition of game theory and economics). So I should clarify: that's the main philosophical claim. It's not very interesting. Oh well.
↑ comment by [deleted] · 2011-04-07T14:46:11.362Z · LW(p) · GW(p)
I think I figured out the main idea of Bayesian epistemology. It is: Bayes' theorem is the source of justification (this is intended as the solution to the problem of justification, which is a bad problem).
No. See here, where Eliezer specifically says that this is not the case. ("But first, let it be clearly admitted that the rules of Bayesian updating, do not of themselves solve the problem of induction.")
↑ comment by [deleted] · 2011-04-07T13:15:52.753Z · LW(p) · GW(p)
Note that Popperians think there is no algorithm that automatically arrives at rational beliefs. There's no privileged road to truth. AIs will not be more rational than people. OK they usually won't have a few uniquely human flaws (like, umm, caring if they are fat). But there is no particular reason to expect this stuff will be replaced with correct ideas. Whatever AIs think of instead will have its own mistakes. It's the same kind of issue as if some children were left on a deserted island to form their own culture. They'll avoid various mistakes from our culture, but they will also make new ones. The rationality of AIs, just like the rationality of the next generation, depends primarily on the rationality of the educational techniques used (education is closely connected to epistemology in my view, because it's about learning, i.e. creating knowledge.
This is mostly irrelevant to your main point, but I'm going to talk about it because it bothered me. I don't think anyone on LessWrong would agree with this paragraph, since it assumes a whole bunch of things about AI that we have good reasons to not assume. The rationality of an AI will depend on its mind design--whether it has biases built into its hardware or not is up to us. In other words, you can't assert that AIs will make their own mistakes because this assumes things about the mind design of the AI, things that we can't assume because we haven't built it yet. Also, even if an AI does have its own cognitive biases, it still might be orders of magnitude more rational than a human being.
↑ comment by curi · 2011-04-07T19:13:38.764Z · LW(p) · GW(p)
I'm not assuming stuff by accident. There is serious theory for this. AI people ought to learn these ideas and engage with them, IMO, since they contradict some of your ideas. If we're right, then you need to make some changes to how you approach AI design.
So for example:
The rationality of an AI will depend on its mind design--whether it has biases built into its hardware or not is up to us.
If an AI is a universal knowledge creator, in what sense can it have a built in bias?
↑ comment by timtyler · 2011-04-07T19:22:09.907Z · LW(p) · GW(p)
I'm not assuming stuff by accident. There is serious theory for this. AI people ought to learn these ideas and engage with them, IMO, since they contradict some of your ideas.
Astrology also conflicts with "our ideas". That is not in itself a compelling reason to brush up on our astrology.
↑ comment by [deleted] · 2011-04-07T19:19:50.020Z · LW(p) · GW(p)
If an AI is a universal knowledge creator, in what sense can it have a built in bias?
I don't understand this sentence. Let me make my view of things clearer: An AI's mind can be described by a point in mind design space. Certain minds (most of them, I imagine) have cognitive biases built into their hardware. That is, they function in suboptimal ways because of the algorithms and heuristics they use. For example: human beings. That said, what is a "universal knowledge creator?" Or, to frame the question in the terms I just gave, what is its mind design?
↑ comment by curi · 2011-04-07T19:27:51.284Z · LW(p) · GW(p)
Certain minds (most of them, I imagine) have cognitive biases built into their hardware.
That's not what mind design space looks like. It looks something like this:
You have a bunch of stuff that isn't a mind at all. It's simple and it's not there yet. Then you have a bunch of stuff that is a fully complete mind capable of anything that any mind can do. There's also some special cases (you could have a very long program that hard codes how to deal with every possible input, situation or idea). AIs we create won't be special cases of that type which are a bad kind of design.
This is similar to the computer design space, which has no half-computers.
what is a "universal knowledge creator?"
A knowledge creator can create knowledge in some repertoire/set. A universal one can do any knowledge creation that any other knowledge creator can do. There is nothing in the repertoire of any other knowledge creator that is not also in its own.
Human beings are universal knowledge creators.
Are you familiar with universality of computers? And how very simple computers can be universal? There's a lot of parallel issues.
↑ comment by [deleted] · 2011-04-07T20:45:52.315Z · LW(p) · GW(p)
You have a bunch of stuff that isn't a mind at all. It's simple and it's not there yet. Then you have a bunch of stuff that is a fully complete mind capable of anything that any mind can do. There's also some special cases (you could have a very long program that hard codes how to deal with every possible input, situation or idea). AIs we create won't be special cases of that type which are a bad kind of design. This is similar to the computer design space, which has no half-computers.
I'm somewhat skeptical of this claim--I can design a mind that has the functions 0(n) (zero function), S(n) (successor function), and P(x0, x1,...xn) (projection function) but not primitive recursion, I can compute most but not all functions. So I'm skeptical of this "all or little" description of mind space and computer space.
However, I suspect it ultimately doesn't matter because your claims don't directly contradict my original point. If your categorization is correct and human beings are indeed universal knowledge creators, that doesn't preclude the possibility of us having cognitive biases (which it had better not do!). Nor does it contradict the larger point, which is that cognitive biases come from cognitive architecture, i.e. where one is located in mind design space.
Are you familiar with universality of computers? And how very simple computers can be universal? There's a lot of parallel issues.
If you're referring to Turing-completeness, then yes I am familiar with it.
↑ comment by curi · 2011-04-07T21:03:38.574Z · LW(p) · GW(p)
I'm somewhat skeptical of this claim--I can design a mind that has the functions 0(n) (zero function), S(n) (successor function), and P(x0, x1,...xn) (projection function) but not primitive recursion, I can compute most but not all functions. So I'm skeptical of this "all or little" description of mind space and computer space.
How is that a mind? Maybe we are defining it differently. A mind is something that can create knowledge. And a lot, not just a few special cases. Like people who can think about all kinds of topics such as engineering or art. When you give a few simple functions and don't even have recursion, I don't think it meets my conception of a mind, and I'm not sure what good it is.
If your categorization is correct and human beings are indeed universal knowledge creators, that doesn't preclude the possibility of us having cognitive biases (which it had better not do!).
In what sense can a bias be very important (in the long term), if we are universal? We can change it. We can learn better. So the implementation details aren't such a big deal to the result, you get the same kind of thing regardless.
Temporary mistakes in starting points should be expected. Thinking needs to be mistake tolerant.
↑ comment by JoshuaZ · 2011-04-07T19:50:13.099Z · LW(p) · GW(p)
Also, even if an AI does have its own cognitive biases, it still might be orders of magnitude more rational than a human being.
Or orders of magnitude less rational. This isn't terribly germane to your original point but it seemed worth pointing out. We really have no good idea what the minimum amount of rationality actually is for an intelligent entity.
↑ comment by timtyler · 2011-04-07T16:14:03.062Z · LW(p) · GW(p)
Regarding the technical side of your post, if a Bayesian computer program assigns probability 0.87 to proposition X, then obviously it ought to assign probability 1 to the fact that it assigns probability 0.87 to proposition X.
I am pretty sure that is wrong. For one thing it would be overconfident. For another 0 and 1 are not probabilities.
But it's hard to think of a situation where the program will need to make use of the latter probability.
It's a measure of how much confidence there is in the estimate, so it could be used when updating in response to evidence. High confidence there would mean that it takes a lot of new evidence to shift the 0.87 estimate.
↑ comment by cousin_it · 2011-04-07T16:47:01.689Z · LW(p) · GW(p)
Your last paragraph is wrong. Here's an excruciatingly detailed explanation.
Let's say I am a perfect Bayesian flipping a possibly biased coin. At the outset I have a uniform prior over all possible biases of the coin between 0 and 1. Marginalizing (integrating) that prior, I assign 50% probability to the event of seeing heads on the first throw. Knowing my own neurons perfectly, I believe all the above statements with probability 100%.
The first flip of the coin will still make me update the prior to a posterior, which will have a different mean. Perfect knowledge of myself doesn't stop me from that.
Now skip forward. I have flipped the coin a million times, and about half the results were heads. My current probability assignment for the next throw (obtained by integrating my current prior) is 50% heads and 50% tails. I have monitored my neurons diligently throughout the process, and am 100% confident of their current state.
But it will take much more evidence now to change the 50% assignment to something like 51%, because my prior is very concentrated after seeing a million throws.
The statement "I have perfect knowledge of the current state of my prior" (and its integral, etc.) does not in any way imply that "my current prior is very concentrated around a certain value". It is the latter, not the former, that controls my sensitivity to evidence.
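A small numerical check of that point, using the standard Beta-Binomial model the example describes (uniform prior = Beta(1, 1); the specific numbers are just illustration):

```python
# After h heads and t tails, the posterior over the coin's bias is Beta(1+h, 1+t),
# so the probability assigned to heads on the next flip is the posterior mean.
def p_next_heads(h, t):
    return (1 + h) / (2 + h + t)

print(p_next_heads(0, 0))              # 0.5 before any flips
print(p_next_heads(1, 0))              # ~0.667 -- the first flip moves the estimate a lot
print(p_next_heads(500_000, 500_000))  # ~0.5 after a million flips
print(p_next_heads(500_100, 500_000))  # ~0.50005 -- 100 extra heads barely move it
```

The agent's certainty about its own current state plays no role here; only the concentration of the prior does.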
↑ comment by timtyler · 2011-04-07T17:38:25.769Z · LW(p) · GW(p)
Your last paragraph is wrong. Here's an excruciatingly detailed explanation.
That does clarify what you originally meant. However, this still seems "rather suspicious" - due to the 1.0:
if a Bayesian computer program assigns probability 0.87 to proposition X, then obviously it ought to assign probability 1 to the fact that it assigns probability 0.87 to proposition X.
↑ comment by cousin_it · 2011-04-07T17:43:03.054Z · LW(p) · GW(p)
I'm willing to bite the bullet here because all hell breaks loose if I don't. We don't know how a Bayesian agent can ever function if it's allowed (and therefore required) to doubt arbitrary mathematical statements, including statements about its own algorithm, current contents of memory, arithmetic, etc. It seems easier to just say 1.0 as a stopgap. Wei Dai, paulfchristiano and I have been thinking about this issue for some time, with no results.
comment by Desrtopa · 2011-04-07T15:24:50.831Z · LW(p) · GW(p)
I'm not all that sure that this is going anywhere helpful, but since curi has asked for objections to Critical Rationalism, I might as well make mine known.
My first objection is that it attempts to "solve" the problem of induction by doing away with an axiom. Yes, if you try to prove induction, you get circularity or infinite regress. That's what happens when you attempt to prove axioms. The Problem of Induction essentially amounts to noticing that we have axioms on which inductive reasoning rests.
Popperian reasoning could be similarly "refuted" simply by refusing to accept any or all of its axioms. The axiom of induction that Critical Rationalism rejects is one that we have every reason to suspect is true, save that we cannot prove it, which is as good as axioms get, so Critical Rationalism is not any more secure.
Second, with respect to the principle of criticism, it gives too much leeway to a mere clever arguer. Philosophy as a discipline is testament to the fact that humans can criticize each others' ideas endlessly without coming to meaningful consensuses. Wittgenstein criticized Popper. Was that the end of Popper? No, Popper and supporters countered criticisms, which were met with counter-counters, and so on till today and almost certainly beyond. For humans operating in natural language, following this principle does not have a good track record of getting people to promote good ideas over bad ones, when compared to distinction based on plausibility and evidence.
↑ comment by curi · 2011-04-07T20:11:01.448Z · LW(p) · GW(p)
My first objection is that it attempts to "solve" the problem of induction by doing away with an axiom.
Proposed axioms can be mistakes. Do we need that axiom? Popper says the argument that we need it is mistaken. That could be an important and valid insight if true, right?
Popperian reasoning could be similarly "refuted" simply by refusing to accept any or all of its axioms.
You are applying foundationalist and justificationist criticism to a philosophy which has, as one of its big ideas, that those ways of thinking are mistaken. That is not a good answer to Popper's system.
The axiom of induction that Critical Rationalism rejects is one that we have every reason to suspect is true
No, it's worse than that. Try specifying the axiom. How are ideas justified?
If the answer you give is "they are justified by other ideas, which themselves must be justified" then that isn't merely not proven but wrong. That doesn't work.
If the answer you give is, "they are justified by other ideas which are themselves justified, or the following set of foundational ideas ..." then the problem is not merely that you can't prove your foundations are correct, but that this method of thinking is anti-critical and not sufficiently fallibilist.
Fallibilism teaches that people make mistakes. A lot. It is thus a bad idea to try to find the perfect truth and set it up to rule your thinking, as the unquestionable foundations. You will set up mistaken ideas in the role of foundations, and that will be bad. What we should do instead is to try to set up institutions which are good at (for example):
1) making errors easy to find and question -- highlighting rather than hiding them
2) making ideas easy to change when we find errors
Popper's solution to the problem of justification isn't merely dropping an axiom but involves advocating an approach along these lines (with far more detail than I can type in). It's substantive. And rejecting it on the grounds that "you can't prove it" or "i refuse to accept it" is silly. You should give a proper argument, or reconsider rejecting it.
Second, with respect to the principle of criticism, it gives too much leeway to a mere clever arguer.
It is non-obvious how it doesn't. There is a legitimate issue here. But the problem can be and is dealt with. I do not have this problem in practice, and I know of no argument that I really have it in theory. You give Wittgenstein as an example, and suggest things go back and forth kind of indefinitely. Some replies:
1) stuff gets resolved or figured out. broad consensuses get reached. there isn't a rule to force people not to be idiots, but being an idiot isn't a fulfilling life and people tire of it when they understand a better way. this isn't easy but it happens.
2) adding a rule to require people to listen to your conception of reason would not solve the problem. they might refuse your premises.
3) I don't agree with your pessimism. most people don't try hard enough to organize their thinking and reach conclusions. but it can be done, in practice. you can look at issues in depth, and actually sort through all the arguments and counter arguments. it takes patience, persistence and effort to do high quality learning, but progress is possible.
following this principle does not have a good track record
As I see it, it is responsible for the enlightenment, in broad terms. i'm not trying to claim all the credit but the things done in the enlightenment were basically compatible with what i'm advocating. Also for the golden age of athens (plato and aristotle are not representative of the height of athens, they lived just after. it's the pre-socratics i have in mind, such as xenophanes.)
one aspect of the enlightenment was rebelling against authority (in particular religious authority, and in particular religious authority as applied to science. also in particular, political authority). this is broadly in line with my philosophy. and i think not so much in line with an attempt to set up foundational axioms for everyone to obey to prevent them from thinking wrong.
That said, Popper's insights beyond description of how people already learn have little track record. What they do have is pretty positive. Popper and his philosophy of science in particular has been respected by many scientists great and small, like Einstein, Deutsch, Feynman, Wheeler, Medawar, Eccles, Monod.
Anyway, back to those clever people. They can be dealt with. Wittgenstein is easy to argue with, he's so bad. Do you have an example of some argument you don't know how to resolve and want me to resolve?
↑ comment by Desrtopa · 2011-04-07T21:16:19.723Z · LW(p) · GW(p)
Edit:
I wrote a response up, but I deleted it because I think this is getting too confrontational to be useful. I have plenty of standing objections to Critical Rationalism, but I don't think I can pose them without creating an attitude too adversarial to be conducive to changing either of our minds. I hate to be the one to start bringing this in again, but I think perhaps if you want to continue this discussion, you should read the Sequences, at which point you should hopefully have some understanding of how we are convinced that Bayesianism improves people's information processing and decisionmaking from a practical standpoint. I will be similarly open to any explanations of how you feel Critical Rationalism improves these things (let me be very clear that I'm not asking for more examples of people who approved of Popper or things you feel Critical Rationalism can take credit for, show me how people who can only be narrowly interpreted as Popperian outperform people who are not Popperian.) I have standing objections to Popper's critiques of induction, but this is what I actually care about and am amenable to changing my mind on the basis of.
↑ comment by curi · 2011-04-07T21:34:03.082Z · LW(p) · GW(p)
The reason I'm not very interested in carefully reading your Sequences is that I feel they miss the point and aren't useful (that is, useful to philosophy. lots of your math is nice). In my discussions here, I have not found any reason to think otherwise.
show me how people who can only be narrowly interpreted as Popperian outperform people who are not Popperian
Show it how? I can conjecture it. Got a criticism?
↑ comment by Desrtopa · 2011-04-07T21:43:09.317Z · LW(p) · GW(p)
Responses to criticisms are not interesting to me; proponents of any philosophy can respond to criticisms in ways that they are convinced are satisfying, and I'm not impressed that supporters of Critical Rationalism are doing a better job. If you cannot yourself come up with a convincing way to demonstrate that Critical Rationalism results in improved success in ways that supporters of other philosophies cannot, why should I take it seriously?
↑ comment by curi · 2011-04-07T21:45:40.212Z · LW(p) · GW(p)
What would you find convincing? What convinced you of Bayes' or whatever you believe?
↑ comment by Desrtopa · 2011-04-07T21:53:35.953Z · LW(p) · GW(p)
Examples of mistakes in processing evidence people make in real life which lead to bad results, and how Bayesian reasoning resolves them, followed by concrete applications such as the review of the Amanda Knox trial.
Have you already looked at the review of the Amanda Knox trial? If you haven't, it might be a useful point for us to examine.
It doesn't help anyone to point out an example of inductive reasoning, say "this is a mistake" because you reject the foundations of inductive reasoning, but not demonstrate how rejecting it leads to better results than accepting it. So far the examples you have given of the supposed benefits of Critical Rationalism have been achievements of people who can only be loosely associated with Critical Rationalism, or arguments in a frame of Critical Rationalism for things that have already been argued for outside a frame of Critical Rationalism.
↑ comment by curi · 2011-04-08T00:40:20.271Z · LW(p) · GW(p)
Inductive reasoning doesn't lead to any results, ever.
No one has ever used it.
The theory they have is a mistake.
This cannot be demonstrated in the way you request. It can only be argued. e.g. by beginning with the question: what precisely does induction say to do? (which has never been successfully answered.)
have people done stuff similar to induction, and did it work OK? well that depends on philosophical understanding of what is and isn't similar to something that doesn't make sense. i'm not very inclined to start calling any coherent things similar to any incoherent ones.
Many Popperian insights are of this type: they are philosophical ideas.
So far the examples you have given of the supposed benefits of Critical Rationalism have been achievements of people who can only be loosely associated with Critical Rationalism
What are you talking about? I don't think you know much about the philosophies of the people I listed. They aren't all just loosely associated.
↑ comment by Desrtopa · 2011-04-08T20:47:09.396Z · LW(p) · GW(p)
This doesn't address my requests at all.
So what if nobody has ever used induction? I'm convinced that Popper is wrong, but without any evidence that following his epistemology produces improved results, I don't see why I should be interested in the possibility that he's right. Even supposing induction is merely an approximation of how we really gain knowledge, it's a computable approximation which produces results that are at least as viable, so there's no reason why it not being the "real" method of knowledge production should matter, for AI or for humans.
What are you talking about? I don't think you know much about the philosophies of the people I listed. They aren't all just loosely associated.
Then explain specifically what each of them have achieved that could not have been achieved equally well had they not been Critical Rationalists, and why these achievements are due to Critical Rationalism. Or hell, explain what any of them have achieved that's unambiguously due to critical rationalism.
↑ comment by curi · 2011-04-08T21:25:29.050Z · LW(p) · GW(p)
Even supposing induction is merely an approximation
Popper says it's not.
Does that matter to you?
Then explain specifically what each of them have achieved that could not have been achieved equally well had they not been Critical Rationalists, and why these achievements are due to Critical Rationalism. Or hell, explain what any of them have achieved that's unambiguously due to critical rationalism.
You are challenging me to explain things to you which you could learn about on your own if you wanted. You want me to answer questions you chose not to research. That is OK, but...
Before that you were dismissive. So I'm not sure if I want to help answer your questions about scientists. Are you a person with intellectual integrity who is worth talking to? Help me decide. You made a statement about the people I had listed, without knowing much about the people I had listed, and in particular without knowing if they all only have a loose association with CR or not. You falsely asserted they did all have a loose association only. You were mistaken to speak from ignorance about scientists -- assuming I was wrong without even asking -- and now you would like to learn better and change your mind. Is that correct?
Note for example that Deutsch has published two books advocating Popperian philosophy and talking extensively about Popper. That isn't a loose association. Even wikipedia level knowledge of these people would be sufficient not to make the mistake you did. You had less than that level of knowledge and posted anyway. Do you want to apologize, retract your statements, or anything? Or do you want to get mad at me now? I want to test your reaction.
↑ comment by Desrtopa · 2011-04-08T21:50:10.877Z · LW(p) · GW(p)
Popper says it's not.
Does that matter to you?
I don't see why it should, unless you can demonstrate that it leads to different results. This is what I expect in order to have an interest in this discussion, please provide it if you want me to continue to participate.
You are challenging me to explain things to you which you could learn about on your own if you wanted. You want me to answer questions you chose not to research. That is OK, but...
Why should I commit to reading a large amount of material without an indication that it contains useful ideas? That's an opportunity cost, time I could be dedicating to countless other things including any other philosophy. You yourself haven't committed the time to reading the Sequences, and have demonstrated basic level misunderstandings of the positions we hold here; you're applying a double standard in your expectations.
I am giving you ample opportunity to convince me that doing this research is worth my time, and am becoming less and less patient as you fail to provide anything I consider a meaningful incentive.
↑ comment by jimrandomh · 2011-04-08T01:05:48.109Z · LW(p) · GW(p)
Inductive reasoning doesn't lead to any results, ever. No one has ever used it.
Go read Jaynes' defense of Laplace's rule of succession (which is an example of inductive reasoning) in Chapter 18.
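(For readers who don't have Jaynes at hand: Laplace's rule of succession says that after s successes in n trials, starting from a uniform prior over the success rate, the probability of success on the next trial is (s+1)/(n+2). A one-function sketch:)

```python
def rule_of_succession(successes, trials):
    # Posterior predictive probability of success on the next trial,
    # assuming a uniform prior over the unknown success rate.
    return (successes + 1) / (trials + 2)

print(rule_of_succession(0, 0))    # 0.5 with no data
print(rule_of_succession(9, 10))   # ~0.833 after 9 successes in 10 trials
```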
↑ comment by JoshuaZ · 2011-04-08T01:35:48.251Z · LW(p) · GW(p)
Inductive reasoning doesn't lead to any results, ever.
No one has ever used it.
This is so empirically false that I don't know how to approach it. Do you actually think that when people are saying that they are using induction they really aren't? Note that this isn't the same claim that people shouldn't be using induction or that their induction is unjustified. But claiming they are not using it is just wrong unless you are using some very non-standard terminology under which one could say things like "No one has ever used homeopathy." This seems like an abuse of language.
↑ comment by curi · 2011-04-08T09:53:35.043Z · LW(p) · GW(p)
Post the method of induction, step by step, in sufficient detail that a reasonable person could do it without having to ask any questions.
When you fail -- in particular by having large unspecified parts -- it will be because you are wrong about the issue in question.
When you respond to this failure by making ad hoc additions that still don't provide followable instructions, then I will stop talking to you.
OK, go ahead.
↑ comment by Peterdjones · 2011-04-14T16:46:49.524Z · LW(p) · GW(p)
Which exposes one of the problems with Popperianism: it leads to the burden of proof being shifted to the wrong place. The burden should be with whoever proposes a claim, or whoever makes the most extraordinary claim. Popperianism turns it into a burden of disproof on the refuter. All you have to do is "get in first" with your conjecture, and you can sit back and relax. "Why, you have to show Barack Obama is NOT an alien".
↑ comment by JoshuaZ · 2011-04-10T04:11:01.070Z · LW(p) · GW(p)
Show it how? I can conjecture it. Got a criticism?
I'm replying a second time to this remark because thinking about it more it illustrates a major problem you are having. You are using a specific set of epistemological tools and notation when that is one of the things that is in fact in dispute. That's unproductive and is going to get people annoyed.
It is all the more severe because many of these situations are cases where the specific epistemology doesn't even matter. For example, the claim under discussion is the claim that " people who can only be narrowly interpreted as Popperian outperform people who are not Popperian" That's something that could be tested regardless of epistemology. To use a similar example, if someone is arguing for Christianity and they claim that Christians have a longer lifespan on average then I don't need to go into a detailed discussion about epistemology to examine the data. If I read a paper, in whatever branch of science, I could be a Popperian, a Bayesian, completely uncommitted, or something else, and still read the paper and come to essentially the same results.
Trying to discuss claims using exactly the framework in question is at best unproductive, and is in general unhelpful.
↑ comment by curi · 2011-04-10T04:26:45.421Z · LW(p) · GW(p)
I'm replying a second time to this remark because thinking about it more it illustrates a major problem you are having. You are using a specific set of epistemological tools and notation when that is one of the things that is in fact in dispute. That's unproductive and is going to get people annoyed.
Sure I am. But so are you! We can't help but use epistemological tools. I use the ones I regard as actually working. As do you. I'm not sure what you're suggesting I do instead.
If you want me to recognize what I'm doing, I do. If you want me to consider other toolsets, I have. In depth. (Note, btw, that I am here voluntarily choosing to learn more about how Bayesians think. I like to visit various communities. Simultaneously I'm talking to Objectivists (and reading their books) who have a different way of approaching epistemology.) The primary reason some people have accused me (and Brian) of not understanding Bayesian views and other views not our own isn't our unfamiliarity but because we disagree and choose to think differently than them and not to accept or go along with various ideas.
When people do stuff like link me to http://yudkowsky.net/rational/bayes it is their mistake to think I haven't read it. I have. They think if I read it I would change my mind; they are just plain empirically wrong; I did read it and did not change my mind. They ought to learn something from their mistake, such as that their literature is less convincing than they think it is.
On the other hand, no one here is noticeably familiar with Popper. And no one has pointed me to any rigorous criticism of Popper by any Bayesian. Nor any rigorous rebuttal to Popper's criticisms of Bayesianism (especially the most important ones, that is, the philosophical not the mathematical ones).
The situation is that Popper read up on Bayesian stuff, and many other ideas, engaged with and criticized other ideas, formulated his own ideas, rebutted criticisms of his views, and so on. That is all good stuff. Bayesians do it some too. But they haven't done it with Popper; they've chosen to disregard him based on things like his reputation, and misleading summaries of his work. At best some people here have read his first book, which is not the right one to start with if you want to understand what he's about, and gotten a very unrepresentative picture of what Popper is about. This disregarding of Popper without engaging with the bulk of his work is no good.
The same thing can be found, btw, in Objectivist circles. Here's what happened when my friend asked Harry Binswanger (a big shot who knew Rand personally for a long time) about Popper: Binswanger gave Popper quotes attributed to the wrong book and briefly stated a few myths about Popper. And in one of the quotes he inserted a clarifying word. It began roughly, "That doctrine [realism]", when Popper wasn't talking about realism; Binswanger hadn't read the context (or had misread it, but I think he hadn't read it, since he didn't know which book the quotes were from). When confronted with his mistakes he basically ignored them and said "I'm right anyway" except, amazingly, without the "anyway" part. I think you may be happy to jump on the dumb Objectivists. But from my perspective, the reception here hasn't been better. In some ways the Objectivists were superior. They provided some relevant published material on the matter (it was badly wrong, but at least they had something).
It is all the more severe because many of these situations are cases where the specific epistemology doesn't even matter.
As I was talking about in other comments recently (edit: oops, I actually wrote a different post first but haven't managed to post it yet due to the rate limit. It's a reply to you, so you can find it in your inbox in 10 minutes. I'll post it next), all mistakes matter. It doesn't work to ignore mistakes, thinking they aren't relevant, and just keep going and hope they won't bite you. You know, I've gotten a bunch of flak where people say Popper isn't rigorous enough and Bayesian stuff is more rigorous. But it's not quite like that. Popper thought that certain kinds of formalness used by philosophers were mistakes, said why, and didn't do them (especially in his later works). But for other issues, the Popperian attitude is more rigorous. We don't gloss over small mistakes. We think they all matter! Is that not being more rigorous in a way? Maybe you think it's rigor of the wrong kind. But that's a substantive disagreement.
BTW your entire question asking for empirical evidence that a non-empirical philosophy produces better results is itself a product of your epistemological tools. Popperians regard that as something of a bad question. That's why you don't get the direct answer you expect. It's not evasion but disagreement about your premises. Large parts of philosophy are not empirical and can't really be judged empirically. And there are so many issues that make a rigorous answer to what you want impracticable. No philosophy can answer it, because there are too many uncontrolled factors. People always have lots of ideas of a variety of types, and imperfect understanding of the philosophy they are associated with.
Replies from: hairyfigment↑ comment by hairyfigment · 2011-04-10T04:44:59.596Z · LW(p) · GW(p)
I'm not sure what you're suggesting I do instead.
Tell us what you think your tool does better, in some area where we see a problem with Bayes. (And I do mean Bayes' Theorem as a whole, not a part of it taken out of context.)
Show it how? I can conjecture it. Got a criticism?
Seems like any process that leads to harmful priors can also produce a criticism of your position as you've explained it so far. As I mentioned before, the Consistent Gambler's Fallacy would lead us to criticize any theory that has worked in the past.
↑ comment by JoshuaZ · 2011-04-08T14:25:19.595Z · LW(p) · GW(p)
show me how people who can only be narrowly interpreted as Popperian outperform people who are not Popperian
Show it how? I can conjecture it. Got a criticism?
Yes. There's no reason to conjecture this other than your own personal preference. I could conjecture that people with red hair perform better and that would have almost as much basis.
(The mere act of asserting a hypothesis is not a reason to take it seriously.)
comment by [deleted] · 2011-04-07T10:50:44.563Z · LW(p) · GW(p)
You can criticize any idea you want. There are no rules, again. If you don't understand it, that's a criticism -- it should have been easier to understand. If you find it confusing, that's a criticism -- it should have been clearer. If you think you see something wrong with it, that's a criticism -- it shouldn't have been wrong in that way, or it should have included an explanation so you wouldn't make a mistaken criticism. This step is easy too. Step 3) All criticized ideas are rejected. They're flawed. They're not good enough.
tl;dr
I've just criticised your idea. Your idea is not good enough. You have to come up with a new one.
Or is there some level of criticism that doesn't count because it's not good enough either?
Replies from: curi↑ comment by curi · 2011-04-07T19:51:23.313Z · LW(p) · GW(p)
Your criticism is generic. It would work on all ideas equally well. It thus fails to differentiate between ideas or highlight a flaw in the sense of something which could possibly be improved on.
So, now that I've criticized your criticism (and the entire category of criticisms like it), we can reject it and move on.
Replies from: None, prase↑ comment by [deleted] · 2011-04-07T20:25:00.690Z · LW(p) · GW(p)
My criticism is not generic. It would not work on an idea which consisted of cute cat pictures. Therefore, your criticism of my criticism does not apply.
I can continue providing specious counter-counter-...counter-criticisms until the cows come home. I don't see how your scheme lets sensible ideas get in edgeways against that sort of thing.
Anyhow, criticism of criticisms wasn't in your original method.
Replies from: curi↑ comment by curi · 2011-04-07T20:38:51.190Z · LW(p) · GW(p)
I can continue providing specious counter-counter-...counter-criticisms
If you understand they are specious, then you have a criticism of them.
Anyhow, criticism of criticisms wasn't in your original method.
Criticisms are themselves ideas/conjectures and should themselves be criticized. And I'm not saying this ad hoc, I had this idea before posting here.
Replies from: None↑ comment by [deleted] · 2011-04-07T20:42:05.048Z · LW(p) · GW(p)
I understand they are specious, but I'm not using your epistemology to determine that. What basis do you have for saying that they are specious?
Replies from: curi↑ comment by curi · 2011-04-07T20:54:35.782Z · LW(p) · GW(p)
It doesn't engage with the substance of my idea. It does not explain what it regards as a flaw in the idea.
Unless you meant the tl;dr as your generic criticism, and the flaw you are trying to explain is that all good ideas should be short and simple. Do you want me to criticize that? :-)
Replies from: None↑ comment by [deleted] · 2011-04-07T21:18:50.894Z · LW(p) · GW(p)
What I'm trying to get at is: By your system, the idea to be accepted is the one without an uncountered criticism. What matters isn't any external standard of whether the criticism is good or bad, just whether it has been countered. But any criticism, good or bad, can be countered by a (probably bad) criticism, so your system doesn't offer a way to distinguish between good criticism and bad criticism.
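To make the shape of this objection concrete, here is a toy sketch (my own illustration in Python, not a description of anyone's actual procedure; the example ideas and criticisms are made up): if "accepted" just means "has no uncountered criticism", then the bookkeeping only tracks who spoke last and never consults the quality of any move.

    # Toy sketch of the objection above (illustrative only).
    criticisms = {}  # maps a statement to the criticisms aimed at it

    def criticize(target, criticism):
        criticisms.setdefault(target, []).append(criticism)

    def stands(statement):
        # a statement stands if no criticism of it itself stands
        return all(not stands(c) for c in criticisms.get(statement, []))

    criticize("idea", "tl;dr")                # a (bad) criticism
    criticize("tl;dr", "that's too generic")  # a counter-criticism
    print(stands("idea"))  # True -- nothing here measured whether either move was any good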
Replies from: curi↑ comment by curi · 2011-04-07T21:36:43.874Z · LW(p) · GW(p)
You have to conjecture standards of criticism (or start with cultural ones). Then improve them by criticism, and perhaps by conjecturing new standards.
If you want to discuss some specific idea, say gardening, you can't discuss only gardening in a very isolated way. You'll need to at least implicitly refer to a lot of background knowledge, including standards of criticism.
One way this differs from foundations is that if you think a standard of criticism reaches the wrong conclusion about gardening, you can argue from your knowledge of gardening backwards (as some would see it) to criticize the standard of criticism for getting a wrong answer.
Replies from: None↑ comment by [deleted] · 2011-04-07T21:40:08.120Z · LW(p) · GW(p)
How can you expect that criticizing your standards of criticism will be productive if you don't have a good standard of criticism in the first place?
Replies from: curi↑ comment by curi · 2011-04-07T21:43:55.893Z · LW(p) · GW(p)
Many starting points work fine.
In theory, could you get stuck? I don't have a proof either way.
I don't mind too much. Humans already have standards of criticism which don't get stuck. We have made scientific progress. The standards we already have allow self-modification and thereby unbounded progress. So it doesn't matter what would have happened if we had started with a bad standard once upon a time; we're past that (it does matter if we want to create an AI).
Replies from: None↑ comment by [deleted] · 2011-04-07T21:49:28.000Z · LW(p) · GW(p)
You would definitely get stuck. The problem Khoth pointed out is that your method can't distinguish between good criticism and bad criticism. Thus, you could criticize any standard that you come up with, but you'd have no way of knowing which criticisms are legitimate, so you wouldn't know which standards are better than others.
I agree that in practice we don't get stuck, but that's because we don't use the method or the assumptions you are defending.
Replies from: curi↑ comment by curi · 2011-04-07T21:51:31.923Z · LW(p) · GW(p)
Thus, you could criticize any standard
I meant stuck in the sense of couldn't get out of. Not in the sense of could optionally remain stuck.
I agree that in practice we don't get stuck, but that's because we don't use the method or the assumptions you are defending.
What's the argument for that?
We have knowledge about standards of criticism. We use it. Objections about starting points aren't very relevant because Popperians never said they were justified by their starting points. What's wrong with this?
Replies from: None↑ comment by [deleted] · 2011-04-07T21:59:59.553Z · LW(p) · GW(p)
I meant stuck in the sense of couldn't get out of. Not in the sense of could optionally remain stuck.
I don't think there's a way out if your method doesn't eventually bottom out somewhere. If you don't have a reliable or objective way of distinguishing good criticism from bad, the act of criticism can't help you in any way, including trying to fix this standard.
We have knowledge about standards of criticism. We use it. Objections about starting points aren't very relevant because Popperians never said they were justified by their starting points. What's wrong with this?
If you don't have objective knowledge of standards of criticism and you are unwilling to take one as an axiom, then what are you justified by?
Replies from: curi↑ comment by curi · 2011-04-08T00:34:26.464Z · LW(p) · GW(p)
If you don't have objective knowledge of standards of criticism and you are unwilling to take one as an axiom, then what are you justified by?
Nothing. Justification is a mistake. The request that theories be justified is a mistake. They can't be. They don't need to be.
If you don't have a reliable or objective way of distinguishing good criticism from bad, the act of criticism can't help you in any way, including trying to fix this standard.
Using the best ideas we know of so far is a partially reliable, partially objective way which allows for progress.
↑ comment by prase · 2011-04-07T20:02:32.896Z · LW(p) · GW(p)
Doesn't this create an infinite regress of criticisms, if you try hard enough? (Your countercriticism is also generic, when it applies to the whole category.)
Replies from: curi↑ comment by curi · 2011-04-07T20:19:21.547Z · LW(p) · GW(p)
If you try hard enough you can refuse to think at all.
Popperian epistemology helps people learn who want to. It doesn't provide a set of rules that, if you follow them exactly while trying your best not to make progress, then you will learn anyway. We only learn much when we seriously try to, with good intentions.
You can always create trivial regresses, e.g. by asking "why?" infinitely many times. But that's different than the following regress:
If you assert "theories should be justified, or they are crap"
and you assert "theories are justified in one way: when they are supported by a theory which is itself justified"
Then you have a serious problem to deal with which is not the same type as asking "why?" forever.
Rejecting "things which reject entire categories" is not a precise way to state which theories should be rejected. You are correct that the version I wrote can be improved to be clearer and more precise. One of the issues is whether a criticism engages with the substance of the idea it is criticizing, or not. "All ideas are wrong" (for example) doesn't engage with any of the explanations that the ideas it rejects give, it doesn't point out flaws in them, and it doesn't help us learn. Criticisms which don't help us learn better are no good -- the whole purpose and meaning of criticism, as we conceive it, is to explain a flaw so we can learn better.
One issue this brings up is that communication is never 100% precise. There is always ambiguity. If a person wants to, he can interpret everything you say in the worst possible way. If he does so, he will sabotage your discussion. But if he follows Popper's (not unique or original) advice to try to interpret ideas he hears as the best version they could mean -- to try to figure out good ideas -- then the conversation can work better.
comment by FAWS · 2011-04-07T07:28:02.587Z · LW(p) · GW(p)
You assign T0 a probability, say 99.999%. Never mind how or why, the probability people aren't big on explanations like that. Just do your best. It doesn't matter. Moving on, what we have to wonder if that 99.999% figure is correct.
Subjective probabilities don't work like that. Your subjective probability just is what it is. In Bayesian terms, the closest thing to a "real" probability is whatever probability estimate is the best you can do with the available data. There is no "correct" or "incorrect" subjective probability, just predictably doing worse than possible to different degrees.
Replies from: Matt_Simpson, None↑ comment by Matt_Simpson · 2011-04-07T07:56:33.449Z · LW(p) · GW(p)
There is no "correct" or "incorrect" subjective probability, just predictably doing worse than possible to different degrees.
There is a correct P(T0|X), where X is your entire state of information. Probabilities aren't, strictly speaking, subjective; they're subjectively objective.
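To make "subjectively objective" concrete, here is a minimal sketch (the numbers are assumed for illustration and come from no one's comment): once a prior and the likelihoods are fixed, Bayes' theorem leaves exactly one admissible value for P(T0|X).

    # Minimal sketch with made-up numbers: Bayes' theorem pins down P(T0 | X).
    prior_T0 = 0.5          # P(T0) before seeing the evidence X (assumed)
    p_X_given_T0 = 0.9      # P(X | T0) (assumed)
    p_X_given_not_T0 = 0.2  # P(X | not T0) (assumed)

    p_X = p_X_given_T0 * prior_T0 + p_X_given_not_T0 * (1 - prior_T0)
    posterior_T0 = p_X_given_T0 * prior_T0 / p_X
    print(posterior_T0)  # ~0.818 -- the unique value these inputs determine

Given the same state of information, any other number is simply a calculation error, which is the sense in which the probability is not a matter of choice.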
Replies from: FAWS↑ comment by FAWS · 2011-04-07T08:35:06.282Z · LW(p) · GW(p)
"Subjectively objective" just means that trying to do the best you can doesn't leave any room for choice. You can argue that you aren't really talking about probabilities if you knowingly do worse than you could, but that's just a matter of semantics.
↑ comment by [deleted] · 2011-04-07T08:06:00.943Z · LW(p) · GW(p)
Are you saying that there is no regress problem? Yudkowsky disagrees. And so do other commenters here, one of whom called it a "necessary flaw".
Replies from: FAWS, Manfred, timtyler↑ comment by FAWS · 2011-04-07T08:16:39.157Z · LW(p) · GW(p)
Are you saying that there is no regress problem?
No, just that it doesn't manifest itself in the form of a pyramid of probabilities of probabilities being "correct". There certainly is the problem of priors, and the justification for reasoning that way in the first place (which were sketched by others in the other thread).
↑ comment by Manfred · 2011-04-07T15:24:20.367Z · LW(p) · GW(p)
Yeah, you're making a flawed argument by analogy. "There's an infinite regress in deductive logic, so therefore any attempt at justification using probability will also lead to an infinite regress." The reason that probabilistic justification doesn't run into this (or at least, not the exact analogous thing) is that "being wrong" is a definite state with known properties, that is taken into account when you make your estimate. This is very unlike deductive logic.
comment by drethelin · 2011-04-07T18:39:51.686Z · LW(p) · GW(p)
I fail to see how, in practical terms, this is at all better than using induction-based reasoning. It may make you feel and look smarter to tell someone they can't prove or disprove anything with certainty, but that's not exactly a stunning endorsement. You can't actually ACT as if nothing is ever conclusively true. I would like to see a short description as to WHY this is a better way to view the world and update your beliefs about it.
Replies from: curi↑ comment by curi · 2011-04-07T19:49:26.013Z · LW(p) · GW(p)
You can't actually ACT as if nothing is ever conclusively true.
But I do act that way. I am a fallibilist. Are you denying fallibilism? Some people here endorsed it. Is there a Bayesian consensus on it?
Why do you think I don't act like that?
I would like to see a short description as to WHY this is a better way to view the world and update your beliefs about it.
Because it works and makes sense. If you want applications to real-life fields, you can find applications to parenting, relationships, and capitalism here:
Replies from: GuySrinivasan↑ comment by SarahNibs (GuySrinivasan) · 2011-04-07T20:23:00.420Z · LW(p) · GW(p)
I was interested in applications to capitalism. Is there a place on that site other than the one titled "Capitalism" which shows applications to capitalism? I saw nothing there involving fallibilism or acting as if nothing is ever conclusively true.
Replies from: curi↑ comment by curi · 2011-04-07T20:32:46.088Z · LW(p) · GW(p)
I'll just quickly write something for you:
Capitalism is a part of liberalism. It applies liberal ideas, such as individual freedom, to economic issues, and thus advocates, for example, free trade.
What might we consider instead of freedom? Force.
Liberalism hates force. It wants all disputes to be resolved without the use of force. This leads to capitalist ideas (taking capitalism seriously) such as: that taxes are a use of force which should be improved on; that people don't have a right to bread provided by someone else (who becomes, in a small way, their slave); etc.
The best argument against force comes from fallibilism. This was first discussed by the liberal philosopher William Godwin.
It is: in any disagreement, we might be wrong. The other guy might be right. Therefore, we should not impose our will on him. That isn't truth seeking, and truth seeking is needed because we don't know who is right and shouldn't assume it's us.
Force is inherently irrational because it decides who is right based on the source of the ideas in question (or, if you prefer, it denies that the other guy has an idea, or something like that).
Why is it initiating force in particular that is bad, but defense is OK? Because defense does not sabotage truth seeking. The outcome already wasn't going to be decided based on reason when the first guy initiated force. Defense doesn't cause any new problem.
Capitalist values allow for all voluntary interaction, which is compatible with correcting our mistakes (does not require it, but allows it) and bans non-voluntary interaction in which some party is acting contrary to fallibilism.
Get the idea?
Replies from: Desrtopa↑ comment by Desrtopa · 2011-04-08T21:12:06.788Z · LW(p) · GW(p)
This only demonstrates that you can argue in a fallibilist framework for something you can argue for in practically any other philosophical framework as well. Simply showing that your epistemology allows you to do things as well as people who don't even know what an epistemology is isn't a rousing argument for its usefulness.
Replies from: curi↑ comment by curi · 2011-04-08T21:49:49.209Z · LW(p) · GW(p)
What are the other arguments for liberalism, of this quality and just as fundamental?
I read some other philosophies and wasn't able to find great liberal arguments like this. I'd like to hear them.
As an example, Mises has very good arguments for liberalism, but none of them are as fundamental as this. They are all higher level stuff.
Replies from: Desrtopa↑ comment by Desrtopa · 2011-04-08T21:52:17.594Z · LW(p) · GW(p)
I'm not convinced that this is an argument of exceptional quality, or fundamental at all, so I'm just going to have to say "most of them."
Replies from: JoshuaZ, curi↑ comment by curi · 2011-04-08T21:54:05.138Z · LW(p) · GW(p)
Give one that's better.
Replies from: Desrtopa↑ comment by Desrtopa · 2011-04-08T22:13:22.142Z · LW(p) · GW(p)
I'm not convinced that any moral argument is fundamental. What do you want one for? Is that just an "I'd like to see you do better?" challenge? If so, I'm not going to bother, because I doubt it would serve a useful purpose in furthering the conversation, and I don't think anyone was particularly swayed by yours in the first place. Those of us who are already liberals have our own, and those who aren't weren't compelled to change our minds.
If this is a "If I can see an argument for liberalism that is more convincing to an average person not already entrenched in my philosophy, it will change my opinion about my philosophy," then I'll try to provide one, but I'll stipulate that the persuasiveness should be decided by a poll elsewhere, not by either of us. If you can provide a response to my request in my other comment that convinces me that I should continue to be interested in this conversation at all, I would be amenable to that.
comment by Matt_Simpson · 2011-04-07T08:08:00.377Z · LW(p) · GW(p)
Popperian epistemology still relies on deductive logic. Why is deductive logic trustworthy? (Serious question, I think it illuminates the nature of foundations)
You might argue that we conjecture that deductive logic, as we know it, is true/valid/correct and nothing that we've come up with seems to refute it - yet that doesn't mean we've "proved" that deductive logic is correct. I would go further with some positive arguments, but we'll leave it at that for now.
A Bayesian might argue that the basic assumptions that go into Bayesian epistemology (The assumptions for Cox's theorem + some assumptions that yield prior distributions) have the same status as the rules of logic - we conjecture that they're true and they stand up to criticism, yet we don't think we've proven these assumptions.
This is my understanding of Bayesian epistemology - something like (sophisticated) Popperian falsificationism / critical rationalism provides support for the assumptions of Bayesian epistemology. I would argue that you can go farther than falsificationism and actually give some positive arguments for the foundations, but that's irrelevant, really. Even without the positive arguments, the assumptions of Cox's theorem seem to stand up - as in, not get knocked down.
Replies from: curi, timtyler↑ comment by curi · 2011-04-07T08:36:21.510Z · LW(p) · GW(p)
Popperian epistemology still relies on deductive logic.
Uses it, especially in science. Doesn't rely on it in any fundamental way for the main philosophical ideas.
Why is deductive logic trustworthy? (Serious question, I think it illuminates the nature of foundations)
It is not "trustworthy". But I don't have a criticism of it. I don't reject ideas for no reason. Lacks of positive justification isn't a reason (since nothing has justification, lack of it cannot differentiate between theories, so it isn't a criticism to lack it).
What is "trustworthy" in an everyday sense, but not a strict philosophical sense, is knowledge. The higher quality an idea (i.e. the more it's been improved up to now), the more trustworthy since the improvements consist of getting rid of mistakes, so there's less mistakes to bite you. But how many mistakes are left, and how big are they? Unknown. So, it's not trustworthy in any kind of definitive way.
You might argue that we conjecture that deductive logic, as we know it, is true/valid/correct and nothing that we've come up with seems to refute it - yet that doesn't mean we've "proved" that deductive logic is correct.
Yes. We do conjecture it. And it's not proved. So what?
A Bayesian might argue that the basic assumptions that go into Bayesian epistemology (The assumptions for Cox's theorem + some assumptions that yield prior distributions) have the same status as the rules of logic.
A difference between Popperian views and justificationist ones is Popperian views don't say anything should be justified and then fail to do it. But if you do say things should be justified, and then fail to do it, that is bad.
When Bayesians or inductivists set out to justify theories (and we mean that word broadly, e.g. "support" is a type of justification. So is "having high probability" when it is applied to ideas rather than to events.), they are proposing something rather different from logic. A difference is: justificationism has been criticized while logic hasn't. The criticism is: if you say that theories should be justified, and you say that they gain their justification from other ideas which are themselves justified, then you get a regress. And if you don't say that, you face the question: how are ideas to be justified? And you better have an answer. But no known answers work.
So justification has a different kind of status than logic. And also, if you accept justificationism then you face the problem of justifying logic. But if you don't, then you don't. So that's why you have to justify logic, but we don't have to.
So you might wonder what a non-justificationist Bayesian epistemology would look like. If you're interested, maybe you could tell me. I certainly do think that Bayes' theorem itself is correct, but I'm not convinced it has any important applications to epistemology. I think that trying to have it play the role of justifying ideas is a mistake.
This is my understanding of Bayesian epistemology - something like (sophisticated) Popperian falsificationism / critical rationalism provides support for the assumptions of Bayesian epistemology. Then once you have the assumptions, well the rest falls out.
Popperians aren't overly attached to any particular idea. Our favorites are things like fallibilism, not deduction. But we don't have the structure of having some foundational ideas and then "the rest falls out". We regard that kind of structure as fragile and bad. Popper said knowledge is like a "woven web". There's no up and down, no foundations and derivative parts, no preferred directions, and no simple structure like A -> B -> C. Everything is interconnected in messy fashion (much more so than real spider webs, which actually have relatively simple geometric patterns). And you are permitted to start in the middle, or anywhere, even in mid air. Whenever you want. It doesn't matter. You can conjecture anything with no support, not just foundational ideas.
One of the problems with trying to use Popperian ideas for your foundations, then forgetting about them, is that they say you shouldn't. If you let them in as your foundations, they will immediately tell you what to do next, and it isn't Bayesian epistemology! They will tell you to be Popperians, and also to stop being foundationalists, they will not endorse your attempt to have "the rest fall out". If Popper was right enough to serve as a foundation, why is he wrong about all the rest?
Another aspect of your approach is reductionism. You treat low-level theories as more important. We consider that a mistake. There's nothing wrong with emergent properties. There is nothing wrong with arguing from a higher-level idea to a lower-level one. Higher-level ideas are just as valid as any others.
Replies from: prase, None, JoshuaZ↑ comment by prase · 2011-04-07T13:46:49.070Z · LW(p) · GW(p)
Why is deductive logic trustworthy? (Serious question, I think it illuminates the nature of foundations)
It is not "trustworthy". But I don't have a criticism of it.
I second here Khoth's comment. How do you decide about validity of a criticism? There are certainly people who don't understand logic, and since you have said
You can criticize any idea you want. There's no rules again. If you don't understand it, that's a criticism -- it should have been easier to understand.
doesn't it mean that you actually have a criticism of logic? Or does it only count that you personally don't criticise it? If so, how is this approach different from accepting any idea at your wish? What's the point of having an epistemology when it actually doesn't constrain your beliefs in any way?
A technical question: how do I make nested quotes?
Replies from: curi, Matt_Simpson↑ comment by curi · 2011-04-07T18:42:20.708Z · LW(p) · GW(p)
How do you decide about validity of a criticism?
You conjecture standards of criticism, and use them. If you think they aren't working well, you can criticize them within the system and change them, or you can conjecture new standards of criticism and use those. Note: this has already been done, and we already have standards of criticism which work pretty well and which allow themselves to be improved. (They are largely not uniquely Popperian, but well known.)
Different aspect: in general, all criticisms always have some valid point. If someone is making a criticism, and it's wrong, then why wasn't he helped enough not to do that? Theories should be clear and help people understand the world. If someone doesn't get it then there is room for improvement.
doesn't it mean that you actually have a criticism of logic?
I don't regard logic as 'rules', in this context. But terminology is not important. The way logic figures into Popperian critical discussions is: if an idea violates logic you can criticize it for having done so. It would then in theory be possible to defend it by saying why this idea is out of the domain of logic or something (and of course you can point out if it doesn't actually violate logic) -- there's no rule against that. But no one has ever come up with a good argument of that type.
Replies from: prase↑ comment by prase · 2011-04-07T20:07:14.249Z · LW(p) · GW(p)
Isn't this
all criticisms always have some valid point
contradicting this
no one has ever come up with a good argument of that type
?
I mean, if you can judge arguments and say whether they are good, doesn't it mean that there are bad arguments which don't have a valid point?
Replies from: curi↑ comment by curi · 2011-04-07T20:13:14.926Z · LW(p) · GW(p)
All criticisms have some kind of point, e.g. they might highlight a need for something to be explained better. This is compatible with saying no one ever came up with a good argument (good in the context of modern knowledge) for the Earth being flat, or something. If someone thinks the Earth is flat, then this is quite a good criticism of something -- and I suspect that something is his own background knowledge. We could discuss the matter. If he had some argument which addresses my round-Earth views, I'd be interested. Or he might not know what they are. Shrug.
↑ comment by Matt_Simpson · 2011-04-07T14:34:25.579Z · LW(p) · GW(p)
If this quote is nested, put two >'s in front of the part you want to be quoted twice.
Replies from: prase
↑ comment by prase · 2011-04-07T19:55:00.041Z · LW(p) · GW(p)
This works for me. However, I want to quote something inside a quote and then continue on the first level, such as
inner quote
outer quote
The text in italic should be one quoting level deeper.
Replies from: jimrandomh↑ comment by jimrandomh · 2011-04-07T19:59:53.999Z · LW(p) · GW(p)
>> Inner quote
>
> Outer quote
Yields
Inner quote
Outer quote
Replies from: prase
↑ comment by [deleted] · 2011-04-07T18:57:40.238Z · LW(p) · GW(p)
Another aspect of your approach is reductionism. You treat low level theories are more important. We consider that a mistake. There's nothing wrong with emergent properties. There is nothing wrong with arguing from a higher level idea to a lower level one. Higher level ideas are just as valid as any others.
No, that's incorrect. That may be how other philosophers use the term, but that's not what it means here.
Edit: To clarify, I mean that LessWrong doesn't define reductionism the same way you just did, so your argument doesn't apply.
↑ comment by JoshuaZ · 2011-04-08T02:48:27.907Z · LW(p) · GW(p)
It is not "trustworthy". But I don't have a criticism of it. I don't reject ideas for no reason.
Um, there's a lot of criticism out there of deductive logic. For one thing, humans often make mistakes in deductive logic so one doesn't know if something is correct. For another, some philosophers have rejected the law of the excluded middle. Yet others have proposed logical systems which try to localize contradictions and prevent explosions (under the sensible argument that when a person is presented with two contradictory logical arguments that look valid to them they don't immediately decide that the moon is made of green cheese). There's a lot to criticize about deductive logic.
↑ comment by timtyler · 2011-04-07T16:27:21.632Z · LW(p) · GW(p)
A Bayesian might argue that the basic assumptions that go into Bayesian epistemology (The assumptions for Cox's theorem + some assumptions that yield prior distributions) have the same status as the rules of logic - we conjecture that they're true and they stand up to criticism, yet we don't think we've proven these assumptions.
I don't think I have heard that argued. The problem of the reference machine in Occam's razor leads to a million slightly-different variations. That seems much more dubious than deduction does.
comment by [deleted] · 2011-04-07T14:44:13.282Z · LW(p) · GW(p)
) What if we have two or more ideas? This one is easy. There is a particular criticism you can use to refute all the remaining theories. It's the same every time so there's not much to remember. It goes like this: idea A ought to tell me why B and C and D are wrong. If it doesn't, it could be better! So that's a flaw. Bye bye A. On to idea B: if B is so great, why hasn't it explained to me what's wrong with A, C and D? Sorry B, you didn't answer all my questions, you're not good enough. Then we come to idea C and we complain that it should have been more help and it wasn't. And D is gone too since it didn't settle the matter either. And that's it. Each idea should have settled the matter by giving us criticisms of all its rivals. They didn't. So they lose. So whenever there is a stalemate or a tie with two or more ideas then they all fail.
This seems absurd, since an explanation like "Phlogiston!", which can "explain" everything because it is a mysterious answer, would pass your test but a legitimate explanation wouldn't.
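To spell out why this looks absurd, here is a toy sketch of the quoted rule (my own illustration; the example "ideas" are made up): an idea survives only if it supplies a criticism of every rival, so a catch-all answer that "addresses" everything passes while a modest, specific explanation fails.

    # Toy sketch of the quoted rule, applied to made-up rivals.
    ideas = {
        "phlogiston": {"criticizes": {"oxygen", "caloric"}},  # "explains away" everything
        "oxygen":     {"criticizes": {"phlogiston"}},         # says nothing about caloric
        "caloric":    {"criticizes": set()},
    }

    def survives(name):
        rivals = set(ideas) - {name}
        return rivals <= ideas[name]["criticizes"]  # must address every rival

    print([n for n in ideas if survives(n)])  # ['phlogiston'] -- the objection in a nutshell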
Replies from: curi↑ comment by curi · 2011-04-07T19:50:26.542Z · LW(p) · GW(p)
If something can explain everything (by not being adapted to addressing any particular problem) we can criticize it for doing just that. So we dispense with it.
Replies from: None
comment by janos · 2011-04-07T08:19:39.961Z · LW(p) · GW(p)
If you don't justify your beliefs, how are they less arbitrary than those of a Bayesian? You may say they are tied to the truth (in a non-infinite-regress-laden way) by the truth-seeking process of criticism, forming new ideas, etc. However this is also what ties a Bayesian to the truth. The Bayesian is restricted (in theory) to updating probabilities based on evidence, but we tend to accept absence/presence/content of criticisms as evidence (though we insist on talking about the truth or falsity of statements, rather than whether they're "good ideas" (except insofar as we're actually discussing a statement about another statement)). Like your method, this one moves towards the truth in a largely unconstrained way, using many different sorts of reasons it can come upon. Also like yours, it fails to explode on lack of justification; if you can't find any evidence, you merely use the prior; it may not actually point you to the best thing, but, well, what else are you going to do?
The clear difference I see is that the Bayesian epistemology quantifies uncertainty and puts a mathematical model around it; this model doesn't actually match how we reason under uncertainty, but it's a useful idealization of it. Your epistemology does not quantify uncertainty, does not lay out criteria for criticisms, etc.; it seems to be based on verbally describing what reasonable people do, but as a prescription it's useless (unless you already knew how to think and just needed a reminder of some step). In particular it doesn't ground out to math, so even in very simple toy examples where an agent knows what toy example it's in and what its goal is, it's unclear how your epistemology should be used, while Bayesian probability easily gives optimal prescriptions.
Have I got this right?
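For what it's worth, here is a minimal sketch of the kind of toy example meant above (the two-hypothesis coin setup is assumed for illustration, not taken from the comment): the agent updates a posterior over the hypotheses it knows it is choosing between, and the prescription falls straight out of the arithmetic.

    # Toy sketch: an agent knows a coin is either fair (H1) or heads-biased
    # at 0.8 (H2), starts from a 50/50 prior, updates on observed flips, and
    # bets on whichever next outcome has higher posterior predictive probability.
    def posterior(flips, p_h1=0.5, p_h2=0.8):
        prior = {"H1": 0.5, "H2": 0.5}
        like = {"H1": 1.0, "H2": 1.0}
        for f in flips:  # each flip is "H" or "T"
            like["H1"] *= p_h1 if f == "H" else 1 - p_h1
            like["H2"] *= p_h2 if f == "H" else 1 - p_h2
        z = sum(prior[h] * like[h] for h in prior)
        return {h: prior[h] * like[h] / z for h in prior}

    post = posterior("HHHTH")
    p_next_heads = post["H1"] * 0.5 + post["H2"] * 0.8
    print(post, "-> bet on", "heads" if p_next_heads > 0.5 else "tails")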
comment by zntneo · 2011-05-19T06:15:14.753Z · LW(p) · GW(p)
It seems you have only talked about internalist versions of epistemology. What about reliabilism? It does not fall into either of your categories (it's one I'm pretty sympathetic towards).
Also, to make sure I understand you correctly: is this arguing for getting rid of "justified" in the standard justified-true-belief account (also including the Gettier part too), am I right? Or are you saying something is "justified" when it can no longer be criticized (due to not being able to come up with a criticism)? I also agree with Yvain that it seems this "criticize the idea" step needs to be taken apart more.
I might have more comments but need to think about it more.
comment by Architectonic · 2011-04-18T16:01:05.930Z · LW(p) · GW(p)
Many of the criticisms mentioned in the above comments have in fact been addressed by Bartley in his conception of pan-critical rationalism. See his book "The Retreat to Commitment".
Bayesian methods can be considered useful within such an epistemological system, however one cannot justify that one fact is more true than another merely based on Bayesian probabilities.
Both justificationist and falsificationist outlooks are stated with respect to something else. That is why philosophers played all those language games. They soon realised that you couldn't reduce everything down to language, be it natural or symbolic, without losing something. Axiomatic systems don't make any sense on their own. It is sad that many commit to a justificationist position without realising that they are doing so.