Bayesian Epistemology vs Popper
post by curi · 2011-04-06T23:50:51.766Z · LW · GW · Legacy · 228 comments
I was directed to this book (http://www-biba.inrialpes.fr/Jaynes/prob.html) in conversation here:
http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3ug7?context=1#3ug7
I was told it had a proof of Bayesian epistemology in the first two chapters. One of the things we were discussing is Popper's epistemology.
Here are those chapters:
http://www-biba.inrialpes.fr/Jaynes/cc01p.pdf
http://www-biba.inrialpes.fr/Jaynes/cc02m.pdf
I have not found any proof here that Bayesian epistemology is correct. There is not even an attempt to prove it. Various things are assumed in the first chapter. In the second chapter, some things are proven given those assumptions.
Some first chapter assumptions are incorrect or unargued. It begins with an example involving a policeman, and says his conclusion is not a logical deduction because the evidence is logically consistent with his conclusion being false. I agree so far. Next it says "we will grant that it had a certain degree of validity". But I will not grant that. Popper's epistemology explains that *this is a mistake* (and Jaynes makes no attempt at all to address Popper's arguments). In any case, simply assuming his readers will grant his substantive claims is no way to argue.
The next sentences blithely assert that we all reason in this way. Jaynes basically presents this kind of reasoning, and its issues, as his topic. This simply ignores Popper and makes no attempt to prove Jaynes' approach is correct.
Jaynes goes on to give syllogisms, which he calls "weaker" than deduction, and which he acknowledges are not deductively correct. And then he just says we use that kind of reasoning all the time. That sort of assertion only appeals to the already converted. Jaynes starts with arguments which appeal to the *intuition* of his readers, not arguments which could persuade someone who disagreed with him (that is, good rational arguments). Later, when he gets into more mathematical material which doesn't (directly) rest on appeals to intuition, it rests on the ideas he (supposedly) established early on with his appeals to intuition.
The approach here, in outline, is to gloss quickly over substantive philosophical assumptions, never providing serious arguments for them, taking them as common sense, leaving them undetailed, and then later to provide arguments which are rigorous *given the assumptions glossed over earlier*. This is a mistake.
So we get, e.g., a section on Boolean Algebra which says it will state previous ideas more formally. This briefly acknowledges that the rigorous parts depend on the non-rigorous parts. And the very important problem of carefully detailing how the mathematical objects discussed correspond to the real-world things they are supposed to help us understand does not receive adequate attention.
Chapter 2 begins by saying we've now formulated our problem and the rest is just math. What I take from that is that the early assumptions won't be revisited but simply used as premises. So the rest is pointless if those early assumptions are mistaken, and Bayesian epistemology cannot be proven in this way to anyone who doesn't grant the assumptions (such as a Popperian).
Moving on to Popper, Jaynes is ignorant of the topic and unscholarly. He writes:
http://www-biba.inrialpes.fr/Jaynes/crefsv.pdf
> Karl Popper is famous mostly through making a career out of the doctrine that theories may not be proved true, only false
This is pure fiction. Popper is a fallibilist and said (repeatedly) that theories cannot be proved false (or anything else).
It's important to criticize unscholarly books that promote myths about rival philosophers rather than addressing their actual arguments. That's a major flaw, not just in a particular paragraph but in the author's way of thinking. It's especially relevant in this case since the author of the book is trying to tell us how to think.
Note that Yudkowsky made a similar unscholarly mistake, about the same rival philosopher, here:
http://yudkowsky.net/rational/bayes
> Previously, the most popular philosophy of science was probably Karl Popper's falsificationism - this is the old philosophy that the Bayesian revolution is currently dethroning. Karl Popper's idea that theories can be definitely falsified, but never definitely confirmed
Popper's philosophy is not falsificationism, it was never the most popular, and it is fallibilist: it says ideas cannot be definitely falsified. It's bad to make this kind of mistake about what a rival's basic claims are when claiming to be dethroning him. The correct method of dethroning a rival philosophy involves understanding what it does say and criticizing that.
If Bayesians wish to challenge Popper they should learn his ideas and address his arguments. For example he questioned the concept of positive support for ideas. Part of this argument involves asking the questions: 'What is support?' (This is not asking for its essential nature or a perfect definition, just to explain clearly and precisely what the support idea actually says) and 'What is the difference between "X supports Y" and "X is consistent with Y"?' If anyone has the answer, please tell me.
228 comments
comment by prase · 2011-04-07T13:30:14.366Z · LW(p) · GW(p)
I have skimmed through the comments here and smelled a weak odour of a flame war. Well, the discussion is still rather civil and far from a flame war as understood on most internet forums, but it somehow doesn't fit well within what I am used to seeing here on LW.
The main problem I have is that you (i.e. curi) have repeatedly asserted that the Bayesians, including most of LW users, don't understand Popperianism and that Bayesianism is in fact worse, without properly explaining your position. It is entirely possible, even probable, that most people here don't actually get all the subtleties of Popper's worldview. But then, a better strategy may be to first write a post which explains those subtleties and says why they are important. On the other hand, you don't need to tell us explicitly "you are unscholarly and misinterpret Popper". If you actually explain what you ought to (and if you are right about the issue), people here will likely understand that they were previously wrong, and they will do it without feeling that you seek confrontation rather than truth - which I mildly have.
↑ comment by Desrtopa · 2011-04-07T14:49:52.971Z · LW(p) · GW(p)
Upvoted and agreed. I feel at this point like further addressing the discussion on present terms would be simply irresponsible, more likely to become adversarial than productive. If curi wrote up such a post, it would hopefully give a meaningful place to continue from.
Edit: It seems that curi has created such a post. I'm not entirely convinced that continuing the discussion is a good idea, but perhaps it's worth humoring the effort.
↑ comment by TheOtherDave · 2011-04-07T14:24:22.982Z · LW(p) · GW(p)
For what it's worth, I have that feeling more than mildly and consequently stopped paying attention to the curi-exchange a while ago. Too much heat, not enough light.
I've been considering downvoting the whole thread on the grounds that I want less of it, but haven't yet, roughly on the grounds that I consider it irresponsible to do so without paying more careful attention to it and don't currently consider it worth paying more attention to.
↑ comment by curi · 2011-04-07T19:46:41.859Z · LW(p) · GW(p)
By "properly explaining my position" I'm not sure what you want. Properly understanding it takes reading, say, 20 books (plus asking questions about them as you go, and having critical discussions about them, and so on). If I summarize, lots of precision is lost. I have tried to summarize.
I can't write "a (one) post" that explains the subtleties of Popper. It took Popper a career and many books.
Bayesianism has a regress/foundations problem. Yudkowsky acknowledges that. Popperism doesn't. So Popperism is better in a pretty straightforward way.
> On the other hand, you don't need to tell us explicitly "you are unscholarly and misinterpret Popper".
But they were propagating myths about Popper. They were unscholarly. They didn't know wtf they were talking about, not even the basics. Basically all of Popper's books contradict those myths. It's really not cool to attribute to someone positions he never advocated. This mistake is easy to avoid by one method: don't publish about people you haven't read. Bad scholarship is a big deal, IMO.
↑ comment by Desrtopa · 2011-04-08T21:34:47.940Z · LW(p) · GW(p)
> Bayesianism has a regress/foundations problem. Yudkowsky acknowledges that. Popperism doesn't. So Popperism is better in a pretty straightforward way.
Any system with axioms can be infinitely regressed or rendered circular if you demand that it justify the axioms. Critical Rationalism has axioms, and can be infinitely regressed.
You were upvoted in the beginning for pointing out gaps in scholarship and raising ideas not in common circulation here. You yourself, however, have demonstrated a clear lack of understanding of Bayesianism, and have attracted frustration with your own lack of scholarship and confused arguments, along with failure to provide good reasons for us to be interested in the prospect of doing this large amount of reading you insist is necessary to properly understand Popper. If doing this reading were worthwhile, we would expect you to be able to give a better demonstration of why.
↑ comment by curi · 2011-04-08T21:44:47.659Z · LW(p) · GW(p)
> Any system with axioms can be infinitely regressed or rendered circular if you demand that it justify the axioms. Critical Rationalism has axioms, and can be infinitely regressed.
You haven't understood the basic point that this only works if you accept that ideas should be justified.
If you reject the demand for justification -- as CR does -- then the regress is gone. Hence no regress in CR.
↑ comment by Peterdjones · 2011-04-15T15:04:54.377Z · LW(p) · GW(p)
How do you know which forms of criticism are valid? Do you justify them, or attempt to criticise them? Either way looks regressive to me.
↑ comment by Desrtopa · 2011-04-08T21:58:13.314Z · LW(p) · GW(p)
I reject this position as vacuous.
The position might be self-consistent if you accept its premises, and one of its premises may be that you can introduce any idea without justification, but it's not reality-consistent.
↑ comment by curi · 2011-04-08T22:00:50.389Z · LW(p) · GW(p)
b/c?
↑ comment by Desrtopa · 2011-04-08T22:19:35.481Z · LW(p) · GW(p)
It's no less founded on unproven axioms, and it's no less arbitrary than induction, it just contains the tenet "we don't reject propositions for being arbitrary" and pats itself on the back for being self consistent. This doesn't do a better job helping people generate true information, it does a worse one.
↑ comment by [deleted] · 2011-04-09T07:42:37.051Z · LW(p) · GW(p)
Being arbitrary is a criticism, so a critical rationalist can and does reject propositions for being arbitrary. Rejecting the idea of justification does not mean accepting any old arbitrary thing. If your idea doesn't stand up to criticism, including criticism that it is just arbitrary, then it is gone.
↑ comment by prase · 2011-04-07T20:51:35.155Z · LW(p) · GW(p)
> I have tried to summarize.
I acknowledge that, although I would have preferred if you had done that before writing this post.
> I can't write "a (one) post" that explains the subtleties of Popper. It took Popper a career and many books.
Could be five posts.
Even if such a defense can sometimes be valid, it is too often used to defend confused positions (think of theology) to be very credible.
↑ comment by curi · 2011-04-07T20:52:52.597Z · LW(p) · GW(p)
It would need to be 500 posts.
But anyway, they are written and published. By Popper not me. They already exist and they don't need to be published on this particular website.
↑ comment by prase · 2011-04-07T20:57:01.858Z · LW(p) · GW(p)
Following your advice expressed elsewhere, isn't the fact that the basics of Popperianism cannot be explained in five posts a valid criticism of Popperianism, which should be therefore rejected?
↑ comment by curi · 2011-04-07T20:59:26.016Z · LW(p) · GW(p)
Why is that a criticism? What's wrong with that?
Also maybe it could be. But I don't know how.
And the basics could be explained quickly to someone who didn't have a bunch of anti-Popperian biases, but people do have those b/c they are built into our culture. And without the details and precision, people complain about 1) not understanding how to do it, what it says, and 2) it not having enough precision and rigor.
↑ comment by prase · 2011-04-07T21:06:08.472Z · LW(p) · GW(p)
> Why is that a criticism?
Actually I don't know what constitutes a criticism in your book (since you never specified), but you have also said that there are no rules for criticism, so I suppose that it is a criticism. If not, then please say why it is not a criticism.
I am not going to engage in a discussion about my and your biases, since such debates rarely lead to an agreement.
↑ comment by curi · 2011-04-07T21:11:10.819Z · LW(p) · GW(p)
You can conjecture standards of criticism, or use the ones from your culture. If you find a problem with them, you can change them or conjecture different ones.
For many purposes I'm pretty happy with common-sense notions of standards of criticism, which I think you understand, but which are hard to explain in words. If you have a relevant problem with them, you can say it.
↑ comment by [deleted] · 2011-04-07T20:55:41.775Z · LW(p) · GW(p)
One thing you could do is write a post highlighting a specific example where Bayes is wrong and Popper is right. A lot of people have asked for specific examples in this thread; if you could give a detailed discussion of one, that would move the discussion to more fertile ground.
↑ comment by curi · 2011-04-07T20:57:01.411Z · LW(p) · GW(p)
Can you give me a link to a canonical essay on Bayesian epistemology/philosophy, and I'll pick from there?
Induction and justificationism are examples but I've been talking about them. I think you want something else. Not entirely sure what.
↑ comment by [deleted] · 2011-04-07T21:04:46.834Z · LW(p) · GW(p)
It's not at all canonical, but a paper that neatly summarizes Bayesian epistemology is "Bayesian Epistemology" by Stephan Hartmann and Jan Sprenger.
↑ comment by curi · 2011-04-07T21:09:44.936Z · LW(p) · GW(p)
Found it.
http://www.stephanhartmann.org/HartmannSprenger_BayesEpis.pdf
Will take a look in a bit.
↑ comment by [deleted] · 2011-04-07T21:14:16.429Z · LW(p) · GW(p)
Excellent, thanks.
↑ comment by curi · 2011-04-08T09:18:03.773Z · LW(p) · GW(p)
http://www.stephanhartmann.org/HartmannSprenger_BayesEpis.pdf
> Bayesian epistemology therefore complements traditional epistemology; it does not replace it or aim at replacing it.
Since Popper refuted traditional epistemology (source: his books, and the failure of anyone to come up with any good criticisms of his main ideas), and Bayesian Epistemology retains it, then Bayesian Epistemology is refuted too. And discussing this issue can be done without mentioning probability, Bayes' theorem, or Solomonoff induction. Bringing those up cannot be a relevant defense since traditional epistemology, which doesn't use them, is retained.
> Bayesian epistemology is, in the first place, a philosophical project, and that it is its ambition to further progress in philosophy.
Why are most Less Wrong people anti-philosophy, then? There's so much instrumentalism, empiricism, reductionism, and borderline positivism. Not much interest in philosophy.
> Section 2 introduces the probability calculus and explains why degrees of belief obey the probability calculus. Section 3 applies the formal machinery to an analysis of the notion of evidence, and highlights potential application. Section 4 discusses Bayesian models of coherence and testimony, and section 5 ends this essay with a comparison of traditional epistemology and Bayesian epistemology.
Sections 2-4 are irrelevant. They are already assuming mistakes from traditional epistemology. Moving on to 5, which is only one page.
> Bayesian epistemology, on the other hand, draws much of its power from the mathematical machinery of probability theory. It starts with a mathematical intuition.
Advocating intuitionism is very silly.
> traditional epistemology inspires Bayesian accounts.
So Bayesians should care about criticisms of traditional epistemology, and be willing to engage with them directly without even mentioning any Bayesian stuff.
> Both Bayesian epistemology and traditional epistemology do not much consider empirical data. Both are based on intuitions,
That's not even close to what most Less Wrong people told me. They mostly are very focussed on empirical data.
> This might be a problem as privilege is given to the philosopher's intuitions.
Might be? lol... What a hedge. They know it's a problem and equivocate.
> non-philosophers may have different intuitions.
Also Popperian philosophers, and all other types that don't agree with you.
> While it is debatable how serious these intuitions should be taken (maybe people are simply wrong!)
But not traditional philosophers, who have reliable intuitions? This is just plain silly.
> It is therefore advisable that philosophers also keep on paying attention to other formal frameworks
But not informal frameworks, because your intuition says that formality is next to Godliness?
↑ comment by [deleted] · 2011-04-08T19:33:54.638Z · LW(p) · GW(p)
I would recommend turning this into a discussion-level post--I doubt anyone will find this comment, as it's buried pretty deeply in this discussion.
↑ comment by curi · 2011-04-08T19:35:36.062Z · LW(p) · GW(p)
Do you think people other than you will like it? I think many will complain about the style, and I didn't want to rewrite it more formally. Also, I dismissed most of the paper as irrelevant; I expect people to complain about that, and I don't particularly expect the discussion to go anywhere.
↑ comment by [deleted] · 2011-04-08T21:03:27.408Z · LW(p) · GW(p)
You'd probably have to change the style, yes. And no, I don't expect other people to like it, but I expect that they will respond. Also: you're probably going to have to either go into more depth or pick a specific example, or both.
↑ comment by curi · 2011-04-08T21:48:47.097Z · LW(p) · GW(p)
What's in it for me? I think I got the gist of what less wrong has to offer already.
↑ comment by Desrtopa · 2011-04-08T22:25:39.185Z · LW(p) · GW(p)
Can you explain how we believe that Bayesianism leads to better decision-making? I'm not even asking you to do it; I no longer have high expectations of productivity from this conversation and don't intend to prolong it. But know that if you can't, you don't understand what we offer at all.
↑ comment by [deleted] · 2011-04-09T08:46:09.971Z · LW(p) · GW(p)
Good criticisms here, yet downvoted to -3. Do LWers really want to be less wrong?
↑ comment by JoshuaZ · 2011-04-09T16:20:48.078Z · LW(p) · GW(p)
> Good criticisms here, yet downvoted to -3. Do LWers really want to be less wrong?
There's a general pattern here. Some of these comments are potentially good. But the general pattern a) misses points, b) doesn't actually grapple with what it claims to grapple with, or c) is uncivil. C is a major issue. Obnoxious remarks like "But not informal frameworks, because your intuition says that formality is next to Godliness?" are going to get downvoted.
People are inclined to downvote uncivil comments for a variety of reasons: 1) They reinforce emotionalism in all members of a discussion, making the actual exchange of ideas less likely. 2) They make the individual making the comments much less likely to acknowledge when they are wrong (this is due to standard cognitive biases). 3) They make communities less pleasant.
Uncivil comments that support common beliefs are also voted down. LWians are not perfect, and you shouldn't be surprised if that occurs even more with comments that are uncivil and go against the consensus. In this particular case, it also doesn't help that most of the incivility is at the end of the comment, so one moves directly from reading the unproductive, uncivil remarks to seeing the vote button.
comment by jimrandomh · 2011-04-07T01:02:59.894Z · LW(p) · GW(p)
The assumptions behind Cox's theorem are:
1. Representation of degrees of plausibility by real numbers
2. Qualitative correspondence with common sense
3. Consistency
Would you please clearly state which of these you disagree with, and why? And if you disagree with (1), is it because you don't think degrees of plausibility should be represented, or because you think they should be represented by something other than real numbers, and if so, then what? (Please do not give an answer that cannot be defined precisely by mapping it to a mathematical set. And please do not suggest a representation that is obviously inadequate, such as booleans.)
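To make the question concrete, here is a compressed sketch of what those three desiderata yield, following the shape of the derivation in Jaynes's chapter 2 (a sketch only, not the full argument):

```latex
% Sketch of Cox's theorem under the three desiderata above.
% Write w(A|B) for the real-valued plausibility of A given B.
% Requiring consistency for the conjunction AB yields an associative
% functional equation; its general solution, after a monotonic
% rescaling w -> p, is the product rule:
\[
  p(AB \mid C) = p(A \mid BC)\, p(B \mid C)
\]
% Requiring consistency between a proposition and its negation then
% forces the sum rule:
\[
  p(A \mid C) + p(\bar{A} \mid C) = 1
\]
% Any plausibility calculus satisfying the desiderata is therefore
% isomorphic to probability theory; Bayes' theorem follows by
% rearranging the product rule.
```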
↑ comment by curi · 2011-04-07T03:00:06.507Z · LW(p) · GW(p)
Could you explain what you're talking about a bit more? For example you state "consistency" as an assumption. What are you assuming is (should be?) consistent with what?
↑ comment by JoshuaZ · 2011-04-07T03:25:19.066Z · LW(p) · GW(p)
You may have valid points to make but it might help in getting people to listen to you if you don't exhibit apparent double standards. In particular, your main criticism seems to be that people aren't reading Popper's texts and related texts enough. Yet, at the same time, you are apparently unaware of the basic philosophical arguments for Bayesianism. This doesn't reduce the validity of anything you have to say but as an issue of trying to get people to listen, it isn't going to work well with fallible humans.
↑ comment by curi · 2011-04-07T03:33:31.734Z · LW(p) · GW(p)
It's fine if most people haven't read Popper. But they should be able to point to some Bayesian somewhere who did, or they should know at least one good argument against a major Popperian idea. Or they should be interested and ask more about him instead of posting incorrect arguments about why his basic claims are false.
I do know, offhand, several arguments against Bayesian epistemology (e.g. its inability to create moral knowledge), and I know many arguments against induction, each decisive. And anyway I came here to learn more about it. One particular thing I would be interested in is a Bayesian criticism of Popper. Are there any? By contrast (maybe), Popper did criticize Bayesian epistemology in LScD and elsewhere. And I am familiar with those criticisms.
Learning enough Bayesian stuff to sound like a Bayesian so people want to listen to me more sounds to me like more trouble than it's worth, no offense. I'm perfectly willing to read more things when I make a mistake and there is a specific thing which explains the issue. I have been reading various things people refer me to. If you wanted me to study Bayesian stuff for a month before speaking, well, I'd get bored because I would see flaws and then see them repeated, and then read arguments which depend on them. I did read the whole HP fic if that helps.
One thing that interests me, which I posted about in the initial post, is how unscholarly some Bayesian scholars are. Can anyone correct that? Are there any with higher scholarly standards? I would like there to be. I don't want to just read stuff until I happen to find something good, I want to be pointed to something considerably better than the unscholarly stuff I criticized. I don't know where to find that.
↑ comment by JoshuaZ · 2011-04-07T04:55:06.953Z · LW(p) · GW(p)
> It's fine if most people haven't read Popper. But they should be able to point to some Bayesian somewhere who did, or they should know at least one good argument against a major Popperian idea. Or they should be interested and ask more about him instead of posting incorrect arguments about why his basic claims are false.
Really? Much of that seems questionable. There are many different ideas out there, and practically speaking, too many for people to have to deal with every single one. Sure, making incorrect arguments is bad. And making arguments against strawmen is very bad. But people don't have time to actually research every single idea out there, or even to know which ones to look at. Now, I think that Popper is important enough and has relevant enough points that he should be on the short list of philosophers that people grapple with at least to some limited extent. But frankly, speaking as someone who is convinced of that point, you are making a very weak case for it.
> I do know, offhand, several arguments against Bayesian epistemology (e.g. its inability to create moral knowledge), and I know many arguments against induction, each decisive.
This paragraph seems to reflect a general problem you are having here in making assertions without providing any information other than vague claims of existence. I am for example aware of a large variety of arguments against induction (the consistency of anti-induction frameworks seems to be a major argument) but calling them "decisive" is a very strong claim, and isn't terribly relevant in so far as Bayesianism is not an inductive system in many senses of the term.
You've also referred before to this claim that the Popperian system can lead to moral knowledge, and that's a claim I'd be very curious to hear expanded with a short summary of how that works. Generally when I see a claim that an epistemological system can create moral knowledge, my initial guess is that someone has managed to bury the naturalistic fallacy somewhere or has managed to smuggle in additional moral premises that aren't really part of the epistemology. I'd be pleasantly surprised to see something that didn't function that way.
> One particular thing I would be interested in is a Bayesian criticism of Popper. Are there any? By contrast (maybe), Popper did criticize Bayesian epistemology in LScD and elsewhere.
I haven't read it myself but I've been told that Earman's "Bayes or Bust" deals with a lot of the philosophical criticisms of Bayesianism as well as giving a lot of useful references. It should do a decent job in regards to the scholarly concerns.
As to Popper's criticism of Bayesianism, the discussion of it in LScD is quite small, which is understandable in that Bayesianism was not nearly as developed in that time as it is now. (You may incidentally be engaging in a classical philosophical fallacy here in focusing on a specific philosopher's personal work rather than the general framework of ideas that followed from it. There's a lot of criticism of Bayesianism that is not in Popper that is potentially strong. Not everything is about Popper.)
> Learning enough Bayesian stuff to sound like a Bayesian so people want to listen to me more sounds to me like more trouble than it's worth, no offense.
As a non-Bayesian, offense taken. You can't expect to go to a room full of people with a specific set of viewpoints, offer a contrary view, act like the onus is on them to translate into your notation and terminology, and then be shocked when they don't listen to you. Moreover, knowing the basics of Cox's theorem is not asking you to "sound like a Bayesian" anyhow.
> If you wanted me to study Bayesian stuff for a month before speaking, well, I'd get bored because I would see flaws and then see them repeated, and then read arguments which depend on them. I did read the whole HP fic if that helps.
What? I don't know how to respond to that. I'm not sure an exclamation exists in standard English to express my response to that last sentence. I'm thinking of saying "By every deity in the full Tegmark ensemble" but maybe I should wait for a better time to use it. You are repeatedly complaining about people not knowing much about Popper while your baseline for Bayesianism is that you've read an incomplete Harry Potter fanfic? This fanfic hasn't even addressed Bayesianism other than in passing. This seems akin to someone thinking they understand rocketry because they've watched "Apollo 13".
↑ comment by curi · 2011-04-07T09:21:30.226Z · LW(p) · GW(p)
> Really? Much of that seems questionable. There are many different ideas out there, and practically speaking, too many for people to have to deal with every single one.
The number of major ideas in epistemology is not very large. After Aristotle, there wasn't very much innovation for a long time. It's a small enough field you can actually trace ideas all the way back to the start of written history. Any professional can look at everything important. Some Bayesian should have. Maybe some did, but I haven't seen anything of decent quality.
> You've also referred before to this claim that the Popperian system can lead to moral knowledge, and that's a claim I'd be very curious to hear expanded with a short summary of how that works.
It works exactly identically to how Popperian epistemology creates any other kind of knowledge. There's nothing special for morality.
Knowledge is created by an evolutionary process involving conjecture and refutation. By criticizing flaws in ideas, we seek to improve them (by making better conjectures we hope will eliminate the flaws).
> You may incidentally be engaging in a classical philosophical fallacy here in focusing on a specific philosopher's personal work rather than the general framework of ideas that followed from it.
I have a lot of familiarity with the other Popperians. But Popper and Deutsch are by far the best. There isn't really anything non-Popperian that draws on Popper much. Everyone who has understood Popper is a Popperian, IMO. If you disagree, do tell.
> As to Popper's criticism of Bayesianism, the discussion of it in LScD is quite small
Small is not a criticism; substance matters not length. Do you have a criticism of his arguments in LScD or not? Also he dealt with it elsewhere, as I stated.
↑ comment by Randaly · 2011-04-07T13:39:46.869Z · LW(p) · GW(p)
Err, Bayesian probability doesn't have anything special for morality either. People on LW tend to be moral non-realists, i.e. people who deny that there is objective moral knowledge, if that's what you're talking about (not sure, sorry!), but that's completely orthogonal to this discussion: there's nothing in Bayesianism that leads inevitably to non-realism. (Also, I'm not convinced that moral realism is right, so saying "Bayesianism leads to moral non-realism" isn't a very effective argument.)
↑ comment by curi · 2011-04-07T19:08:51.698Z · LW(p) · GW(p)
Bayesian epistemology doesn't create moral knowledge because it only functions when fed in observation data (or assumptions). I get a lot of conflicting statements here, but some people tell me they only care about prediction, they are instrumentalists, and that is what Bayes stuff is for, and they don't regard it as a bad thing that it doesn't address morality at all.
Now what you have in mind, I think, is that if you make a ton of assumptions you could then talk about morality using Bayes. Popperism doesn't require a bunch of arbitrary starting assumptions to create moral knowledge, it just can deal with it directly.
If I'm wrong, explain how you can deal with figuring out, e.g., what are good moral values to have (without assuming a utility function or something).
↑ comment by Randaly · 2011-04-09T04:22:53.259Z · LW(p) · GW(p)
As I tried to say (and probably explained really poorly, sorry!), the LW consensus is that morality is not objective. Therefore, the idea of figuring out what good moral values would be is, according to moral non-realism, impossible: any decision about what a good moral value is must rely on your pre-existing values, if an objective morality is not out there to be discovered. Using this as a criticism of Bayesianism is sorta like criticizing thermodynamics because it claims it's impossible to exactly specify the position and velocity of each particle: not only is the criticism unrelated to the subject matter, but satisfying it would require the theory to do something that is to the best of our knowledge incorrect.
↑ comment by [deleted] · 2011-04-07T11:19:12.348Z · LW(p) · GW(p)
> Knowledge is created by an evolutionary process involving conjecture and refutation. By criticizing flaws in ideas, we seek to improve them (by making better conjectures we hope will eliminate the flaws).
I'm inclined to take this formula seriously, but I'd like to start by applying it to innate knowledge, knowledge we are born with, because here we are definitely talking about an evolutionary process involving mutation and natural selection. Some mutations add what amounts to a new innate conjecture (hypothesis, belief) into the cognitive architecture of the creature.
However, what occurs at this point is not that a creature with a false innate conjecture is eliminated. The creature isn't being tested purely against reality in isolation. It's being tested against other members of its species. The creature with the least-false, or least-perilously-false conjecture will tend to do better than the competitors. The competition for survival amounts to a competition between rival conjectures. The truest, or most-usefully-true, or least-wrong, or least-dangerously-wrong innate belief will tend to outdo its competitors and ultimately spread through the species. (With the odd usefully-wrong belief surviving.)
The occasional appearance of new innate conjectures resembles the conjecture part of Popperian conjecture and refutation. However, the contest between rival innate conjectures that occurs as the members of the species struggle against each other for survival seems less Popperian than Bayesian.
The relative success of the members of the species who carry the more successful hypothesis vaguely resembles Bayesian updating, because the winners increase their relative numbers and the losers decrease their relative numbers, which resembles the shift in the probabilities assigned to rival hypotheses that occurs in Bayesian updating. Consider the following substitutions applied to Bayes' formula:
P(H|E) = P(E|H)P(H) / P(E)
- P(H|E) is the new proportion (i.e. in the next generation) of the species carrying the hypothesis H, given that event E occurred (E is "everything that happened to the generation")
- P(E|H) is the degree to which H predicts and thus prepares the individual to handle E (measured in expected number of offspring given E)
- P(H) is the old proportion (i.e. in the previous generation) of the species carrying H
- P(E) is the degree to which the average member of the species predicts and thus is prepared to handle E (measured in expected number of offspring given E)
With these assignments, what the equation means is:
The new proportion of the species with H is equal to the old proportion of the species with H, times the expected number of offspring of members with H, divided by the expected number of offspring of the average member of the species.
One difference between this process and Bayesian updating is that this process allows the occasional introduction of new hypotheses over time, with what amounts to a modest but not vanishing initial prior.
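A minimal simulation makes the parallel concrete. This is a sketch with made-up numbers (the three variants and their fitnesses are illustrative assumptions, not data from anywhere): one replicator-dynamics step over competing innate hypotheses produces exactly the frequencies Bayes' rule assigns.

```python
import numpy as np

# Illustrative sketch: three rival innate "hypotheses" carried by a
# population, with invented frequencies and fitnesses.
priors = np.array([0.5, 0.3, 0.2])       # P(H): frequency of each variant
likelihoods = np.array([0.9, 0.5, 0.1])  # P(E|H): expected offspring given event E

# Replicator step: new frequency = old frequency times relative fitness.
offspring = priors * likelihoods
new_freqs = offspring / offspring.sum()

# Bayesian update of the same three hypotheses on the evidence E.
posteriors = priors * likelihoods / np.dot(priors, likelihoods)

# The two computations coincide term by term, which is the claimed
# correspondence between selection and updating.
assert np.allclose(new_freqs, posteriors)
print(new_freqs)  # approx. [0.726 0.242 0.032]
```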
↑ comment by curi · 2011-04-07T19:19:31.120Z · LW(p) · GW(p)
I'm not sure if we're interested in the same stuff. But taking up one topic:
I think you regard innate/genetic ideas as important. I do not. Because people are universal knowledge creators, and can change any idea they start with, it doesn't matter very much.
The reason people are so biased is not in their genes but in their memes.
There are two major replication strategies that memes use.
1) A meme can be useful and rational; it spreads because of its value.
2) A meme can sabotage its holder's creativity, to prevent him from criticizing it and to take away his choice not to spread it.
The second type dominated all cultures on Earth for a long time. The transition to the first type is incomplete.
More details on memes and universality can be found in The Beginning of Infinity by David Deutsch.
↑ comment by [deleted] · 2011-04-07T22:10:10.021Z · LW(p) · GW(p)
> I think you regard innate/genetic ideas as important. I do not. Because people are universal knowledge creators, and can change any idea they start with, it doesn't matter very much.
You misunderstand. I bring it up as a model of learning, and my choice was based on your own remarks. You said that knowledge is created by an evolutionary process. That way of putting it suggests an analogy with Darwin's theory of evolution as proceeding by random variation and natural selection. And indeed there is an analogy between Popper's conjectures and refutations and variation and natural selection, and it is this: a conjecture is something like variation (mutation), and refutation is something like natural selection.
However, what I found was that the closer I looked at knowledge which is actually acquired through natural selection - what we might call innate knowledge or instinctive knowledge - the more the process of acquisition resembled Bayesian updating rather than Popperian conjecture and refutation. I explained why.
In Bayesian updating, there are competing hypotheses, and the one for which actual events are less of a surprise (i.e., the hypothesis Hi for which P(E|Hi) is higher) is strengthened relative to the one for which events are more of a surprise. I find a parallel to this in competition among alleles under natural selection, which I described.
Essential to Bayesian updating is the coexistence of competing hypotheses, and essential to natural selection is the coexistence of competing variants in a species. In contrast, Popper talks about conjecture and refutation, which is a more lonely process that need not involve more than one conjecture and a set of observations which have the potential to falsify it. Popper talks about improving the conjecture in response to refutation, but this process more resembles Lamarckian evolution than Darwinian evolution, because in Lamarckian evolution the individuals improve themselves in response to environmental challenges, much as Popper would have us improve our conjectures in response to observational challenges. Also, in Lamarckian evolution, as in the Popperian process of conjecture and refutation, competing variants (compare: competing hypotheses) do not play an essential role (though I'm sure they could be introduced). Rather, the picture is of a single animal (compare: a single hypothesis) facing existential environmental challenges (compare: facing the potential for falsification) improving itself in response (which improvement is passed to offspring).
The Popperian process of conjecture, refutation, and improvement of the conjecture, can as it happens be understood from a Bayesian standpoint. It does implement Bayesian updating in a certain way. Specifically, when a particular conjecture is refuted and the scientist modifies the conjecture - at that point, there are two competing hypotheses. So at that point, the process of choosing between these two competing hypotheses can be characterized as Bayesian updating. The less successful hypothesis is weakened, and the more successful hypothesis is strengthened.
In short, if you want to take seriously the analogy that does exist between evolution through natural selection and knowledge acquisition of whatever type, then you may want to take a closer look at Bayesian updating as conforming more closely to the Darwinian model.
↑ comment by curi · 2011-04-07T22:12:27.997Z · LW(p) · GW(p)
> In short, if you want to take seriously the analogy
I wasn't talking about an analogy.
Evolution is a theory which applies to any type of replicator. Not by analogy; it literally applies.
Make sense so far?
↑ comment by [deleted] · 2011-04-07T22:22:10.277Z · LW(p) · GW(p)
That only strengthens my argument.
↑ comment by curi · 2011-04-07T22:24:48.553Z · LW(p) · GW(p)
You said we were discussing an analogy. That was a mistake. How can having made a mistake strengthen your argument? When you make a mistake, and find out, you should be like "uh oh. maybe i made 2. or 3. i better rethink things a bit more carefully. maybe the mistake is caused by a misunderstanding that could cause multiple mistakes." I don't think glossing over mistakes is rational or wise.
Make sense so far?
↑ comment by Randaly · 2011-04-08T14:40:12.846Z · LW(p) · GW(p)
Because if there is only an analogy between evolution and knowledge acquisition, there are some aspects of each that are not the same, and it is possible that these differences mean that the specific factor under consideration is not the same; but if the two processes are literally the same, that is not possible.
"How can having a mistake strengthen your argument?"
Example: During WWII, many American leaders didn't believe that Germany was actually committing massacres, as they were disillusioned by similar but inaccurate WWI propaganda; however, they still believed that Nazi aggression was morally wrong. Later, the death camps were discovered. Clearly, given that they were mistaken in disbelieving in the Holocaust, they were mistaken in believing that the Nazis were morally wrong, because how can making a mistake strengthen your argument?
↑ comment by [deleted] · 2011-04-07T22:30:35.400Z · LW(p) · GW(p)
> Make sense so far?
Your defects would be easier to tolerate if you were less arrogant. A bit of humility would go a long way to keeping the conversation going. My guess is that you picked up your approach because it led to your being the last person standing, winning by attrition - when in reality the other participants were simply too disgusted to continue.
↑ comment by JoshuaZ · 2011-04-07T15:40:44.887Z · LW(p) · GW(p)
> The number of major ideas in epistemology is not very large. After Aristotle, there wasn't very much innovation for a long time. It's a small enough field you can actually trace ideas all the way back to the start of written history. Any professional can look at everything important. Some Bayesian should have. Maybe some did, but I haven't seen anything of decent quality.
As to a professional, I already referred you to Earman. Incidentally, you seem to be narrowing the claim somewhat. Note that I didn't say that the set of major ideas in epistemology isn't small; I referred to the much larger class of philosophical ideas (although I can see how that might not be clear from my wording). And that set is indeed very large. However, I think that your claim about "after Aristotle" is both wrong and misleading. There's a lot of thought about epistemological issues in both the Islamic and Christian worlds during the Middle Ages. Now, you might argue that that's not helpful or relevant since it gets tangled up in theology and involves bad assumptions. But that's not to say the material doesn't exist. And that's before we get to non-Western stuff (which admittedly I don't know much about at all).
(I agree when you restrict to professionals, and have already recommended Earman to you.)
> It works exactly identically to how Popperian epistemology creates any other kind of knowledge. There's nothing special for morality.
> Knowledge is created by an evolutionary process involving conjecture and refutation. By criticizing flaws in ideas, we seek to improve them (by making better conjectures we hope will eliminate the flaws).
This is a deeply puzzling set of claims. First of all, a major point of his epistemological system is falsifiability based on data (at least as I understand it from LScD). How that would at all interact with moral issues is unclear to me. Indeed, the semi-canonical example of a non-falsifiable claim in the Popperian sense is Marxism, a set of ideas that has a large set of attached moral claims.
I also don't see how this works given that moral claims can always be criticized by the essential sociopathic argument "I don't care. Why should you?" Obviously, that line of thinking can be/should be expanded. To use your earlier example, how would you discuss "murder is wrong" in a Popperian framework? I would suggest that this isn't going to be any different than simply discussing moral ideas based on shared intuitions with particular attention to the edge cases. You're welcome to expand on these claims, but right now, nothing you've said in this regard is remotely convincing or even helpful since it amounts to just saying "well, do the same thing."
> I have a lot of familiarity with the other Popperians. But Popper and Deutsch are by far the best. There isn't really anything non-Popperian that draws on Popper much. Everyone who has understood Popper is a Popperian, IMO. If you disagree, do tell.
I'm going to be obnoxious and quote a friend of mine "Everyone who understands Christianity is a Christian." I don't have any deep examples of other individuals although I would tentatively say that I understood Popper's views in Logic of Scientific Discovery just fine.
> Do you have a criticism of his arguments in LScD or not?
Sure. The most obvious one is when he discusses the law of large numbers and the frequentist vs. Bayesian interpretations. (Incidentally, to understand those passages it is helpful to note that he uses the term "subjective" rather than "Bayesian" to describe Bayesians, which is consistent with the language of the time but in modern terminology has a very different meaning: it is now used to distinguish subjective from objective Bayesians.) In that section he argues (I don't have the page number, unfortunately, since I'm using my Kindle edition; I have a hard copy somewhere but I don't know where) that "it must be inadmissible to give after the deduction of Bernoulli's theorem a meaning to p different from the one which was given to it before the deduction." This is, simply put, wrong. Mathematicians all the time prove something in one framework and then interpret it in another framework. You just need to show that all the properties of the relevant frameworks overlap in sufficiently non-pathological cases. If someone wrote this as a complaint about, say, using the complex exponential to understand the symmetries of the Euclidean plane, we'd immediately see it as a bad claim. There's an associated issue in this section which also turns up but is more subtle: Popper doesn't appreciate what you can do with measure theory and L_p spaces and related ideas to move back and forth between different notions of probability and different metrics on spaces. That's ok, it was a very new idea when he wrote LScD (although the connections were to some extent definitely there). But it does render a lot of what he says simply irrelevant or outright wrong.
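For readers without the book at hand, the deduction at issue is Bernoulli's theorem (the weak law of large numbers), stated here in modern notation as a reference point:

```latex
% Bernoulli's theorem: if each of n independent trials succeeds with
% probability p, and S_n counts the successes, then for every
% \varepsilon > 0,
\[
  \lim_{n \to \infty}
  \Pr\!\left( \left| \frac{S_n}{n} - p \right| < \varepsilon \right) = 1.
\]
% Popper's objection: p enters the deduction under one interpretation
% (a formal or degree-of-belief probability) and is read off under
% another (a long-run frequency). The reply above is that such
% reinterpretation is legitimate once the two frameworks are shown to
% agree on the relevant properties.
```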
↑ comment by curi · 2011-04-07T18:58:34.285Z · LW(p) · GW(p)
> As to a professional, I already referred you to Earman.
Which you stated you had not read. I have rather low standards for recommendations of things to read, but "I never read it myself" isn't good enough.
I don't agree with "restrict to professionals". How is it to be determined who is a professional? I don't want to set up arbitrary, authoritative criteria for dismissing ideas based on their source.
> First of all, a major point of his epistemological system is falsifiability based on data (at least as I understand it from LScD).
That is a major point for scientific research where the problem "how do we use evidence?" is important. And the answer is "criticisms can refer to evidence". Note by "science" here I mean any empirical field. What do you do in non-scientific fields? You simply make criticisms that don't refer to evidence. Same method, just missing one type of criticism which is rather useful in science but not fundamental to the methodology.
> Indeed, the semi-canonical example of a non-falsifiable claim in the Popperian sense is Marxism, a set of ideas that has a large set of attached moral claims.
It is not empirically falsifiable. It is criticizable. For example, Popper criticized Marx in The Open Society and Its Enemies.
> I also don't see how this works given that moral claims can always be criticized by the essential sociopathic argument "I don't care. Why should you?"
Any argument which works against everything fails at the task of differentiating better and worse ideas. So it is a bad argument. So we can reject it and all other things in that category, by this criticism.
> To use your earlier example, how would you discuss "murder is wrong" in a Popperian framework?
The short answer is: since we don't care to have justified foundations, you can discuss it any way you like. You can say it's bad because it hurts people. You can say it's good because it prevents overpopulation. You can say it's bad because it's mean. These kinds of normal arguments, made by normal people, are not deemed automatically invalid and ignored. Many of them are indeed mistakes. But some make good points.
For more on morality, please join this discussion:
http://lesswrong.com/lw/552/reply_to_benelliott_about_popper_issues/3uv7
> I would tentatively say that I understood Popper's views in Logic of Scientific Discovery just fine.
He has like 20 books. There's way more to it. When one reads a lot of them, a whole worldview comes across that is very hard to understand from just a couple books. And I wasn't trying to argue with that statement, I was just commenting. I mentioned it because of a comment to do with whether I had studied results of non-Popperians using Popperian ideas.
"it must be inadmissable to give after the deduction of Bernoulli's theorem a meaning to p different from the one which was given to it before the deduction." This is, simply put, wrong.
Are you really telling me that you can prove something, then take the conclusion, redefine a term, and work with that, and consider it still proven? You could only do that if you created a second proof that the change doesn't break anything, you can't just do it. I'm not sure you took what Popper was saying literally enough; I don't think your examples later actually do what he criticized. Changing the meaning of a term in a conclusion statement, and considering a conclusion from a different perspective, are different.
> Popper doesn't appreciate what you can do with measure theory and L_p spaces
Would you understand if I said this has no relevance at all to 99.99% of Popper's philosophy? Note that his later books generally have considerably less mention of math or logic.
↑ comment by JoshuaZ · 2011-04-08T02:43:45.320Z · LW(p) · GW(p)
> Which you stated you had not read. I have rather low standards for recommendations of things to read, but "I never read it myself" isn't good enough.
Earman is a philosopher and the book has gotten positive reviews from other philosophers. I don't know what else to say in that regard.
> I don't agree with "restrict to professionals". How is it to be determined who is a professional? I don't want to set up arbitrary, authoritative criteria for dismissing ideas based on their source.
Hrrm? You mentioned professionals first. I'm not sure why you are now objecting to the use of professionals as a relevant category.
> That is a major point for scientific research where the problem "how do we use evidence?" is important. And the answer is "criticisms can refer to evidence". Note by "science" here I mean any empirical field. What do you do in non-scientific fields? You simply make criticisms that don't refer to evidence. Same method, just missing one type of criticism which is rather useful in science but not fundamental to the methodology.
I'm not at all convinced that this is what Popper intended (but again, I've only read LScD), but if this is accurate then Popper isn't wrong in an interesting way; he is just wrong. Does one mean, for example, to claim that pure mathematics works off of criticism? I'm a mathematician. We don't do this. Moreover, it isn't clear what it would even mean for us to try to do this as our primary method of inquiry. Are we supposed to spend all our time going through pre-existing proofs trying to find holes in them?
> He has like 20 books. There's way more to it. When one reads a lot of them, a whole worldview comes across that is very hard to understand from just a couple books.
Yes, and I'm quite sure that I get much more of a worldview if I read all of Hegel rather than just some of it. That doesn't mean I need to read all of it. Similar remarks would apply to Aquinas or more starkly the New Testament. Do you need to read all of the New Testament to decide that Christianity is bunk? Do you need to read the entire Talmud to decide that Judaism is incorrect? But you get a whole worldview that you don't obtain from just reading the major texts.
> The short answer is: since we don't care to have justified foundations, you can discuss it any way you like. You can say it's bad because it hurts people. You can say it's good because it prevents overpopulation. You can say it's bad because it's mean. These kinds of normal arguments, made by normal people, are not deemed automatically invalid and ignored. Many of them are indeed mistakes. But some make good points.
Right, and then we just get the criticism "why bother" or "and how does that maximize the number of paperclips in the universe?" Or one can say "mean", "good", "bad" are all hideously ill-defined. In any event, does it not bother you that you are essentially claiming that your moral discussion with your great epistemological system looks just like a discussion about morality by a bunch of random individuals? There's nothing in the above that uses your epistemology in any substantial way.
> Are you really telling me that you can prove something, then take the conclusion, redefine a term, and work with that, and consider it still proven? You could only do that if you created a second proof that the change doesn't break anything, you can't just do it.
Right! And conveniently in the case Popper cares about you can prove that.
> > Popper doesn't appreciate what you can do with measure theory and L_p spaces
> Would you understand if I said this has no relevance at all to 99.99% of Popper's philosophy? Note that his later books generally have considerably less mention of math or logic.
Do you mean understand or do you mean care? I don't understand why you are making this statement given that my remark was addressing the question you asked of whether I had specific problems with Popper's handling of Bayesianism in LScD. This is a specific problem there.
↑ comment by AlephNeil · 2011-04-08T18:12:28.618Z · LW(p) · GW(p)
> Does one mean, for example, to claim that pure mathematics works off of criticism? I'm a mathematician. We don't do this.
I don't know what Popper himself would say, but one of his more insightful followers, namely Lakatos, argues for exactly that position.
I read Proofs and Refutations too many years ago to say anything precise about it. I remember finding it interesting but also frustrating. Lakatos seems determined to ignore/deny/downplay the fact of mathematical practice that we only call something a 'theorem' when we've got a proof, and we only call something a 'proof' when it's logically watertight in such a way that no 'refutations' are possible. Still, it's well-researched (in its use of a historical case-study) and he comes up with some decent ideas along the way (e.g. about "monster barring" and "proof-oriented definitions".)
↑ comment by JoshuaZ · 2011-04-09T14:11:08.041Z · LW(p) · GW(p)
Yes, Lakatos does argue for that in a certain fashion (and I suppose it is right to bring this up since I've myself repeatedly pointed people here on LW to read Lakatos when they think that math is completely reliable). However, Lakatos took a more nuanced position than the one curi is apparently taking, that math advances solely through this method of criticism. I also think Lakatos is wrong in so far as the examples he uses are not actually representative samples of what the vast majority of mathematics looks like. Euler's formula is an extreme example, and it is telling that when one wants to give other similar examples, one often gives other topological claims from before 1900 or so.
↑ comment by curi · 2011-04-08T09:47:58.750Z · LW(p) · GW(p)
Does one mean for example to claim that pure mathematics works off of criticism?
yes
I'm a mathematician. We don't do this.
Instead, you make appeals to authority?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-08T13:45:36.527Z · LW(p) · GW(p)
I'm a mathematician. We don't do this.
Instead, you make appeals to authority?
You are confused about what that means. An appeal to authority is not intrinsically fallacious. An appeal to authority is problematic when the authority is irrelevant (e.g. a celebrity who plays a doctor on TV endorsing a product) or when one is claiming that one has a valid deduction in some logical system. Someone making an observation about what people in their profession actually do is not a bad appeal to authority in the same way. In any event, you ignored the next line of my comment:
Moreover, it isn't clear what it would even mean for us to try to do this as our primary method of inquiry. Are we supposed to spend all our time going through pre-existing proofs trying to find holes in them?
If you do think that mathematicians use Popperian reasoning then please explain how we do it.
Replies from: curi↑ comment by curi · 2011-04-08T16:49:03.006Z · LW(p) · GW(p)
An appeal to authority is not intrinsically fallacious.
It is in Popperian epistemology.
Could you point me to a Bayesian source that says they are OK? I'd love to have a quote of Yudkowsky advocating appeals to authority, for instance. Or could others comment? Do most people here think appeals to authority are good arguments?
Replies from: Marius↑ comment by Marius · 2011-04-08T17:05:13.175Z · LW(p) · GW(p)
An appeal to authority is not logically airtight, and if logic is about mathematical proofs, then it's going to be a fallacy. But an appeal to an appropriate authority gives Bayesians strong evidence, provided that P(X | Authority believes X) is sufficiently high. In many fields, authorities have sufficient track records that appeals to authority are good arguments. In other fields, not so much.
Of course, the Appeal to Insufficient Force fallacy is a different story from the Appeal to Inappropriate Authority.
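For concreteness, a minimal sketch of the update Marius describes; the helper function and all numbers are illustrative assumptions, not anything from the thread:

```python
# Illustrative sketch only: updating on an expert endorsement via Bayes' theorem.
# p_x is the prior P(X); the two likelihoods encode the expert's track record.
def update_on_expert(p_x, p_endorse_if_true, p_endorse_if_false):
    """Posterior P(X | expert endorses X)."""
    numerator = p_endorse_if_true * p_x
    return numerator / (numerator + p_endorse_if_false * (1 - p_x))

# An expert who endorses 90% of truths but only 20% of falsehoods moves a
# 50% prior up to about 0.82; a TV-doctor celebrity with no real track
# record (0.5 vs 0.5) moves it nowhere.
print(update_on_expert(0.5, 0.9, 0.2))  # ~0.818
print(update_on_expert(0.5, 0.5, 0.5))  # 0.5
```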
Replies from: curi↑ comment by curi · 2011-04-08T17:10:23.461Z · LW(p) · GW(p)
How do you judge:
P(X | Authority believes X)
In general I judge it very low. Certainly in this case.
Can you provide a link to Yudkowsky or any well known Bayesian advocating appeals to authority?
Replies from: Marius, benelliott↑ comment by Marius · 2011-04-08T17:23:40.980Z · LW(p) · GW(p)
How do you judge: P(X | Authority believes X)
Track record of statements/predictions, taking into account the prior likelihood of previous predictions and prior likelihood of current prediction.
Can you provide a link to Yudkowsky or any well known Bayesian advocating appeals to authority?
Are you asking us to justify appeals to authority by using an appeal to authority?
edit per wedrifid
Replies from: wedrifid, curi↑ comment by curi · 2011-04-08T17:24:50.834Z · LW(p) · GW(p)
Are you asking us to justify appeals to authority by using an appeal to authority?
No lol. I just wanted one to read. Some of my friends will be interested in it too.
Track record of statements/predictions
Since the guy who made the appeal to authority has little track record with me, and little of it good in my view, why would he expect me to concede to his appeal to authority?
↑ comment by benelliott · 2011-04-08T17:22:42.412Z · LW(p) · GW(p)
Robin Hanson does so here.
Replies from: curi↑ comment by curi · 2011-04-08T17:30:31.889Z · LW(p) · GW(p)
Too much ambiguity there; e.g. the word "authority" isn't used.
Replies from: benelliott↑ comment by benelliott · 2011-04-08T19:36:00.122Z · LW(p) · GW(p)
This is silly. Whether or not he uses the word authority does not change the fact he is suggesting that we treat the opinions of experts as more accurate than our own opinions.
I had a lot of respect for you before you made this comment, but you have now lost most of it.
Replies from: curi↑ comment by curi · 2011-04-08T19:40:33.392Z · LW(p) · GW(p)
The idea that appeals to authority are good arguments is not identical to the idea that the opinions of experts are more accurate. Suppose they are more accurate, on average. Does that make appealing to one a good argument? I don't think so and my friends won't. They won't know if Hanson thinks so.
For the purposes I wanted to use it for, this will not work well.
One thing I know about some of my friends is that they consider the word "authority" to be very nasty, but the word "expert" to be OK. They specifically differentiate between expertise (a legitimate concept) and authority (an illegitimate concept). Hanson's use of the expertise terminology, instead of the authority terminology, will matter to them. Explaining that he meant what they call authority will add complexity -- and scope for argument -- and be distracting. And people will find it boring and ignore it as a terminological debate.
And I'm not even quite sure what Hanson did mean. I don't think what he meant is identical to what the commenter I was speaking to meant.
Hanson speaks of, for example, "if you plan to mostly ignore the experts". That you shouldn't ignore them is a different claim than that appeals to their authority are good arguments.
Replies from: benelliott, JoshuaZ↑ comment by benelliott · 2011-04-08T21:32:16.076Z · LW(p) · GW(p)
He's stated before, I'm not sure where, that if you believe an expert has more knowledge about an issue than you then you should prefer their opinions to any argument you generate. This is because if they disagree with you it is almost certainly because they have considered and rejected your argument, not because they have not considered your argument.
One thing I know about some of my friends is that they consider the word "authority" to be very nasty, but the word "expert" to be OK. They specifically differentiate between expertise (a legitimate concept) and authority (an illegitimate concept). Hanson's use of the expertise terminology, instead of the authority terminology, will matter to them.
If your friends cannot differentiate between the content of an argument and its surface appearance then I would advise you find new friends [/facetious].
Replies from: curi↑ comment by curi · 2011-04-08T21:48:02.514Z · LW(p) · GW(p)
They can, but some won't be interested in researching this.
I think Hanson's approach to experts (as you describe it) is irrational because it abdicates from thinking. And in particular, if you think you don't know what you're talking about (i.e. think your argument isn't good enough) then don't use it, but if you think otherwise you should respect your own mind (if you're wrong to think otherwise, convince yourself).
Besides, in all the interesting real cases, there are experts advocating things on both sides. One expert disagrees with you. Another reaches the same conclusion as you. What now?
Replies from: benelliott↑ comment by benelliott · 2011-04-08T22:01:22.547Z · LW(p) · GW(p)
if you think otherwise you should respect your own mind (if you're wrong to think otherwise, convince yourself).
Hanson would suggest that this is pure, unjustified arrogance. I'm not sure I agree with him; I struggle to fault the argument, but it's still a pretty tough bullet to bite.
Have you heard of the Outside View? Hanson's a big fan of it, and if you don't know about it his thought process won't always make much sense.
Besides, in all the interesting real cases, there are experts advocating things on both sides. One expert disagrees with you. Another reaches the same conclusion as you. What now?
You could go with the consensus, or with the majority, or you could come up with a procedure for judging which are most trustworthy. If the experts can't resolve this issue what makes you think you can? More importantly, if you know less than the average expert, then aren't you better off just picking one expert at random rather than trusting yourself?
Replies from: curi↑ comment by curi · 2011-04-08T22:09:26.225Z · LW(p) · GW(p)
Is the majority of experts usually right? I don't think so. Whenever there is a new idea, which is an improvement, usually for a while a minority believe it. In a society with rapid progress, this is a common state.
Have you heard of the Outside View?
no
if you know less than the average expert, then aren't you better off just picking one expert at random rather than trusting yourself?
Why not learn something? Why not use your mind? I don't think that thinking for yourself is arrogant.
In my experience reading (e.g.) academic papers, most experts are incompetent. The single issue of misquoting is ubiquitous. People publish misquotes even in peer-reviewed journals. E.g. I discovered a fraudulent Edmund Burke quote which was used in a bunch of articles. Harry Binswanger (an Objectivist expert) posted misquotes (both getting the source wrong, and inserting bracketed explanatory text to explain context which was dead wrong). Thomas Sowell misquoted Godwin in a book that discussed Godwin at length.
I can sometimes think better than experts, in their own field, in 15 minutes. In cases where I should listen to expert advice, I do, without disagreeing with the expert and overruling my judgment (e.g. I'm not a good cook. When I don't know how to make something I use a recipe. I don't think I know the answer, so I don't get overruled. I can tell the difference between when I have an opinion that matters and when I don't.)
In the case of cooking, I think the experts I use would approach the issue in the same way I would if I learned the field myself (in the relevant respects). For example, they would ask the same questions I am interested in, like "If I test this recipe out, does it taste good?" Since I think they already did the same work I would do, there's no need to reinvent the wheel. In other cases, I don't think experts have addressed the issue in a way that satisfies me, so I don't blindly accept their ideas.
Replies from: benelliott↑ comment by benelliott · 2011-04-08T22:38:01.308Z · LW(p) · GW(p)
To be honest I'm not exactly a passionate Hansonian; I read his blog avidly because what he has to say is almost always original, but if you want to find a proponent of his to argue with you may need to look elsewhere. Still, I can play devil's advocate if you want.
Is the majority of experts usually right? I don't think so. Whenever there is a new idea, which is an improvement, usually for a while a minority believe it. In a society with rapid progress, this is a common state.
At any time, most things are not changing, so most experts will be right about most things. Anyway, the question isn't whether experts are right, it's why you think you are more reliable.
Brief introduction to the Outside View:
Cognitive scientists investigating the planning fallacy (in which people consistently and massively underestimate the amount of time it will take them to finish a project) decided to try to find a 'cure'. In a surprising twist, they succeeded. If you ask the subject "how long have similar projects taken you in the past" and only then ask the question "how long do you expect this project to take" the bias is dramatically reduced.
They attributed this to the fact that in the initial experiment students had been taking the 'inside view' of their project. They had been examining each individual part on its own, and imagining how long it was likely to take. They made the mistake of failing to imagine enough unexpected delays. If they instead took the outside view, by looking at other similar projects and seeing how long those took, then they ended up implicitly taking the unexpected delays into account, because most of those other projects encountered delays of their own.
In general, the outside view says "don't focus on specifics, you will end up ignoring unexpected confounding elements from outside your model. Instead, consider the broad reference class of problems to which this problem belongs and reason from them".
Looking at your third-to-last paragraph I can see a possible application of it. You belong to the broad reference class of "people who think they have proven an expert wrong". Most such people are either crackpots, confused or misinformed. You don't think of yourself as any of these things, but neither do most such people. Therefore you should perhaps give your own opinions less weight.
(Not a personal attack. I do not mean to imply that you actually are a crackpot, confused or misinformed, for all I know you may be absolutely right, I'm just demonstrating the principle).
This very liberal use of the theory has come under criticism from other Bayesians, including Yudkowsky. One of its problems is that it is not always clear which reference class to use.
A more serious problem comes when you apply it to its logical extreme. If we take the reference class "people who have believed themselves to be Napoleon" then most of them were/are insane. Does this mean Napoleon himself should have applied the outside view and concluded that he was probably insane?
Why not learn something? Why not use your mind? I don't think that thinking for yourself is arrogant.
Like I said, tough bullet to bite.
Replies from: curi↑ comment by curi · 2011-04-08T22:46:14.383Z · LW(p) · GW(p)
Anyway, the question isn't whether experts are right, it's why you think you are more reliable.
This question is incompatible with Popperian philosophy. Ideas haven't got reliability, which is just another word for justification. Trying to give it to them leads to problems like regress.
What we do instead is act on our best knowledge without knowing how reliable it is. That means preferring ideas which we don't see anything wrong with to those that we do see something wrong with.
When you do see something wrong with an expert view, but not with your own view, it's irrational to do something you expect not to work over something you expect to work. Of course if you use double standards for criticism of your own ideas and other people's, you will go wrong. But the solution to that isn't deferring to experts, it's improving your mind.
Most such people are either crackpots, confused or misinformed.
Or maybe they have become experts by thinking well. How does one get expert status anyway? Surely if I think I can do better than people with college degrees at various things, that's not too dumb. I'm e.g. a better programmer than many people with degrees. I have a pretty good sense of how much people do and don't learn in college, and how much work it is to learn more on one's own. The credential system isn't very accurate.
edit: PS please don't argue stuff you don't think is true. if no true believers want to argue it, then shrug.
Replies from: benelliott↑ comment by benelliott · 2011-04-08T23:03:56.090Z · LW(p) · GW(p)
You seemed curious so I explained.
Incidentally, someone has been downvoting Curi's comments and upvoting mine, would they like to step forward and make the case? I'm intrigued to see some of his criticisms answered.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-09T16:11:23.317Z · LW(p) · GW(p)
I suspect that the individuals who are downvoting curi's remarks in this subthread are doing so because much of what he is saying consists of things he has already said elsewhere, and people are getting annoyed at him. I suspect that his comments are also being downvoted because he first used the term "authority" and then tried to draw a distinction between "expertise" and "authority", when under his own definition the first use of such an argument would seem to fall under what he classifies as expertise. Finally, I suspect that his comments in this subthread have been downvoted for his apparent general arrogance regarding subject-matter experts, such as his claim that "I can sometimes think better than experts, in their own field, in 15 minutes."
↑ comment by JoshuaZ · 2011-04-09T15:12:47.810Z · LW(p) · GW(p)
The idea that appeals to authority are good arguments is not identical to the idea that the opinions of experts are more accurate. Suppose they are more accurate, on average. Does that make appealing to one a good argument?
What do you mean by good argument? The Bayesians have an answer to this. They mean that P(claim|argument) > P(claim). Now, one might argue in that framework that if P(claim|argument)/P(claim) is close to 1 then this isn't a good argument, or that if log(P(claim|argument)/P(claim)) is small compared to the effort to present and evaluate the argument then it isn't a good argument.
However, that's obviously not what you mean. It isn't clear to me what you mean by "good argument" and how this connects to the notion of a fallacy. Please expand your definitions or taboo the terms.
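On that definition, argument strength is just the log of the posterior-to-prior ratio. A tiny sketch with assumed numbers:

```python
import math

# Assumed numbers: the criterion above calls an argument good when
# P(claim | argument) > P(claim); the log-ratio measures how good.
p_claim = 0.3            # prior probability of the claim (assumption)
p_claim_given_arg = 0.6  # posterior after hearing the argument (assumption)

strength = math.log(p_claim_given_arg / p_claim)
print(strength > 0)  # True: a "good argument" on this definition
print(strength)      # ~0.693; to be weighed against the effort of evaluating it
```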
↑ comment by jimrandomh · 2011-04-07T03:18:22.658Z · LW(p) · GW(p)
Cox's theorem is a proof of Bayes' rule, from the conditions above. "Consistency" in this context means (Jaynes 19): if a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result; we always take into account all of the evidence we have relevant to a question; and we always represent equivalent states of knowledge by equivalent plausibility assignments. By "reason in more than one way", we specifically mean adding the same pieces of evidence in different orders.
(Edit: It's page 114 in the PDF you linked. That seems to be the same text as my printed copy, but with the numbering starting in a different place for some reason.)
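For reference, a standard sketch (not Jaynes's exact notation) of how Bayes' rule drops out once Cox's conditions have forced the product rule:

```latex
% The product rule, factorized both ways, then divided by p(B|C):
\[
  p(AB \mid C) = p(A \mid BC)\, p(B \mid C) = p(B \mid AC)\, p(A \mid C)
  \;\Longrightarrow\;
  p(A \mid BC) = \frac{p(B \mid AC)\, p(A \mid C)}{p(B \mid C)}.
\]
```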
Replies from: None↑ comment by [deleted] · 2011-04-07T03:43:28.776Z · LW(p) · GW(p)
Assigning degrees of plausibility to theories is an attempt to justify them. Cox's theorem just assumes you can do this. Popper argued that justification, including probabilistic justification, is impossible. How does just assuming something that Popper refuted show anything?
Replies from: benelliott, timtyler↑ comment by benelliott · 2011-04-07T06:16:17.756Z · LW(p) · GW(p)
One argument for plausibility would be this.
At some point you may be called on to base a decision on whether something is true or false. The simplest of these decisions can be reduced to betting for or against something, and you cannot always choose not to bet. There must be some odds at which you switch from betting on falsity to betting on truth, and those can be taken to demonstrate your plausibility assignment.
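A minimal sketch of that construction; the function name and numbers are illustrative assumptions:

```python
# The odds at which you switch sides pin down an implied probability.
def implied_probability(switch_odds_against):
    """If you start betting on truth once offered k:1 odds against it,
    your implied probability of truth is 1 / (k + 1)."""
    return 1.0 / (switch_odds_against + 1.0)

print(implied_probability(3.0))  # switching at 3:1 against implies p = 0.25
print(implied_probability(1.0))  # switching at even odds implies p = 0.5
```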
Replies from: None↑ comment by [deleted] · 2011-04-07T07:41:01.997Z · LW(p) · GW(p)
How does betting on the truth of a universal theory work? I can't see a bookie ever paying out on that, although it would be good business to get punters to take such bets.
Replies from: benelliott, timtyler↑ comment by benelliott · 2011-04-07T11:16:26.093Z · LW(p) · GW(p)
The usual way on Less Wrong is to bring in Omega, the all-powerful, all-knowing entity who spends his free time playing games with us mortals, and for some reason most of his games illustrate some point of probability or decision theory. With Omega acting as the bookie you can be forced to assign a probability to any meaningful statement. Some people just respond to such scenarios by asserting that Omega is impossible; I don't know if you're one of those people, but I'll try a different approach anyway.
Imagine that in 2050 the physicists have narrowed down all their candidates for a Theory of Everything to just two possibilities, creatively named X-theory and Y-theory.
An engineer who is a passionate supporter of X-theory has designed and built a new power plant. If X-theory is correct, his power plant will produce a limitless supply of free energy and bring humanity into a post-scarcity era.
However, a number of physicists have had a look at his designs, and have shown that if Y-theory is correct his power plant will create a black hole and wipe out humanity as soon as it is turned on. Somehow, it has ended up being your decision whether or not it goes on.
This is one such 'bet'. It may not be a very likely scenario, but you should still be able to handle it. If we combine it with many slightly altered dilemmas we can figure out your probability estimate of theory X being correct, whether you admit to having one or not.
Replies from: None↑ comment by [deleted] · 2011-04-07T23:18:05.462Z · LW(p) · GW(p)
You've presented this as a scenario in which you have to make a choice between two conflicting theories. But the problem you face isn't should I choose X or should I choose Y; the problem you face is that given you have this conflict, what should I do now. This problem is objective, it is different to the problem of whether X is right or Y is right, and it is solvable. Given that this is the year 2050 and humanity won't in fact be wanting, the best solution to the problem may be to wait, pending further research to resolve the conflict. This isn't an implicit bet against X and for Y, it is a solution to a different problem to the problems X and Y address.
Replies from: benelliott, JoshuaZ↑ comment by benelliott · 2011-04-08T06:56:28.158Z · LW(p) · GW(p)
For the sake of argument, say that the plant requires a rare and unstable isotope to get started. Earth's entire supply is contained in the plant and will decay in 24 hours.
I could also ask you a similar dilemma, but this time there is only one theory which acknowledges that whether the plant works or creates a black hole depends on a single quantum event, which has a 50% chance of going either way. What do you do? If you wouldn't launch I can ask the same question but now there's only a 25% chance of a black hole, and so on until I learn the ratio of the utility values that you assign to "post scarcity future" and "extinction of humanity". This might for example tell me that the chance of a black hole has to be less than 30% for you to press the button.
Then I ask you the original dilemma, and learn whether the probability you assign to theory X is above or below 70%. If I have far too much time on my hands I can keep modifying the dilemma with slightly altered pay-offs until I pinpoint your estimate.
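A sketch of this elicitation procedure, with invented utility numbers (only the method matters):

```python
U_POST_SCARCITY = 100.0   # assumed utility of free energy
U_EXTINCTION = -1000.0    # assumed utility of a black hole
U_STATUS_QUO = 0.0

def would_launch(p_black_hole):
    """Launch iff expected utility beats doing nothing."""
    ev = (1 - p_black_hole) * U_POST_SCARCITY + p_black_hole * U_EXTINCTION
    return ev > U_STATUS_QUO

# Scanning risk levels reveals the indifference threshold (1/11 with these
# numbers); posing the original dilemma then brackets your P(theory X).
for p in (0.05, 0.09, 0.10, 0.30):
    print(p, would_launch(p))
```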
Replies from: wedrifid↑ comment by wedrifid · 2011-04-08T07:07:57.662Z · LW(p) · GW(p)
creates a black whole
I suppose you get that when the container containing the black dye explodes....
Replies from: benelliott↑ comment by benelliott · 2011-04-08T07:37:13.574Z · LW(p) · GW(p)
Damn, I made that mistake every single time I typed it and I thought I'd corrected them all.
↑ comment by JoshuaZ · 2011-04-08T03:32:23.612Z · LW(p) · GW(p)
This avoids the question. If it helps, try to construct a version of this in the least convenient possible world. For example, one obvious move is to stipulate that something about theory X means the plant can only be turned on at a certain celestial conjunction (yes, this is silly, but it gets the point across; that's why it is a least convenient world) and otherwise would need to wait a thousand years.
One can vary the situation. For example, it might be that under theory X, medicine A will save a terminally ill cancer patient, and under theory Y, medicine B will save them. And A and B together will kill the patient according to both theories.
↑ comment by timtyler · 2011-04-07T12:31:50.290Z · LW(p) · GW(p)
So: just bet on things the theory predicts instead.
Replies from: None↑ comment by [deleted] · 2011-04-07T23:25:40.937Z · LW(p) · GW(p)
Having the prediction turn out doesn't make the theory true or more likely; it is just consistent evidence. There are an infinitude of other theories that the same evidence is consistent with.
Replies from: timtyler↑ comment by timtyler · 2011-04-08T12:16:34.898Z · LW(p) · GW(p)
To give a simple example, consider flipping a coin. You observe HHH. Is this a fair coin? or a double-headed one? or a biased coin? Different theories describe these situations, and you could be asked to bet on them. Imagine you then further observe HHHH - making a total of HHHHHHH. This makes your estimate of the chances of the "double-headed coin" hypothesis go up. Other hypotheses may increase in probability too - but we are not troubled by there being an infinity of them, since we give extra weight to the simpler ones, using Occam's razor.
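A sketch of the coin example, reduced to two hypotheses for brevity (fair vs. double-headed); the 1% prior on "double-headed" is an assumption standing in for the Occam weighting:

```python
def p_double_headed(n_heads, prior_double=0.01):
    like_double = 1.0            # a double-headed coin always shows heads
    like_fair = 0.5 ** n_heads   # a fair coin shows n heads with prob 2^-n
    num = like_double * prior_double
    return num / (num + like_fair * (1 - prior_double))

print(p_double_headed(3))  # after HHH:     ~0.07
print(p_double_headed(7))  # after HHHHHHH: ~0.56
```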
↑ comment by timtyler · 2011-04-07T12:36:50.163Z · LW(p) · GW(p)
Assigning degrees of plausibility to theories is an attempt to justify them. Cox's theorem just assumes you can do this.
Do you think that grue and bleen are as plausible as blue and green? Would you like to bet?
Replies from: benelliott↑ comment by benelliott · 2011-04-07T12:44:50.037Z · LW(p) · GW(p)
Nitpicking here: grue and bleen aren't statements and thus can't be assigned probabilities. "This object is grue" and "this object is bleen" are statements.
Replies from: timtyler↑ comment by timtyler · 2011-04-07T13:21:50.417Z · LW(p) · GW(p)
Yes, I left making up more specific examples as an exercise for the reader.
Replies from: None↑ comment by [deleted] · 2011-04-07T23:55:24.600Z · LW(p) · GW(p)
Assuming that the object in question is an emerald, then grue is in conflict with our best explanations about emeralds whereas there are no known problems with the idea that the emerald is green. So I go with green, but not because I have assigned degrees of plausibility but because I see no problem with green.
comment by paulfchristiano · 2011-04-07T00:19:41.752Z · LW(p) · GW(p)
I don't understand Popper's work beyond the Wikipedia summary of critical rationalism. That summary, as well as the debate here at LW, appear to be confused and essentially without value. If this is not the case, you should update this post to include not just a description of how supporters of Bayesianism don't understand Popper, but why they should care about this discussion--why Bayesianism is not, as it seems, obviously the correct answer to the question Popper is trying to answer.
If you want to make bets about the future, Bayesianism will beat whatever else you could use. To suggest that something else is an improved method of doing science is nothing more than to suggest that it is a more feasible approximation to Bayesianism. These things are mathematical facts, if you define Bayesianism and "winning" precisely.
It seems like the only possible room for debate is the choice of prior. Everyone is forced to either implicitly choose a prior or else bet in a way that is manifestly irrational. This is also a mathematical fact. The Solomonoff prior provably isn't too bad. You just have to get over the arbitrariness.
Edit: Let's make this more precise. I claim that if we play a betting game, I can reconstruct a prior from your strategy such that a Bayesian using that prior will beat you in expectation. Do you object to this mathematical statement, or do you object to the interpretation of this fact as "Bayesianism is correct"? I'm not sure which side of the fence you are on, but I suppose it must be one or the other, so if we get that sorted out maybe we can make progress.
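A toy version of the reconstruction claim, under the assumption that the agent accepts exactly the positive-expected-value bets; names and numbers are illustrative:

```python
# Each bet: win 1 unit if the claim is true, lose k units if false.
# Accepting at stake k reveals p > k/(k+1); rejecting reveals p < k/(k+1).
def bracket_probability(decisions):
    """decisions: list of (k, accepted). Returns bounds on the implied p."""
    lower = max((k / (k + 1) for k, a in decisions if a), default=0.0)
    upper = min((k / (k + 1) for k, a in decisions if not a), default=1.0)
    return lower, upper

# Accepting at stakes 1 and 2 but rejecting at 4 brackets p in (2/3, 4/5).
print(bracket_probability([(1, True), (2, True), (4, False)]))
```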
Replies from: curi↑ comment by curi · 2011-04-07T00:30:32.220Z · LW(p) · GW(p)
I don't understand Popper's work beyond the Wikipedia summary of critical rationalism
FYI that won't work. Wikipedia doesn't understand Popper. Secondary sources promoting myths, as Jaynes did, are common. A pretty good overview is the Popper book by Bryan Magee (only like 100 pages).
without value
I posted criticisms of Jaynes' arguments (or more accurately, his assumptions). I posted an argument about support. Why don't you answer it?
You just have to get over the arbitrariness.
You are basically admitting that your epistemology is wrong. Given that Popper has an epistemology which does not have this feature, and the rejections of him by Bayesians are unscholarly mistakes, you should be interested in it!
Of course if I wrote up his whole epistemology and posted it here for you that would be nice. But that would take a long time, and it would repeat content from his books.
If you want somewhere to start online, you could read
If you want to make bets about the future
That is not primarily what we want. And what you're doing here is conflating Bayes' theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes' theorem to epistemological problems, rather than to the math behind betting).
To suggest that something else is an improved method of doing science is nothing more than to suggest that it is a more feasible approximation to Bayesianism. These things are mathematical facts,
Are you open to the possibility that the general outline of your approach is itself mistaken, and that the theorems you have proven within your framework of assumptions are therefore not all true? Or:
It seems like the only possible room for debate is the choice of prior.
Are you so sure of yourself -- that you are right about many things -- that you will dismiss all rival ideas without even having to know what they say? Even when they offer things your approach doesn't have, such as not having arbitrary foundations.
What you're doing is accepting ideas which have been popular since Aristotle. When you think no other ways are possible, that's bias talking. Your ideas have become common sense (not the Bayes part, but the philosophical approach to epistemology you are taking which comes before you use Bayes's theorem at all).
Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper's epistemology? Someone should have done it, right? And if no one ever has, then you should be interested in investigating, right? And also interested in investigating what is wrong with your movement that it never addressed rival ideas in scholarly debate. (I have looked for such a criticism. Never managed to find one.)
Replies from: paulfchristiano, Peterdjones, paulfchristiano, timtyler↑ comment by paulfchristiano · 2011-04-07T01:02:58.619Z · LW(p) · GW(p)
Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper's epistemology? Someone should have done it, right?
Most things in the space of possible documents can't be refuted, because they don't correspond to anything refutable. They are simply confused, and irredeemably. In the case of epistemology, virtually everything that has ever been said falls into this category. I am glad that I don't have to spend time thinking about it, because it is solved. I would not generally criticize a rival's ideas, because I no longer care. The problem is solved, and I can go work on things that still matter.
Are you so sure of yourself -- that you are right about many things -- that you will dismiss all rival ideas without even having to know what they say?
Once I know the definitive answer to a question, I will dismiss all other answers (rather than trying to poke holes in them). The only sort of argument which warrants response is an objection to my current definitive answer. So ignorance of Popper is essentially irrelevant (and I suspect I couldn't object to anything in his philosophy, because it has essentially no content concrete enough to be defeated by mere reasoning).
The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted--whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested, except that I am fairly confident the problem has no solution for what are essentially obvious reasons. Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.
Really this entire discussion comes down to what we want out of epistemology.
That [guiding betting] is not primarily what we want.
What do you want? I don't understand at all. Whatever you specify, I would be shocked if critical rationality provided it. Here is what I want, and maybe you will agree:
I want to decide between action A and action B. To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world. In particular, by choosing B instead of A, I am making a bet about the consequences of A and B. I would like to make such bets in the best possible way.
Lo! This is precisely what Bayesianism allows me to do. Why is there more to say?
You can object that it involves knowing a prior. But from the problem statement it is obvious (as a mathematical fact) that there is a universe in which each possible prior is the best one. Is there a strategy that does better than Bayesianism with a reasonable prior in all possible universes? Maybe, but Popper's ideas aren't nearly precise enough to answer the question (by which I mean, not even at the point where this question, to me clearly the most important one, is meaningful). Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore "avoids" this difficulty?
If I have to bet, or make a decision that affects people's lives which amounts to a bet, I am going to use Bayesianism, or a computational heuristic which I justify by Bayesianism. Doing something else seems irresponsible.
Replies from: curi, None↑ comment by curi · 2011-04-07T02:07:06.987Z · LW(p) · GW(p)
Most things in the space of possible documents can't be refuted, because they don't correspond to anything refutable. They are simply confused, and irredeemably.
You don't think confused things can be criticized? You can, for example, point out ambiguous passages. That would be a criticism. If they have no clarification to offer, then it would be (tentatively and fallibly) decisive (pending some reason to reconsider).
But you haven't provided any argument that Popper in particular was confused, irrefutable, or whatever. I don't know about you, but as someone who wants to improve my epistemological knowledge I think it's important to consider all the major ideas in the field at the very least enough to know one good criticism of each.
Refusing to address criticism because you think you already have the solution is very closed minded, is it not? You think you're done with thinking, you have the final truth, and that's that..?
The only sort of argument which warrants response is an objection to my current definitive answer.
Popper published several of those. Where's the response from Bayesians?
One thing to note is it's hard to understand his objections without understanding his philosophy a bit more broadly (or you will misread stuff, not knowing the broader context of what he is trying to say, what assumptions he does not share with you, etc...)
The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted--whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested
Popper solved that problem.
I am fairly confident the problem has no solution for what are essentially obvious reasons
The standard reasons seem obvious because of your cultural bias. Since Aristotle some philosophical assumptions have been taken for granted by almost everyone. Now most people regard them as obvious. Given those assumptions, I agree that your conclusion follows (no way to avoid arbitrariness). The assumptions are called "justificationism" by Popperians, and are criticized in detail. I think you ought to be interested in this.
One criticism of justificationism is that it causes the regress/arbitrariness/foundations problem. The problem doesn't exist automatically but is being created by your own assumptions.
Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.
What are you talking about? You haven't read his books and claim he didn't give enough detail? He was something of a workaholic who didn't watch TV, didn't have a big social life, and worked and wrote all the time.
What do you want?
To create knowledge, including explanatory and non-instrumentalist knowledge. You come off like a borderline positivist to me, who has trouble with the notion that non-empirical stuff is even meaningful. (No offense intended, and I'm not assuming you actually are a positivist, but I'm not really seeing much difference yet.)
To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world.
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don't think Bayesianism addresses this well.
Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore "avoids" this difficulty?
Neither. You can and should do better!
Replies from: David_Allen↑ comment by David_Allen · 2011-04-07T16:16:38.139Z · LW(p) · GW(p)
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don't think Bayesianism addresses this well.
Given well-defined contexts and meanings for good and bad, I don't see why Bayesianism could not be effectively applied to moral problems.
Replies from: curi↑ comment by curi · 2011-04-07T18:40:28.194Z · LW(p) · GW(p)
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don't.
You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).
Replies from: JoshuaZ, David_Allen, David_Allen↑ comment by JoshuaZ · 2011-04-07T18:58:15.099Z · LW(p) · GW(p)
You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).
You've repeatedly claimed that the Popperian approach can somehow address moral issues. Despite requests you've shown no details of that claim, other than to say that you would do the same thing you would otherwise do, but with moral claims. So let's work through a specific moral issue. Can you take an example of a real moral issue that has been controversial historically (like, say, slavery or free speech) and show how the Popperian would approach it? A concrete worked-out example would be very helpful.
Replies from: curi↑ comment by curi · 2011-04-07T19:00:42.244Z · LW(p) · GW(p)
http://lesswrong.com/lw/552/reply_to_benelliott_about_popper_issues/3uv7
And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-07T19:10:36.434Z · LW(p) · GW(p)
And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.
Consider that you are replying to a statement I just said that all you've done is say that it would use the same methodologies. Given that, does this reply seem sufficient? Do I need to repeat my request for a worked example (which is not included in your link)?
↑ comment by David_Allen · 2011-04-07T20:26:10.159Z · LW(p) · GW(p)
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don't.
First of all, you shouldn't lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Bayes' theorem is an abstraction. If you don't have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn't use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.
This doesn't mean that Bayes' theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).
These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
Replies from: curi↑ comment by curi · 2011-04-07T20:41:46.188Z · LW(p) · GW(p)
First of all, you shouldn't lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Sorry. I have no idea who is who. Don't mind me.
This doesn't mean that Bayes' theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
The Popperian method is universal.
if Bayesianism is in some sense Turing complete then it can be used to do all of this
Well, umm, yes, but that's no help. My iMac is definitely Turing complete. It could run an AI. It could do whatever. But we don't know how to make it do that stuff. Epistemology should help us.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.
Example or details?
Replies from: David_Allen↑ comment by David_Allen · 2011-04-07T21:59:13.653Z · LW(p) · GW(p)
Sorry. I have no idea who is who. Don't mind me.
No problem, I'm just pointing out that there are other perspectives out here.
The Popperian method is universal.
Sure, in the sense that it is Turing complete; but that doesn't make it the most efficient approach for all cases. For example, I'm not going to use it to evaluate "2 + 3"; it is much more efficient for me to use the arithmetic abstraction.
But we don't know how to make it do that stuff. Epistemology should help us.
Agreed, it is one of the reasons that I am actively working on epistemology.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.
Example or details?
The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as "good" or "bad" based on example input.
Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures.
Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"
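A minimal sketch of the naive Bayes idea mentioned above; the "moral" training data and the single feature are invented for illustration:

```python
import math
from collections import defaultdict

def train(examples):
    """examples: list of (feature_dict, label) pairs."""
    label_counts = defaultdict(int)
    feature_counts = defaultdict(lambda: defaultdict(int))
    for features, label in examples:
        label_counts[label] += 1
        for item in features.items():
            feature_counts[label][item] += 1
    return label_counts, feature_counts

def classify(features, label_counts, feature_counts):
    total = sum(label_counts.values())
    scores = {}
    for label, count in label_counts.items():
        score = math.log(count / total)
        for item in features.items():
            # add-one smoothing over the two values of a binary feature
            score += math.log((feature_counts[label][item] + 1) / (count + 2))
        scores[label] = score
    return max(scores, key=scores.get)

data = [({"harms_someone": True}, "bad"), ({"harms_someone": False}, "good"),
        ({"harms_someone": True}, "bad"), ({"harms_someone": False}, "good")]
lc, fc = train(data)
print(classify({"harms_someone": True}, lc, fc))  # -> "bad"
```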
Replies from: curi↑ comment by curi · 2011-04-08T00:37:39.876Z · LW(p) · GW(p)
Sure, in the sense it is Turing complete;
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to -- which i think is all of them, but that doesn't matter to universality).
Not in the sense that it's Turing complete so you could, by a roundabout way and using whatever methods, do anything.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you're trying to get rigor from math. But Popper saved philosophy. (And most people didn't notice.) Example:
With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"
You have very limited ambitions. You're trying to focus on small questions b/c you think bigger ones, like "what is moral objectively?", are too hard and, since your math won't answer them, it's hopeless.
Replies from: David_Allen↑ comment by David_Allen · 2011-04-08T02:13:42.089Z · LW(p) · GW(p)
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to -- which i think is all of them, but that doesn't matter to universality).
Perhaps I don't understand some nuance of what you mean here. If you can explain it or link to something that explains this in detail I will read it.
But to respond to what I think you mean... If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems; that also means it is susceptible to the infinite regresses you dislike in "justificationist epistemologies"... i.e. the halting problem.
Also, just because it can be applied to all types of knowledge does not mean it is the best choice for all types of knowledge, or for all types of operations on that knowledge.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you're trying to get rigor from math. But Popper saved philosophy. (And most people didn't notice.) Example:
I would not describe my perspective that way; you may have forgotten that I am a third party in this argument. I think that there is a lot of historical junk in philosophy and that it is continuing to produce a lot of junk -- Popper didn't fix this and neither will Bayesianism, it is more of a people problem -- but philosophy has also produced and is producing a lot of interesting and good ideas.
I think one way we differ is that you see a distinct difference between math and philosophy and I see a wide gradient of abstractions for manipulating information. Another is that you think that there is something special about Popper's approach that allows it to rise above all other approaches in all cases, and I think that there are many approaches and that it is best to choose the approach based on the context.
With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"
You have very limited ambitions. You're trying to focus on small questions b/c you think bigger ones, like "what is moral objectively?", are too hard and, since your math won't answer them, it's hopeless.
This was a response to your request for an example; you read too much into it to assume it implies anything about my ambitions.
A question like "what is moral objectively?" is easy. Nothing is "moral objectively". Meaning is created within contexts of assessment; if you want to know if something is "moral" you must consider that question with a context that will perform the classification. Not all contexts will produce the same result and not all contexts will even support a meaning for the concept of "moral".
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-08T13:39:21.333Z · LW(p) · GW(p)
But to respond to what I think you mean... If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems;
Minor nitpick: at least capable of modeling any Turing machine, not Turing complete. For example, something that had access to some form of halting oracle would be able to do more than a Turing machine.
↑ comment by [deleted] · 2011-04-07T01:50:34.798Z · LW(p) · GW(p)
Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore "avoids" this difficulty?
Saying your epistemology has a "necessary flaw" is an admission of defeat, that it doesn't work. The "necessary flaw" is unavoidable if you are committed to the justificationist way of thinking. Popper saw that the whole idea of justification is wrong and he offered a different idea to replace it - an idea with no known flaws. You criticize Popper for being underspecified, yet he elaborated on his ideas in many books. And, furthermore, no amount of mathematical precision or formalism will paper over cracks in justificationist epistemologies.
Replies from: paulfchristiano, curi↑ comment by paulfchristiano · 2011-04-07T01:53:27.284Z · LW(p) · GW(p)
Saying your epistemology has a "necessary flaw" is an admission of defeat,
In this case, it's recognition of reality. I repeat that I would like to defer this conversation until we have something concrete to disagree about. Until then I don't care about that difference.
Replies from: None↑ comment by [deleted] · 2011-04-07T02:24:53.728Z · LW(p) · GW(p)
The "necessary flaw" arises because all justificationist epistemologies lead to infinite regress or circular arguments or appeals to authority (or even sillier things). That you think there is no alternative to justificationism and I don't is something concrete we disagree about.
Replies from: David_Allen↑ comment by David_Allen · 2011-04-07T16:02:37.342Z · LW(p) · GW(p)
Adding a reference for this comment: Münchhausen Trilemma.
↑ comment by curi · 2011-04-07T01:54:23.885Z · LW(p) · GW(p)
It's interesting how different Bayesians say different things. They don't seem to all agree with each other even about their basic claims. Sometimes Bayesianism is proved, other times it is acknowledged to have known flaws. Sometimes it may be completely compatible with Popper, other times it is dethroning Popper. It seems to me that perhaps Bayesianism is a bit underspecified. I wonder why they haven't sorted out these internal disputes.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-07T03:41:08.834Z · LW(p) · GW(p)
Sometimes Bayesianism is proved, other times it is acknowledged to have known flaws. Sometimes it may be completely compatible with Popper, other times it is dethroning Popper. It seems to me that perhaps Bayesianism is a bit underspecified. I wonder why they haven't sorted out these internal disputes.
There are disputes among the Bayesians. But you are confusing different issues. First, the presence of internal disputes about the borders of an idea is not a priori a problem with an idea that is in progress. The fact that evolutionary biologists disagree about how much neutral drift matters isn't a reason to reject evolution. (It is possible that I'm reading an unintended implication here.)
Moreover, most of what you are talking about here are not contradictions but failures to understand. That Bayesianism has flaws is a distinct claim from what someone means when they talk about something like Cox's theorem, which is the sort of result Bayesians have in mind when, as you put it, "Sometimes Bayesianism is proved" (which, incidentally, is a terribly unhelpful and vague way of discussing the point). The point of results like Cox's theorem is that if one attempts, under certain very weak assumptions, to formalize epistemology in a very broad way, one must end up with some form of Bayesianism. At the same time it is important to keep in mind that this isn't saying all that much. It doesn't, for example, say anything about what one's priors should be. Thus one has the classical disagreement between objective and subjective Bayesians based on what sort of priors to use (and within each of those there is further breakdown; LessWrong seems to mainly have objective Bayesians favoring some form of Occam prior, although just which form is not clear). Similarly, whether or not Bayesianism is compatible with Popper depends a lot on what one means by "Bayesianism", "compatible" and "Popper". Bayesianism is certainly not compatible with a naive-Popperian approach, which is what many are talking about when they say it is not compatible (and as you've already noted, Popper himself wasn't a naive Popperian). But some people use Popper to mean the idea that, given an interesting hypothesis, one should seek out experiments which would be likely to falsify the hypothesis if it is false (an idea that actually predates Popper), though what one means by "falsify" can be a problem.
↑ comment by Peterdjones · 2011-04-15T15:14:18.001Z · LW(p) · GW(p)
Why don't you fix the WP article?
↑ comment by paulfchristiano · 2011-04-07T01:12:55.903Z · LW(p) · GW(p)
Having read the website you linked to in its entirety, I think we should defer this discussion (as a community) until the next time you explain why someone's particular belief is wrong, at which point you will be forced to make an actual claim which can be rejected.
In particular, if you ever try to make a claim of the form "You should not believe X, because Bayesianism is wrong, and undesirable Y will happen if you act on this belief" then I would be interested in the resulting discussion. We could do the same thing now, I guess, if you want to make such a claim of some historical decision.
Edit: changed wording to be less of an ass.
Replies from: curi↑ comment by curi · 2011-04-07T01:24:01.936Z · LW(p) · GW(p)
In its entirety? Assuming you spent 40 minutes reading, 0 minutes delay before you saw my post, 0 minutes reading my post here, and 2:23 writing your reply, then you read at a speed of around 833 words per minute. That is very impressive. Where did you learn to do that? How can I learn to do that too?
Given that I do make claims on my website, I wonder why you don't pick one and point out something you think is wrong with it.
Replies from: paulfchristiano↑ comment by paulfchristiano · 2011-04-07T01:33:16.238Z · LW(p) · GW(p)
Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)
I did read the parts of your website that relate to the question at hand. I do skim at several hundred words per minute (in much more detail than was needed for this application), though I did not spend the entire time reading. Much of the content of the website (perfectly reasonably) is devoted to things not really germane to this discussion.
If you really want (because I am constitutively incapable of letting an argument on the internet go) you could point to a particular claim you make, of the form I asked for. My issue is not really that I have an objection to any of your arguments--it's that you seem to offer no concrete points where your epistemology leads to a different conclusion than Bayesianism, or in which Bayesianism will get you into trouble. I don't think this is necessarily a flaw with your website--presumably it was not designed first and foremost as a response to Bayesianism--but given this observation I would rather defer discussion until such a claim does come up and I can argue in a more concrete way.
To be clear, what I am looking for is a statement of the form: "Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology. Here is why my belief is more correct, and why your belief will do damage." Or whatever example it is you would like to use. Any example at all. Even an argument that Bayesian reasoning with the Solomonoff prior has been "wrong" where Popper would be clearly "right" at any historical point would be good enough to argue about.
Replies from: curi↑ comment by curi · 2011-04-07T01:47:06.417Z · LW(p) · GW(p)
> statement of the form: "Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology."
Do you assert that? It is wrong and has real-world consequences. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
While Fallible Ideas does not comment on Bayesian Epistemology directly, it takes a different approach. You do not find Bayesians advocating the same ways of thinking. They have a different (worse, IMO) emphasis.
I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren't because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn't mean we agree or have nothing to discuss.
> Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)
Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn't solve your problem, whereas my epistemology did solve mine.
Replies from: Desrtopa, paulfchristiano↑ comment by Desrtopa · 2011-04-07T01:58:08.683Z · LW(p) · GW(p)
> Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn't solve your problem, whereas my epistemology did solve mine.
It doesn't take Popperian epistemology to learn social fluency. I've learned to limit conflict and improve the productivity of my discussions, and I am (to the best of my ability) Bayesian in my epistemology.
If you want to credit a particular skill to your epistemology, you should first see whether it's more likely to arise among those who share your epistemology than those who don't.
Replies from: JoshuaZ, curi↑ comment by JoshuaZ · 2011-04-07T02:07:05.703Z · LW(p) · GW(p)
> If you want to credit a particular skill to your epistemology, you should first see whether it's more likely to arise among those who share your epistemology than those who don't.
That's a claim that only makes sense in certain epistemological systems...
Replies from: curi↑ comment by curi · 2011-04-07T02:09:13.742Z · LW(p) · GW(p)
I don't have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-07T02:36:12.119Z · LW(p) · GW(p)
> I don't have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.
Hmm? I'm not sure whom you mean by "we". If you mean that someone supporting a Popperian approach to epistemology would probably find this idea reasonable, then I agree with you (at least empirically, people claiming to support some form of Popperian approach seem ok with this sort of thing; that's not to say I understand how they think it is implied/ok in a Popperian framework).
↑ comment by curi · 2011-04-07T02:11:59.585Z · LW(p) · GW(p)
> If you want to credit a particular skill to your epistemology, you should first see whether it's more likely to arise among those who share your epistemology than those who don't.
I have considered that. Popperian epistemology helps with these issues more. I don't want to argue about that now because it is an advanced topic and you don't know enough about my epistemology to understand it (correct me if I'm wrong), but I thought the example could help make a point to the person I was speaking to.
Replies from: Desrtopa↑ comment by Desrtopa · 2011-04-07T02:15:42.668Z · LW(p) · GW(p)
If I don't understand your explanation and am interested in it, I'm prepared to do the research in order to understand it, but if you can only assert why your epistemology should result in better social learning and not demonstrate that it does so for people in general, I confess that I will probably not be interested enough to follow up.
I will note though, that stating the assumption that another does not understand, but leaving them free to correct you, strikes me as a markedly worse way to minimize conflict and aggression than asking if they have the familiarity necessary to understand the explanation.
Replies from: curi↑ comment by curi · 2011-04-07T02:25:58.545Z · LW(p) · GW(p)
You could begin by reading
http://fallibleideas.com/emotions
And the rest of the site. If you don't understand any connections between it and Popperian epistemology, feel free to ask.
I'm not asking you to be interested in this, but I do think you should have some interest in rival epistemologies.
Replies from: Desrtopa↑ comment by Desrtopa · 2011-04-07T03:00:48.149Z · LW(p) · GW(p)
I studied philosophy as part of a double major (which I eventually dropped because of the amount of confusion and sophistry I was being expected to humor), and my acquaintance with Popper, although not as deep as yours, I'm sure, precedes my acquaintance with Bayes. Although it may be that others who I have not read better presented and refined his ideas, Popper's philosophy did not particularly impress me, whereas the ideas presented by Bayesianism immediately struck me as deserving of further investigation. It's possible that I haven't given Popper his fair shakes, but it's not for lack of interest in other epistemologies that I've come to identify as Bayesian.
I wouldn't describe the link as unhelpful, exactly, but I also wouldn't say that it's among the best advice for controlling one's emotions that I've received (this was a process I put quite a bit of effort into learning, and I've received a fair amount), so I don't see how it functions as a demonstration of the superiority of Popperian epistemology.
Replies from: curi↑ comment by curi · 2011-04-07T03:05:54.713Z · LW(p) · GW(p)
You say Popper didn't impress you. Why not? Did you have any criticism of his ideas? Any substantive argument against them?
Do you have any criticism of the linked ideas? You just said it doesn't seem that good to you, but you didn't give any kind of substantive argument.
Replies from: Desrtopa↑ comment by Desrtopa · 2011-04-07T03:38:56.008Z · LW(p) · GW(p)
With regards to the link, it's simply that it's less in depth than other advice I've received. There are techniques that it doesn't cover in meaningful detail, like manipulation of cognitive dissonance (habitually behaving in certain ways to convince yourself to feel certain ways) or recognition of various cognitive biases which will alter our feelings. It's not that bad as an introduction, but it could do a better job opening up connections to specific techniques to practice or biases to be aware of.
Popper didn't impress me because it simply wasn't apparent to me that he was establishing any meaningful improvements to how we go about reasoning and gaining information. Critical rationalism appeared to me to be a way of looking at how we go about the pursuit of knowledge, but to quote Feynman, "Philosophy of science is about as useful to scientists as ornithology is to birds." It wasn't apparent to me that trying to become more Popperian should improve the work of scientists at all; indeed, in practice it is my observation that those who try to think of theories more in the light of the criticism they have withstood than their probability in light of the available evidence are more likely to make significant blunders.
Attempting to become more Bayesian in one's epistemology, on the other hand, had immediately apparent benefits with regards to conducting science well (which are discussed extensively on this site).
I had criticisms of Popper's arguments to offer, and could probably refresh my memory of them by revisiting his writings, but the deciding factor which kept me from bothering to read further was that, like other philosophers of science I had encountered, it simply wasn't apparent that he had anything useful to offer, whereas it was immediately clear that Bayesianism did.
Replies from: curi↑ comment by curi · 2011-04-07T03:47:39.012Z · LW(p) · GW(p)
Feynman meant normal philosophers of science. Including, I think, Bayesians. He didn't mean Popper, who he read and appreciated. Feynman himself engaged in philosophy of science, and published it. It's academic philosophers, of the dominant type, that he loathed.
> that those who try to think of theories more in the light of the criticism they have withstood than their probability in light of the available evidence
That's not really what Popperian epistemology is about. But also: the concept of evidence for theories is a mistake that doesn't actually make sense, as Popper explained. If you doubt this, do what no one else on this site has yet managed: tell me what "support" means (like in the phrase "supporting evidence") and tell me how support differs from consistency.
The biggest thing Popper has to offer is the solution to justificationism which has plagued almost everyone's thinking since Aristotle. You won't know quite what that is because it's an unconscious bias for most people. In short it is the idea that theories should be supported/justified/verified/proven, or whatever, whether probabilistically or not. A fraction of this is: he solved the problem of induction. Genuinely solved it, rather than simply giving up and accepting regress/foundations/circularly/whatever.
Replies from: Desrtopa, FAWS↑ comment by Desrtopa · 2011-04-07T03:59:01.599Z · LW(p) · GW(p)
> That's not really what Popperian epistemology is about. But also: the concept of evidence for theories is a mistake that doesn't actually make sense, as Popper explained. If you doubt this, do what no one else on this site has yet managed: tell me what "support" means (like in the phrase "supporting evidence") and tell me how support differs from consistency.
I've read his arguments for this, I simply wasn't convinced that accepting it in any way improved scientific conduct.
"Support" would be data in light of which the subjective likelihood of a hypothesis is increased. If consistency does not meaningfully differ from this with respect to how we respond to data, can you explain why it is is more practical to think about data in terms of consistency than support?
I'd also like to add that I do know what justificationism is, and your tendency to openly assume deficiencies in the knowledge of others is rather irritating. I normally wouldn't bother to remark upon it, but given that you posed a superior grasp of socially effective debate conduct as evidence of the strength of your epistemology, I feel the need to point out that I don't feel like you're meeting the standards of etiquette I would expect of most members of Less Wrong.
Replies from: curi↑ comment by curi · 2011-04-07T04:08:17.569Z · LW(p) · GW(p)
> I've read his arguments for this, I simply wasn't convinced that accepting it in any way improved scientific conduct.
Yet again you disagree with no substantive argument. If you don't have anything to say, why are you posting?
> can you explain why it is more practical to think about data in terms of consistency than support?
Well, consistency is good as far as it goes. If we see 10 white swans, we should reject "all swans are black" (yes, even this much depends on some other stuff). Consistency does the job without anything extraneous or misleading.
The support idea claims that evidence sometimes supports one of the ideas it is consistent with more than another. This isn't true, except in special cases which aren't important.
The way Popper improves on this is by noting that there are always many hypotheses consistent with the data. Saying their likelihood increases is pointless. It does not help deal with the problem of differentiating between them. Something else, not support, is needed. This leaves the concept of support with nothing useful to do, except be badly abused in sloppy arguments (I have in mind arguments I've seen elsewhere. Lots of them. What people do is they find some evidence, and some theory it is consistent with, and they say the theory is supported so now they have a strong argument or whatever. And they are totally selective about this. You try to tell them, "well, this other theory is also consistent with the data, so it's supported just as much, right?" and they say no, theirs fits the data better, so it's supported more. But you ask what the difference is, and they can't tell you, because there is no answer. The idea that a theory can fit the data better than another, when both are consistent with the data, is a mistake (again, there are some special cases that don't matter in practice).)
Replies from: Desrtopa↑ comment by Desrtopa · 2011-04-07T04:15:52.650Z · LW(p) · GW(p)
> The support idea claims that evidence sometimes supports one of the ideas it is consistent with more than another. This isn't true, except in special cases which aren't important.
Suppose I ask a woman if she has children. She says no.
This is supporting evidence for the hypothesis that she does not have children; it raises the likelihood from my perspective that she is childless.
It is entirely consistent with the hypothesis that she has children; she would simply have to be lying.
So it appears to me that in this case, whatever arguments you might make regarding induction, viewing the data in terms of consistency does not inform my behavior as well.
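To put made-up numbers on it (purely illustrative): suppose my prior that she is childless is 50%, a childless woman answers "no" with probability 0.99, and a mother lies and says "no" with probability 0.05.

```python
# Made-up numbers, purely to show the direction of the update.
p_childless = 0.5             # prior
p_no_given_childless = 0.99   # childless women almost always answer "no"
p_no_given_mother = 0.05      # a mother would have to be lying

p_no = (p_no_given_childless * p_childless
        + p_no_given_mother * (1 - p_childless))
print(p_no_given_childless * p_childless / p_no)  # ~0.95: "no" supports childless
```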
Replies from: curi↑ comment by curi · 2011-04-07T04:25:22.130Z · LW(p) · GW(p)
This is the standard story. It is nothing but an appeal to intuition (and/or unstated background knowledge, unstated explanations, unstated assumptions, etc). There is no argument for it and there never has been one.
Refuting this common mistake is something important Popper did.
Try reading your post again. You simply assumed that her not having children is more likely. That is not true from the example presented, without some unstated assumptions being added. There is no argument in your post. That makes it very difficult to argue against, because there's nothing to engage with.
It could go either way. You know it could go either way. You claim one way fits the data better, but you don't offer any rigorous guidelines (or anything else) for figuring out which way fits better. What are the rules to decide which consistent theories are more supported than others?
Replies from: Desrtopa↑ comment by Desrtopa · 2011-04-07T04:40:58.790Z · LW(p) · GW(p)
Of course it could go either way. But if I behaved in everyday life as if it were equally likely to go either way, I would be subjecting myself to disaster. For practical purposes it has always served me better to accept that certain hypotheses that are consistent with the available data are more probable than others, and while I cannot prove that this makes it more likely that it will continue to do so in the future, I'm willing to bet quite heavily that it will.
If Popper's epistemology does not lead to superior results to induction, and at best, only reduces to procedures that perform as well, then I do not see why I should regard his refutation of induction as important.
↑ comment by FAWS · 2011-04-07T04:13:12.025Z · LW(p) · GW(p)
> tell me what "support" means (like in the phrase "supporting evidence") and tell me how support differs from consistency.
Support is the same thing as the evidence being more consistent with that hypothesis than with the alternatives (P(E|H) > P(E|~H)).
Replies from: curi↑ comment by curi · 2011-04-07T04:17:51.677Z · LW(p) · GW(p)
What is "more consistent"?
Consistent = does not contradict. But you can't not-contradict more. It's a boolean issue.
Replies from: FAWS↑ comment by FAWS · 2011-04-07T04:27:28.610Z · LW(p) · GW(p)
Then you have your answer: support is non-boolean. I don't think a boolean concept of consistency of observations with anything makes sense, though (consistent would mean P(E|H) > 0, but observations never have a probability of 0 anyway, so every observation would be consistent with everything, or you'd need an arbitrary cut-off. P(observe black sheep|all sheep are white) > 0, but it is very small).
Replies from: curi↑ comment by curi · 2011-04-07T04:29:21.873Z · LW(p) · GW(p)
Some theories predict that some things won't happen (0 probability). I consider this kind of theory important.
You say I have my answer, but you have not answered. I don't think you've understood the problem. To try to repeat myself less, check out the discussion here, currently at the bottom:
http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3urr?context=3
Replies from: FAWS↑ comment by FAWS · 2011-04-07T04:44:48.946Z · LW(p) · GW(p)
> Some theories predict that some things won't happen (0 probability). I consider this kind of theory important.
But they don't predict that you won't hallucinate, or misread the experimental data, or whatever. Some things not happening doesn't mean some things won't be observed.
> You say I have my answer, but you have not answered.
You asked how support differs from consistency. Boolean vs. real number is a difference. Even if you arbitrarily decide that real numbers are not allowed and only booleans are, that doesn't make it inconsistent, on the part of those who use real numbers, to distinguish their real-valued notion of support from your boolean notion of consistency.
↑ comment by paulfchristiano · 2011-04-07T02:09:39.653Z · LW(p) · GW(p)
> Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference?
No. It provides an example of a way in which you are better than me. I am overwhelmingly confident that I can find ways in which I am better than you.
> Do you assert that? It is wrong and has real-world consequences. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
Could you explain how a Popperian disputes such an assertion? Through only my own fault, I can't listen to an mp3 right now.
My understanding is that anyone would make that argument in the same way: by providing evidence in the Bayesian sense, which would convince a Bayesian. What I am really asking for is a description of why your beliefs aren't the same as mine but better. Why is it that a Popperian disagrees with a Bayesian in this case? What argument do they accept that a Bayesian wouldn't? What is the corresponding calculation a Popperian does when he has to decide how to gamble with the lives of six billion people on an uncertain assertion?
> I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren't because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn't mean we agree or have nothing to discuss.
I agree that different ways of thinking can be better or worse even when they come to the same conclusions. You seem to be arguing that Bayesianism is wrong, which is a very different thing. At best, you seem to be claiming that trying to come up with probabilities is a bad idea. I don't yet understand exactly what you mean. Would you never take a bet? Would you never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesians' willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
I do not plan to continue this discussion except in the pursuit of an example about which we could actually argue productively.
Replies from: curi↑ comment by curi · 2011-04-07T02:46:51.481Z · LW(p) · GW(p)
> Could you explain how a Popperian disputes such an assertion? [50% probability of humanity surviving the next century]
e.g. by pointing out that whether we do or don't survive depends on human choices, which in turn depend on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has a built-in bias towards pessimism: because we can't make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all.
You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There's no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
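Here is a toy sketch of that loop (an illustration only; the conjectures and criticisms are hypothetical stand-ins, and in real inquiry both are generated creatively rather than enumerated from a fixed list):

```python
# Toy sketch of the conjecture-and-criticism loop described above.
conjectures = [
    "yes",
    "no",
    "yes, because nuclear plants are safe, clean, and efficient",
    "no, because of a conspiracy to hide the dangers",
]

def criticize(conjecture):
    """Return an explanation of a mistake in the conjecture, or None."""
    if conjecture in ("yes", "no"):
        return "not enough explanatory detail about why"
    if "conspiracy" in conjecture:
        return "general-purpose criticism of all conspiracy theories"
    return None  # no known criticism (for now; everything stays fallible)

surviving = [c for c in conjectures if criticize(c) is None]
if len(surviving) == 1:
    print("act on:", surviving[0])  # exactly one non-refuted theory
else:
    print("conjecture and criticize further:", surviving)
```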
> You seem to be arguing that Bayesianism is wrong, which is a very different thing.
I think it's wrong as an epistemology. For example because induction is wrong, and the notion of positive support is wrong. Of course Bayes' theorem is correct, and various math you guys have done is correct. I keep getting conflicting statements from people about whether Bayesianism conflicts with Popperism or not, and I don't want to speak for you guys, nor do I want to discourage anyone from finding the shared ideas or discourage them from learning from both.
> Would you never take a bet?
Bets are made on events, like which team wins a sports game. Probabilities are fine for events. Probabilities of the truth of theories are problematic (because, e.g., there is no way to make them non-arbitrary). And it's not something a fallibilist can bet on, because he accepts we never know the final truth for sure, so how are we to set up a decision procedure that decides who won the bet?
> Would you never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
We are not afraid of uncertainty. Popperian epistemology is fallibilist. It rejects certainty. Life is always uncertain. That does not imply probability is the right way to approach all types of uncertainty.
> This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesians' willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
Bayesian reasoning diverges when it says that ideas can be positively supported. We diverge because Popper questioned the concept of positive support, as I posted in the original text on this page, and which no one has answered yet. The criticism of positive support begins by considering what it is (you tell me) and how it differs from consistency (you tell me).
Replies from: jake987722, Larks↑ comment by jake987722 · 2011-04-07T03:24:11.830Z · LW(p) · GW(p)
> So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
Almost, but you seem to have left out the rather important detail of how to actually make the decision. Based on the process of criticizing conjectures you've described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared.
First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don't actually make a decision and the problem remains unsolved.
Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That's fine, but then it's not clear that you've done anything different from what a Bayesian would have done--you've simply avoided explicitly talking about things like probabilities and priors.
Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?
Replies from: curi↑ comment by curi · 2011-04-07T03:59:34.502Z · LW(p) · GW(p)
When you have exactly one non-refuted theory, you go with that.
The other cases are more complicated and difficult to understand.
Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede?
If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.
> [option 1] since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other.
But some are criticized and some aren't.
> [option 2] conjecture that best weathered the criticisms you were able to muster
But how is that to be judged?
No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here -- the English language is not well adapted to expressing these ideas. (In particular, the concept "uncriticized" is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.).
> Or is it something radically different from these two altogether?
Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.
Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won't have those mistakes, we learn and improve our knowledge.
Replies from: jake987722↑ comment by jake987722 · 2011-04-07T04:37:27.582Z · LW(p) · GW(p)
> If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology. That is, given what I understand about the set of ideas, it is not clear to me how we would go about making practical scientific decisions. With that said, I can't reasonably guarantee that I will not have later objections as well before we've even had the discussion!
So let me see if I'm understanding this correctly. What we are looking for is the one conjecture which appears to be completely impervious to any criticism that we can muster against it, given our current knowledge. Once we have found such a conjecture, we -- I don't want to say "assume that it's true," because that's probably not correct -- we behave as if it were true until it finally is criticized and, hopefully, replaced by a new conjecture. Is that basically right?
I'm not really seeing how this is fundamentally anti-justificationist. It seems to me that the Popperian epistemology still depends on a form of justification, but that it relies on a sort of boolean all-or-nothing justification rather than allowing graded degrees of justification. For example, when we say something like, "in order to make a decision, we need to have a guiding theory which is currently impervious to criticism" (my current understanding of Popper's idea, roughly illustrated), isn't this just another way of saying: "the fact that this theory is currently impervious to criticism is what justifies our reliance on it in making this decision?"
In short, isn't imperviousness to criticism a type of justification in itself?
Replies from: curi↑ comment by curi · 2011-04-07T05:02:40.598Z · LW(p) · GW(p)
> Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology.
OK then :-) Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
> Is that basically right?
That is the general idea (but incomplete).
The reason we behave as if it's true is that it's the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn't want to act on an idea that we (thought we) saw a mistake in, over one we don't think we see any mistake with -- we should use what (fallible) knowledge we have.
A justification is a reason a conjecture is good. Popperian epistemology basically has no such thing. There are no positive arguments, only negative. What we have instead of positive arguments is explanations. These are to help people understand an idea (what it says, what problem it is intended to solve, how it solves it, why they might like it, etc...), but they do not justify the theory, they play an advisory role (also note: they pretty much are the theory, they are the content that we care about in general).
One reason that not being criticized isn't a justification is that saying it is gets you a regress problem. So let's not say that! The other reason is: what would that be adding as compared with not saying it? It's not helpful (and if you give specific details/claims of how it is helpful, which are in line with the justificationist tradition, then I can give you specific criticisms of those).
Terminology isn't terribly important. David Deutsch used the word justification in his explanation of this in the dialog chapter of The Fabric of Reality (highly recommended). I don't like to use it. But the important thing is not to mean anything that causes a regress problem, or to expect justification to come from authority, or various other mistakes. If you want to take the Popperian conception of a good theory and label it "justified" it doesn't matter so much.
Replies from: jake987722↑ comment by jake987722 · 2011-04-07T05:43:49.011Z · LW(p) · GW(p)
> Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
I agree that the nested comment format is a little cumbersome (in fact, this is a bit of a complaint of mine about the LW format in general), but it's not clear that this discussion warrants an entirely new topic.
> Terminology isn't terribly important ... If you want to take the Popperian conception of a good theory and label it "justified" it doesn't matter so much.
Okay. So what is really at issue here is whether or not the Popperian conception of a good theory, whatever we call that, leads to regress problems similar to those experienced by "justificationist" systems.
It seems to me that it does! You claim that the particular feature of justificationist systems that leads to a regress is their reliance on positive arguments. Popper's system is said to avoid this issue because it denies positive arguments and instead only recognizes negative arguments, which circumvents the regress issue so long as we accept modus tollens. But I claim that Popper's system does in fact rely on positive arguments at least implicitly, and that this opens the system to regress problems. Let me illustrate.
According to Popper, we ought to act on whatever theory we have that has not been falsified. But that itself represents a positive argument in favor of any non-falsified theory! We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question--but as you can see, the regress has begun.
Replies from: curi, curi↑ comment by curi · 2011-04-07T06:42:57.466Z · LW(p) · GW(p)
I think it's a big topic. I began answering your question here:
http://lesswrong.com/r/discussion/lw/551/popperian_decision_making/
↑ comment by curi · 2011-04-07T06:33:37.818Z · LW(p) · GW(p)
> We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question--but as you can see, the regress has begun.
No regress has begun. I already answered why:
> The reason we behave as if it's true is that it's the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn't want to act on an idea that we (thought we) saw a mistake in, over one we don't think we see any mistake with -- we should use what (fallible) knowledge we have.
Try to regress me.
It is possible, if you want, to create a regress of some kind which isn't the same one and isn't important. The crucial issue is: are the questions that continue the regress any good? Do they have some kind of valid point to them? If not, then I won't regard it as a real regress problem of the same type. You'll probably wonder how that's evaluated, but, well, it's not such a big deal. We'll quickly get to the point where your attempts to create a regress look silly to you. That's different from the regresses inductivists face, where it's the person trying to defend induction who runs out of stuff to say.
↑ comment by Larks · 2011-04-08T01:03:10.962Z · LW(p) · GW(p)
> And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge.
You're equivocating between "knowing exactly the contents of the new knowledge", which may be impossible for the reason you describe, and "knowing some things about the effect of the new knowledge", which we can do. As Eliezer said, I may not know which move Kasparov will make, but I know he will win.
↑ comment by timtyler · 2011-04-07T12:48:52.332Z · LW(p) · GW(p)
> what you're doing here is conflating Bayes' theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes' theorem to epistemological problems, rather than to the math behind betting).
That's because to a Bayesian, these things are the same thing. Epistemology is all about probability - and vice versa. Bayes's theorem includes induction and confirmation. You can't accept Bayes's theorem and reject induction without crazy inconsistency - and Bayes's theorem is just the math of probability theory.
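The update rule at issue is one line of that math:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad \text{hence } P(H \mid E) > P(H) \iff P(E \mid H) > P(E).$$

Confirmation (evidence raising the probability of a hypothesis) falls straight out of the theorem.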
Replies from: None↑ comment by [deleted] · 2011-04-07T13:05:19.937Z · LW(p) · GW(p)
If I understand correctly, I think curi is saying that there's no reason for probability and epistemology to be the same thing. That said, I don't entirely understand his/her argument in this thread, as some of the criticisms he/she mentions are vague. For example, what are these "epistemological problems" that Popper solves but Bayes doesn't?
comment by JoshuaZ · 2011-04-07T01:00:33.183Z · LW(p) · GW(p)
There's an associated problem here that may be getting ignored: Popper isn't a terribly good writer. "The Logic of Scientific Discovery" was one of the first phil-sci books I ever read and it almost turned me off of phil-sci. This is in contrast for example with Lakatos or Kuhn, who are very readable. Some of the difficulty with reading Popper and understanding his viewpoints is that he's just tough to read.
That said, I think that chapter 3 of that book makes clear that Popper's notion of falsification is more subtle than what I would call "naive Popperism". But Popper never fully gave an explanation of how to distinguish between strict falsification theory and his notions.
There's an associated important issue: many people claim to support naive Popperism as an epistemological position, either as a demarcation between science and non-science or as a general epistemological approach. In so far as both are somewhat popular viewpoints (especially among scientists), responding to and explaining what is wrong with that approach is important, even as one should acknowledge that Popper's own views were arguably more nuanced.
Replies from: curi↑ comment by curi · 2011-04-07T03:03:25.857Z · LW(p) · GW(p)
I do not find Popper hard to read.
> Popper never fully gave an explanation of how to distinguish between strict falsification theory and his notions.
Did you read his later books? He does explain his position. One distinguishing difference is that Popper is not a justificationist and they are. Tell me if you don't know what that means.
comment by benelliott · 2011-04-07T06:29:42.823Z · LW(p) · GW(p)
I gave a description of how a Bayesian sees the difference between "X supports Y" and "X is consistent with Y" in our previous discussion. I don't know if you saw it; you haven't responded to it and you aren't acting like you accepted it, so I'll give it again here:
"X is consistent with Y" is not really a Bayesian way of putting things, I can see two ways of interpreting it. One is as P(X&Y) > 0, meaning it is at least theoretically possible that both X and Y are true. The other is that P(X|Y) is reasonably large, i.e. that X is plausible if we assume Y.
"X supports Y" means P(Y|X) > P(Y), X supports Y if and only if Y becomes more plausible when we learn of X. Bayes tells us that this is equivalent to P(X|Y) > P(X), i.e. if Y would suggest that X is more likely that we might think otherwise then X is support of Y.
Suppose we make X the statement "the first swan I see today is white" and Y the statement "all swans are white". P(X|Y) is very close to 1, P(X|~Y) is less than 1 so P(X|Y) > P(X), so seeing a white swan offers support for the view that all swans are white. Very, very weak support, but support nonetheless.
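To put rough numbers on that (made up purely for illustration):

```python
# Made-up numbers, purely to show the direction and size of the update.
p_Y = 0.01              # prior for Y: "all swans are white"
p_X_given_Y = 0.99      # P(first swan I see is white | Y)
p_X_given_notY = 0.90   # P(first swan I see is white | ~Y)

p_X = p_X_given_Y * p_Y + p_X_given_notY * (1 - p_Y)
print(p_X_given_Y * p_Y / p_X)  # ~0.011 > 0.01: weak support for Y
```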
For a Popperian definition -- you guys are allowed to criticise something, right? In that case could we say that support for a proposition is logically equivalent to a criticism of its negation?
The whole 'there is no positive support' thing seems like an overreaction to the Cartesian 'I can prove ideas with certainty' thing. I agree that certain support is a flawed concept, but you seem to be throwing the baby out with the bathwater by saying uncertain support is guilty by association and should be rejected as well.
Also, I'm a little incredulous here: do you really reject the policeman's syllogism? Would you say he is wrong to chase the man down the road? If you encountered such a person, would you genuinely treat them as you would treat anyone else?
Replies from: curi↑ comment by curi · 2011-04-07T07:07:21.618Z · LW(p) · GW(p)
I missed your comment. I found it now. I will reply there.
http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3uld?context=1#3uld
> could we say that support for a proposition is logically equivalent to a criticism of its negation?
No. The negation of a universal theory is not universal, and the negation of an explanatory theory is not explanatory. So, the interesting theories would still be criticism only, and the uninteresting ones (e.g. "there is a cat") support only. And the meaning of "support" is rather circumscribed there.
If you want to say theories of the type "the following explanation isn't true: ..." get "supported", it doesn't contribute anything useful to epistemology. The support idea, as it is normally conceived, is still wrong, and this rescues none of the substance.
The other issue is that criticism isn't the same kind of thing as support. It's not in the same category of concept.
Yes, I really reject the policeman's syllogism, in the sense that I don't think the argument in the book is any good. There are other arguments which are OK for reaching the conclusion, but they rely on things the book left unstated, e.g. background knowledge and context. Without adding anything at all (no cultural biases or assumptions or hidden claims, and even doing our best not to use the biases and assumptions built into the English language), there isn't any way to guess what's more likely.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-15T15:08:22.090Z · LW(p) · GW(p)
If the policeman's argument is only valid in the light of background assumptions, why would they need to be stated? Surely we would only need to make the same tacit assumptions to agree with the conclusions. Everyday reasoning differs from formal logic in various ways, mainly because it takes shortcuts. I don't think that invalidates it.
comment by JGWeissman · 2011-04-07T06:19:50.739Z · LW(p) · GW(p)
A huge strength of Bayesian epistemology is that it tells me how to program computers to form accurate beliefs. Has Popperian epistemology guided the development of any computer program as awesome as Gmail's spam filter?
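(Gmail's internals are proprietary, but the textbook version of the idea is a naive Bayes classifier -- a rough sketch with hypothetical training data:)

```python
from collections import Counter

# Minimal naive Bayes spam filter sketch: the textbook idea, not Gmail's
# actual implementation. The training messages below are made up.
spam = ["buy cheap pills now", "cheap pills cheap"]
ham = ["meeting notes attached", "lunch now"]

spam_counts = Counter(w for m in spam for w in m.split())
ham_counts = Counter(w for m in ham for w in m.split())
p_spam = len(spam) / (len(spam) + len(ham))

def p_spam_given(message, k=1.0):
    """Posterior P(spam | words), with Laplace smoothing constant k."""
    odds = p_spam / (1 - p_spam)
    for w in message.split():
        p_w_spam = (spam_counts[w] + k) / (sum(spam_counts.values()) + 2 * k)
        p_w_ham = (ham_counts[w] + k) / (sum(ham_counts.values()) + 2 * k)
        odds *= p_w_spam / p_w_ham  # multiply in each word's likelihood ratio
    return odds / (1 + odds)

print(p_spam_given("cheap pills"))    # high
print(p_spam_given("meeting lunch"))  # low
```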
Replies from: curi↑ comment by curi · 2011-04-07T06:59:03.195Z · LW(p) · GW(p)
Bayesian epistemology didn't do that. Bayes' theorem did. See the difference?
Replies from: JGWeissman↑ comment by JGWeissman · 2011-04-07T16:40:10.924Z · LW(p) · GW(p)
> Bayesian epistemology didn't do that. Bayes' theorem did.
Bayes' theorem is part of probability theory. Bayesian epistemology essentially says to take probability theory seriously as a normative description of degrees of belief.
If you don't buy that and really want to split the hair, then I am willing to modify my question to: Has the math behind Popperian epistemology guided the development of any computer program as awesome as Gmail's spam filter? (Is there math behind Popperian epistemology?)
Replies from: curi↑ comment by curi · 2011-04-07T17:59:18.783Z · LW(p) · GW(p)
gmail's spam filter does not have degrees of belief or belief.
It has things which you could call by those words if you really wanted to. But it wouldn't make them into the same things those words mean when referring to people.
Replies from: JGWeissman, Alicorn↑ comment by JGWeissman · 2011-04-07T18:14:51.791Z · LW(p) · GW(p)
> But it wouldn't make them into the same things those words mean when referring to people.
I want the program to find the correct belief, and then take good actions based on that correct belief. I don't care if lacks the conscious experience of believing.
You are disputing definitions and ignoring my actual question. Your next reply should answer the question, or admit that you do not know of an answer.
↑ comment by Alicorn · 2011-04-07T18:12:49.778Z · LW(p) · GW(p)
> gmail's spam filter does not have degrees of belief or belief.
> It has things which you could call by those words if you really wanted to. But it wouldn't make them into the same things those words mean when referring to people.
Augh, this reminded me of a quote that I can't seem to find based on my tentative memory of its component words... it was something to the effect that we anthropomorphize computers and talk about them "knowing" things or "communicating" with each other, and some people think that's wrong and they don't really do those things, and the quote-ee was of the opinion that computers were clarifying what we meant by those concepts all along. Anybody know what I'm talking about?
Replies from: curi↑ comment by curi · 2011-04-07T18:37:10.717Z · LW(p) · GW(p)
To be clear, I think computers can do those things and AIs will, and that will help clarify the concepts a lot.
But I don't think that Microsoft Word does it. Nor any game "AI" today. Nor gmail's spam filter, which just mindlessly does math.
comment by David_Gerard · 2011-04-07T12:15:23.865Z · LW(p) · GW(p)
It has occurred to me before that the lack of a proper explanation on LessWrong of Bayesian epistemology (and not just "Here's Bayes' theorem and how it works, with a neat Java applet") is a serious gap. I've been reduced to linking the Stanford Encyclopedia of Philosophy article, which is really not well written at all.
It is also clear from the comments on this post that people are talking about it without citable sources, and are downvoting as a mark of disagreement rather than anything else. This is bad as it directly discourages thought or engagement on the topic from those trying to disagree in good faith, as curi is here.
Is there a decent explanation of Bayesian epistemology per se (not the theorem, the epistemology) that doesn't start by talking about Popper or something else, that the Bayesian epistemology advocates here could link to? This would lead to a much more productive discussion, as everyone might at least start on approximately the same page.
Replies from: benelliott↑ comment by benelliott · 2011-04-07T12:40:43.416Z · LW(p) · GW(p)
I don't know if these are what you're looking for but:
Probability Theory: The Logic of Science by Jaynes spends its first chapter explaining why we need a 'calculus of plausibility' and what such a calculus should hope to achieve. The rest of the book is mostly about setting it up and showing what it can do. (The link does not contain the whole book, only the first few chapters; you may need to buy or borrow it to get the rest.)
Yudkowsky's Technical explanation, which assumes the reader is already familiar with the theorem, explains some of its implications for scientific thinking in general.
Replies from: David_Gerard↑ comment by David_Gerard · 2011-04-07T13:29:13.531Z · LW(p) · GW(p)
See here for what I see the absence of. There's a hole that needs filling here.
comment by [deleted] · 2011-04-07T00:25:14.245Z · LW(p) · GW(p)
The naturalist philosopher Peter Godfrey-Smith said this of Popper's position:
[F]or Popper, it is never possible to confirm or establish a theory by showing its agreement with observations. Confirmation is a myth. The only thing an observational test can do is to show that a theory is false...Popper, like Hume, was an inductive skeptic, and Popper was skeptical about all forms of confirmation and support other than deductive logic itself...This position, that we can never be completely certain about factual issues, is often known as fallibilism...According to Popper, we should always retain a tentative attitude towards our theories, no matter how successful they have been in the past...[a]ll we can do is try out one theory after another. A theory that we have failed to falsify up till now might, in fact, be true. But if so, we will never know this or even have reason to increase our confidence.
(From Theory and Reality, pp. 59-61.) Is this not an accurate description? You seem to think Popper didn't believe in definitive falsification, but this doesn't seem to be a universally accepted interpretation. Note also that Godfrey-Smith does refer to Popper's position as fallibilism, so he is not being "unscholarly." Though Popper may have held the position that falsification can't be perfectly certain, he definitely didn't take this idea too seriously because his description of science as a process (step one: come up with conjectures; step two: falsify them) makes use of falsification by experiment.
I think the answer to your overarching question can be found here. If we know that certain events are more probable given that certain other events happened, i.e. conditional probability, we can make inferences about the future.
Replies from: curi, falenas108↑ comment by curi · 2011-04-07T00:44:42.480Z · LW(p) · GW(p)
> Is this not an accurate description?
No. To start with, it's extremely incomplete. It doesn't really discuss what Popper's position is. It just makes a few scattered statements which do not explain what Popper is about.
The word "show" is ambiguous in the phrase "show that a theory is false". To a Popperian, equivocation over the issue of what is meant there is an important issue. It's ambiguous between "show definitively" and "show fallibly".
The idea that we can show a theory is false by an experimental test (even fallibly) is also, strictly, false, as Popper explained in LScD. When you reach a contradiction, something in the whole system is false. It could be an idea you had about how to measure what you wanted to measure. There's many possibilities.
> You seem to think Popper didn't believe in definitive falsification, but this doesn't seem to be a universally accepted interpretation.
It's right there in LScD on page 56. I think it's in most of his other books too. I am familiar with the field and know of no competent Popper scholars who say otherwise.
Anyone publishing to the contrary is simply incompetent, or believed low-quality secondary sources without fact-checking them.
> Though Popper may have held the position that falsification can't be perfectly certain, he definitely didn't take this idea too seriously because his description of science as a process (step one: come up with conjectures; step two: falsify them) makes use of falsification by experiment.
You have misinterpreted when you took "falsify them" to mean "falsify them with certainty". Popper is a fallibilist.
> If we know that certain events are more probable given that certain other events happened
This does not even attempt to address important problems in epistemology such as how explanatory or philosophical knowledge is created.
Replies from: None↑ comment by [deleted] · 2011-04-07T01:07:57.514Z · LW(p) · GW(p)
I'll agree that Godfrey-Smith's definition is incomplete, but I don't think it really matters for the purpose of this discussion: I've already said I agree that Popper did not believe in certain confirmation, and this seems to be your main problem with this quote and with the ones other people gave. You wrote:
> You have misinterpreted when you took "falsify them" to mean "falsify them with certainty". Popper is a fallibilist.
No, that is not what I meant at all. What I meant was, Popper was content with the fact that experimental evidence can say that something is probably false. If he wasn't, he wouldn't have included this in his view of science as a process. So even though Popper was a fallibilist, he thought that when an experimental result argued against a hypothesis, it was good enough for science.
Next:
> The idea that we can show a theory is false by an experimental test (even fallibly) is also, strictly, false, as Popper explained in LScD. When you reach a contradiction, something in the whole system is false. It could be an idea you had about how to measure what you wanted to measure. There's many possibilities.
Yes, this is the old "underdetermination of theory by data" problem, which Solomonoff Induction solves--see the coinflipping example here.
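A toy version of the idea (real Solomonoff induction weights programs by length and is uncomputable; here the description lengths are simply stipulated for illustration):

```python
# Toy Solomonoff-style prior: hypothetical description lengths, made-up data.
data = [1, 1, 1, 1]  # four coin flips, all heads

hypotheses = [
    # (name, stipulated description length in bits, P(data | hypothesis))
    ("always heads", 5, 1.0),
    ("fair coin", 6, 0.5 ** len(data)),
    ("heads 4 times, then tails forever", 20, 1.0),
]

# Prior ~ 2^-length. The first and third hypotheses fit the data equally
# well but differ about the unknown; the prior is what separates them.
posterior = {name: 2.0 ** -bits * like for name, bits, like in hypotheses}
total = sum(posterior.values())
for name, p in posterior.items():
    print(name, p / total)
```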
Moving on, you wrote:
> This does not even attempt to address important problems in epistemology such as how explanatory or philosophical knowledge is created.
Would you mind elaborating on this? What specific problems are you referring to here?
Replies from: curi↑ comment by curi · 2011-04-07T01:37:39.702Z · LW(p) · GW(p)
> Popper was content with the fact that experimental evidence can say that something is probably false
That is not Popper's position. That is not even close. In various passages he explicitly denies it, with phrases like "not certain or probable". To Popper, the claims that the evidence tells us something is certainly true, or probably true, are cousins which share an underlying mistake. You're assuming Popper would agree with you about probability without reading any of his passages on probability, in which he, well, doesn't.
Arguing what books say with people who haven't read them gets old fast. So how about you just imagine a hypothetical person who had the views I attribute to Popper and discuss that?
> Would you mind elaborating on this? What specific problems are you referring to here?
For example, the answers to all questions that have a "why" in them. E.g. why is the Earth roughly spherical? Statements with "because" (sometimes implied) are a pretty accurate way to find explanations, e.g. "because gravity is a symmetrical force in all directions". Another example is all of moral philosophy. Another example is epistemology itself, which is a philosophy, not an empirical field.
> Yes, this is the old "underdetermination of theory by data" problem
Yes
> Which Solomonoff Induction solves--see the coinflipping example here.
This does not solve the problem to my satisfaction. It orders theories which make identical predictions (about all our data, but not about the unknown) and then lets you differentiate by that order. But isn't that ordering arbitrary? It's just not true that short and simple theories are always best; sometimes the truth is complicated.
Replies from: jimrandomh, None↑ comment by jimrandomh · 2011-04-07T01:48:41.200Z · LW(p) · GW(p)
> For example, the answers to all questions that have a "why" in them. E.g. why is the Earth roughly spherical? Statements with "because" (sometimes implied) are a pretty accurate way to find explanations, e.g. "because gravity is a symmetrical force in all directions". Another example is all of moral philosophy. Another example is epistemology itself, which is a philosophy, not an empirical field.
For a formal mathematical discussion of these sorts of problems, read Causality by Judea Pearl. He reduces cause to a combination of conditional independence and ordering, and from this he defines algorithms for discovering causal models from data, predicting the effect of interventions and computing counterfactuals.
Replies from: curi↑ comment by curi · 2011-04-07T01:51:03.949Z · LW(p) · GW(p)
Could you give a short statement of the main ideas? How can morality be reduced to math? Or could you say something to persuade me that that book will address the issues in a way I won't think misses the point? (E.g. by showing you understand what I think the point is; otherwise I won't expect you to be able to judge if it misses the point in the way I would.)
Replies from: jimrandomh↑ comment by jimrandomh · 2011-04-07T02:01:00.049Z · LW(p) · GW(p)
Sorry, I over-quoted there; Pearl only discusses causality, and a little bit of epistemology, but he doesn't talk about moral philosophy at all.
His book is all about causal models, which are directed graphs in which each vertex represents a variable and each edge represents a conditional dependence between variables. He shows that the properties of these graphs reproduce what we intuitively think of as "cause and effect", defines algorithms for building them from data and operating on them, and analyzes the circumstances under which causality can and can't be inferred from the data.
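(A minimal sketch of the kind of object being described, with invented variables and numbers; Pearl's machinery for discovery, interventions, and counterfactuals goes far beyond this, but the graph-plus-conditional-probabilities structure is the core.)

```python
# Minimal sketch of a causal model: a directed graph
# Rain -> WetGrass <- Sprinkler, with a probability for each variable
# given its parents. All variables and numbers are invented.
import random

def sample(do_sprinkler=None):
    rain = random.random() < 0.2
    # An intervention do(Sprinkler = x) replaces the variable's own
    # mechanism with the forced value; mere observation would not do this.
    if do_sprinkler is None:
        sprinkler = random.random() < 0.3
    else:
        sprinkler = do_sprinkler
    wet = rain or sprinkler or random.random() < 0.05
    return rain, sprinkler, wet

observational = [sample() for _ in range(100_000)]
intervened = [sample(do_sprinkler=True) for _ in range(100_000)]
print("P(WetGrass)                      ~", sum(s[2] for s in observational) / 100_000)
print("P(WetGrass | do(Sprinkler=True)) ~", sum(s[2] for s in intervened) / 100_000)
```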
Replies from: curi↑ comment by curi · 2011-04-07T02:28:44.440Z · LW(p) · GW(p)
I don't understand the relevance.
Replies from: jimrandomh↑ comment by jimrandomh · 2011-04-07T02:39:41.387Z · LW(p) · GW(p)
Your quote seemed to be saying that Bayesianism couldn't handle why/because questions, but Popperian philosophy could. I mentioned Pearl as a treatment of that class of question from a Bayes-compatible perspective.
Replies from: curi↑ comment by curi · 2011-04-07T02:54:20.145Z · LW(p) · GW(p)
Causality isn't explanation. X caused Y isn't the issue I was talking about.
For example, the statement "Murder is bad because it is illiberal" is an explanation of why it is bad. It is not a statement about causality.
You may say that "illiberal" is a shortcut for various other ideas. And you may claim that those eventually reduce away to causal issues. But that would be reductionism. We do not accept that high-level concepts are a mistake or that emergence isn't important.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-07T03:02:20.048Z · LW(p) · GW(p)
Huh? It may be because I haven't read Logic of Scientific Discovery in a long time, but as far as I remember/can tell, Popper doesn't care about moral whys like "why is murder bad" at all. That seems to be an issue generally independent of both Bayesian and Popperian epistemology. One could be a Bayesian and be a utilitarian, or a virtue ethicist, or some form of deontologist. What am I missing?
Replies from: curi↑ comment by curi · 2011-04-07T03:09:00.808Z · LW(p) · GW(p)
Huh? It may be because I haven't read Logic of Scientific Discovery in a long time, but as far as I remember/can tell, Popper doesn't care about moral whys like "why is murder bad" at all.
He doesn't discuss them in LScD (as far as I remember). He does elsewhere, e.g. in The World of Parmenides. Whether he published moral arguments or not, his epistemology applies to them and works with them -- it is general purpose.
Epistemology is about how we get knowledge. Any epistemology which can't deal with entire categories of knowledge has a big problem. It would mean a second epistemology would be needed for that other category of knowledge. And that would raise questions like: if this second one works where the first failed, why not use it for everything?
Popper's method does not rely on only empirical criticism but also allows for all types of philosophical criticism. So it's not restricted to only empirical issues.
Replies from: ShardPhoenix↑ comment by ShardPhoenix · 2011-04-07T04:38:13.278Z · LW(p) · GW(p)
You seem to be assuming that "morality" is a fact about the universe. Most people here think it's a fact about human minds.
(ie we aren't moral realists, at least not in the sense that a religious person is).
Replies from: curi↑ comment by curi · 2011-04-07T04:40:28.243Z · LW(p) · GW(p)
Yes, morality is objective.
I don't want to argue terminology.
There are objective facts about how to live; call them what you will. Or maybe you'll say there aren't. If there aren't, then it's not objectively wrong to be a mass murderer. Do you really want to go there, into full-blown relativism and subjectivism?
Replies from: ShardPhoenix↑ comment by ShardPhoenix · 2011-04-07T04:42:47.813Z · LW(p) · GW(p)
Well, that's just like, your opinion, man.
Seriously: Morality is in the brain. Murder is "wrong" because I, and people sufficiently similar to me, don't like it. There's nothing more objective about it than any of my other opinions and desires. If you can't even agree on this, then coming here and arguing is hopeless - you might as well be a Christian and try to tell us to believe in God.
Replies from: zaph↑ comment by zaph · 2011-04-07T14:32:45.801Z · LW(p) · GW(p)
Well stated. And I would further add that there are issues with significant minority interests that staunchly disagree with majority opinion. Take the debates on homosexual marriage or abortion. The various sides have such different viewpoints that there isn't a common ground where any agreeably objective position can be reached. The "we all agree mass murder is wrong" line is a cop-out, because it implies all moral questions are that black and white. And even then, if it's such a universal moral, why does it happen in the first place? In the brain-based morality model, I can say Dennis Rader's just a substantially different brain. With universal morality, you're stuck with the problem of people knowing something is wrong, but doing it anyway.
↑ comment by [deleted] · 2011-04-07T01:58:09.241Z · LW(p) · GW(p)
Actually, one of the reasons I stood by this interpretation of Popper was one of the quotes posted in one of the other threads here:
"the falsificationists or fallibilists say, roughly speaking, that what cannot (at present) in principle be overthrown by criticism is (at present) unworthy of being seriously considered; while what can in principle be so overthrown and yet resists all our critical efforts to do so may quite possibly be false, but is at any rate not unworthy of being seriously considered and perhaps even of being believed"
Which is apparently from Conjectures and Refutations, pg 309. Regardless, I don't care about this argument overmuch, since we seem to have moved on to some other points.
[Solomonoff Induction] does not solve the problem to my satisfaction. It orders theories which make identical predictions (about all our data, but not about the unknown) and then lets you differentiate by that order. But isn't that ordering arbitrary? It's just not true that short and simple theories are always best; sometimes the truth is complicated.
Remember that in Bayesian epistemology, probabilities represent our state of knowledge, so as you pointed out, the simplest hypothesis that fits the data so far may not be the true one because we haven't seen all of the data. But it is necessarily our best guess because of the conjunction rule.
Replies from: curi, JoshuaZ↑ comment by curi · 2011-04-07T02:22:42.206Z · LW(p) · GW(p)
There are so many problems here that it's hard to choose a starting point.
1) The data set you are using is biased (it is selective; all observation is selective).
2) There is no such thing as "raw data" -- all your observations are interpreted, and your interpretations may be mistaken.
3) What do you mean by "best guess"? One meaning is "most likely to be the final, perfect truth"; a different meaning is "most useful now".
4) You say "probabilities represent our state of knowledge". However, there are infinitely many theories with the same probability. Or there would be, except for your Solomonoff prior about simpler theories having higher probability. So the important part of "state of our knowledge" as represented by these probabilities consists mostly of the Solomonoff prior and nothing else, because it, and it alone, is dealing with the hard problem of epistemology (dealing with theories which make identical predictions about everything we have data for).
5) You can have infinite data and still get all non-empirical issues wrong.
6) Regarding the conjunction rule, there is a miscommunication; this does not address the point I was trying to make. I think you have a premise like "all more complicated theories are merely conjunctions of simpler theories". But that is to conceive of theories very differently than Popperians do, in what we see as a limited and narrow way. To begin to address these issues, let's consider what's better: a bald assertion, or an assertion plus an explanation of why it is correct? If you want "most likely to happen to be the perfect, final truth", you are better off with only the unargued assertion (since any argument may be mistaken). But if you want to learn about the world, you are better off not relying on unargued assertions.
↑ comment by JoshuaZ · 2011-04-07T02:42:59.481Z · LW(p) · GW(p)
Remember that in Bayesian epistemology, probabilities represent our state of knowledge, so as you pointed out, the simplest hypothesis that fits the data so far may not be the true one because we haven't seen all of the data. But it is necessarily our best guess because of the conjunction rule.
You are going to have to expand on this. I don't see how the conjunction rule implies that simpler hypotheses are in general more probable. This is true if we have two hypotheses where one is X and the other is "X and Y" but that's not how people generally apply this sort of thing. For example, I might have a sequence of numbers that for the first 10,000 terms has the nth term as the nth prime number. One hypothesis is that the nth term is always the nth prime number. But I could have as another hypothesis some high degree polynomial that matches the first 10,000 primes. That's clearly more complicated. But one can't use conjunction to argue that it is less likely.
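(A scaled-down sketch of this point: fit the interpolating polynomial through the first 6 primes instead of 10,000, so the arithmetic stays exact and readable. Both hypotheses fit all the "data"; they disagree only about the unseen terms.)

```python
# Scaled-down version of the polynomial hypothesis: fit the unique degree-5
# polynomial through the first 6 primes, then compare it to the real
# sequence. Exact rational arithmetic via Fraction; no libraries needed.
from fractions import Fraction

primes = [2, 3, 5, 7, 11, 13, 17]              # ground truth for comparison
points = list(enumerate(primes[:6], start=1))  # the "data": first 6 primes

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through points at x."""
    total = Fraction(0)
    for xi, yi in points:
        term = Fraction(yi)
        for xj, _ in points:
            if xj != xi:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

for n in range(1, 8):
    print(n, "prime:", primes[n - 1], "polynomial:", lagrange_eval(points, n))
# Agrees on all six data points, then gives -6 at n = 7 where the real
# sequence gives 17: identical predictions about the data, wildly
# different predictions about the unknown.
```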
Replies from: None↑ comment by [deleted] · 2011-04-07T04:52:44.714Z · LW(p) · GW(p)
Imagine that I have some set of propositions, A through Z, and I don't know the probabilities of any of these. Now let's say I'm using these propositions to explain some experimental result--since I would have uniform priors for A through Z, it follows that an explanation like "M did it" is more probable than "A and B did it," which in turn is more probable than "G and P and H did it."
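(The arithmetic behind this, under the added assumption -- not stated above -- that the propositions are independent with a common prior p. Without independence one still gets P(A and B) <= P(A) from the conjunction rule, but the strict ordering needs the extra assumption.)

```python
# The arithmetic behind the comment above, assuming independent
# propositions with a common prior p (an idealization).
p = 0.1
print(f"P(M)             = {p:.3f}")       # 0.100
print(f"P(A and B)       = {p * p:.3f}")   # 0.010
print(f"P(G and P and H) = {p ** 3:.3f}")  # 0.001
```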
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-07T04:58:22.585Z · LW(p) · GW(p)
Yes, I agree with you there. But this is much weaker than any general form of Occam. See my example with primes. What we want to say in some form of Occam approach is much stronger than what you can get from simply using the conjunction argument.
↑ comment by falenas108 · 2011-04-07T00:31:38.835Z · LW(p) · GW(p)
Sorry, didn't see you posted this before I replied too...
Replies from: None
comment by endoself · 2011-04-07T00:05:45.808Z · LW(p) · GW(p)
The thing intended as the proof is most of chapter 2. I dislike Jaynes' assumptions there, since I find many of them superfluous compared to other proofs. You probably like them even less, since one is "Representation of degrees of plausibility by real numbers".
Replies from: curi↑ comment by curi · 2011-04-07T00:09:20.391Z · LW(p) · GW(p)
It cannot be a proof of Bayesian epistemology itself if it makes assumptions like that.
It is merely a proof of some theorems in Bayesian epistemology given some premises that Bayesians like.
If you have a different proof which does not make assumptions I disagree with, then let's hear it. Otherwise you can give up on proving and start arguing why I should agree with your starting points. Or maybe even, say, engaging with Popper's arguments and pointing out mistakes in them (if you can find any).
Replies from: Peterdjones, endoself↑ comment by Peterdjones · 2011-04-12T20:31:43.762Z · LW(p) · GW(p)
You are complaining that it is not a deduction of Bayes from no assumptions whatever. But all it needs is for those assumptions to be made to "work"--i.e. applied without contradiction, quodlibet, or other disaster.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-15T15:18:53.682Z · LW(p) · GW(p)
Remember, Popper himself said it all starts with common sense.
↑ comment by endoself · 2011-04-07T02:42:12.895Z · LW(p) · GW(p)
I agree that it is by no means a complete proof of Bayesian epistemology. The book I pointed you to might have a more complete one, though I doubt it will be complete, since it seems more like a book about using statistics than about rigorously understanding epistemology.
I am currently collecting the necessary knowledge to write the full proof myself, if it is possible (not because of this debate, but because I kept being annoyed by unjustified assumptions that didn't even seem necessary).
Replies from: curi↑ comment by curi · 2011-04-07T02:55:59.903Z · LW(p) · GW(p)
Good luck. But, umm, do you have some argument against fallibilism? Because you're going to need one.
Replies from: endoself↑ comment by endoself · 2011-04-07T03:35:59.018Z · LW(p) · GW(p)
I think I massively overstated my intention. I meant the full proof of the stuff we already know -- the sort of thing I think could be in Mathematical Statistics, Volume 1: Basic and Selected Topics.
Anyways, I think I accept fallibilism, at least as described on the Wikipedia page. Why do you think I don't? (Though the confusion is understandable, because I've been talking about idealized agents a lot more than about humans actually applying Bayesianism.)
Replies from: curi↑ comment by curi · 2011-04-07T03:48:40.486Z · LW(p) · GW(p)
I think you are not a fallibilist because you want to prove philosophical ideas.
But we can't have certainty. So what do you even think it means to "prove" them? Why do you want to prove them instead of giving good arguments on the matter?
Replies from: endoself↑ comment by endoself · 2011-04-07T04:15:22.935Z · LW(p) · GW(p)
I use the word prove because I'm doing it deductively, in math. I already linked you to the 2+2=3 thing, I believe. Also, the question of how I would, for example, change AI design if a well-known theorem turned out to be wrong (pretend it is the future, the best theorems proving Bayesianism are better known, and I am working on AI design) is both extremely hard to answer and unlikely to be necessary. Well, unlikely is the wrong word; what is P(X | "There are no probabilities")? :)
Replies from: calef, curi↑ comment by calef · 2011-04-07T05:10:07.160Z · LW(p) · GW(p)
Probably the most damning criticism you'll find, curi, is that fallibilism isn't useful to the Bayesian.
The fundamental disagreement here is somewhere in the following statement:
"There exist true things, and we have a means of determining how likely it is for any given statement to be true. Furthermore, a statement that has a high likelihood of being true should be believed over a similar statement with a lower likelihood of being true."
I suspect your disagreement is in one of several places.
1) You disagree that there even exist epistemically "true" facts; 2) that we can determine how likely something is to be true; or 3) that likelihood of being true (as defined by us) is reason to believe the truth of something.
I can actually flesh out your objections to all of these things.
For 1, you could probably successfully argue that we aren't capable of determining whether we've ever actually arrived at a true epistemic statement, because real certainty doesn't exist; thus the existence or nonexistence of true epistemic statements is on the same epistemological footing as the existence of God--i.e. shaky to the point of not concerning oneself with them altogether.
2 basically ties in with the above directly.
3 is a whole 'nother ball game, and I don't think it's really been broached yet by anyone, but it's certainly a valid point of contention. I'll leave it out unless you'd like to pursue it.
The Bayesian counter to all of these is simply, "That doesn't really do anything for me."
Declaring what certainty we have, and quantifying it as best we can, is incredibly useful. I can pick up an apple and let go. It will fall to the ground. I have an incredibly huge amount of certainty in my ability to repeat that experiment.
That I cannot foresee the philosophical paradigm that will uproot my hypothesis that dropped apples fall to the ground is not a very good reason to reject my relative certainty in the soundness of my hypothesis. Such an apples-aren't-falling-when-dropped paradigm would literally (and necessarily) uproot everything else we know about the world.
Basically, what I'm trying to say is that all you're ever going to get out of a Bayesian is, "No, I disagree. I think we can have certainty." And the only way you could disprove conclusions made by Bayesians is through means the Bayesian would have already seen, and thus the Bayesian would have already rejected said conclusion.
You've already outlined that the fallibilist will just keep tweaking explanations until an explanation with no criticism is reached. I think you might find Bayesianism more palatable if you just pretend that we aren't trying to find certainty, just say we're trying to minimize criticism.
This probably hasn't been a very satisfying answer. I certainly agree it's useful to have an understanding of the biases to our certainties. I also think Bayesianism happens to build that into itself quite well. Personally, I don't think there's anything I'm absolutely certain about, because to claim so would be silly.
Replies from: endoself↑ comment by endoself · 2011-04-07T05:32:57.499Z · LW(p) · GW(p)
Small nitpick: I don't like your use of the word 'certainty' here. Especially in philosophy, it has too much of a connotation of "literally impossible for me to be wrong" rather than "so ridiculously unlikely that I'm wrong that we can just ignore it", which may cause confusion.
Replies from: calef↑ comment by calef · 2011-04-07T05:40:16.595Z · LW(p) · GW(p)
Where don't you like it? I don't think anyone actually argues for your first definition, because, like I said, it's silly. I think curi's point is that fallibilism is predicated on your second definition not (ever?) being a valid claim.
My point is that the things we are "certain" about (as per your second definition) probably coincide almost exactly with "statements without criticism" as per curi's definition(s).
Replies from: endoself, Peterdjones↑ comment by endoself · 2011-04-07T06:01:44.377Z · LW(p) · GW(p)
It is a silly definition, but people are silly, and I hear it often enough to be wary of it.
My point is that the things we are "certain" about (as per your second definition) probably coincide almost exactly with "statements without criticism" as per curi's definition(s).
I interpreted this as the first definition. I guess we should see what curi says.
↑ comment by Peterdjones · 2011-04-12T20:34:02.205Z · LW(p) · GW(p)
People generally try to have their cake and eat it: they want certainty to mean "cannot be wrong", but only on the basis that they feel sure.
↑ comment by curi · 2011-04-07T04:35:29.154Z · LW(p) · GW(p)
I think we have very different goals, and that the Popperian ones are better.
There is more to epistemology, and to philosophy, than math.
I'd say you are practically trying to eliminate all philosophy. And that saying you have an epistemology at all is very misleading, because epistemology is a philosophical field.
Replies from: JoshuaZ, endoself↑ comment by JoshuaZ · 2011-04-07T05:24:07.806Z · LW(p) · GW(p)
I think we have very different goals, and that the Popperian ones are better.
So could you be more precise in how you think the goals differ and why the Popperian goals are better?
There is more to epistemology, and to philosophy, than math.
I'd say you are practically trying to eliminate all philosophy. And that saying you have an epistemology at all is very misleading, because epistemology is a philosophical field.
Huh? Do you mean that because the Bayesians have made precise mathematical claims it somehow ceases to be an epistemological system? What does that even mean? I don't incidentally know what it means to eliminate philosophy, but areas can certainly be carved off from philosophy into other branches. Indeed, this is generally what happens. Philosophy is the big grab bag of things that we don't have a very good precise feel for. As we get more precise understanding things break off. For example, biology broke off from philosophy (when it broke off isn't clear, but certainly by 1900 it was a separate field) with the philosophers now only focusing on the remaining tough issues like how to define "species". Similarly, economics broke off. Again, where it broke off is tough (that's why Bentham and Adam Smith are often both classified as philosophers). A recent break off has been psychology, which some might argue is still in the process. One thing that most people would still see as clearly in the philosophy realm is moral reasoning. Indeed, some would argue that the ultimate goal of philosophy is to eliminate itself.
If it helps at all: in claiming that the Bayesians lack an epistemology or are not trying to do philosophy, it might help to taboo both epistemology and philosophy and restate those statements. What do those claims mean in a precise way?
Replies from: curi↑ comment by curi · 2011-04-07T05:33:11.524Z · LW(p) · GW(p)
Different people are telling me different things. I have been told some very strong instrumentalist and anti-philosophy arguments in my discussions here. I don't know just how representative of all Bayesians that is.
For example, moral philosophy has been trashed by everyone who spoke to me about it so far. I get told it's meaningless, or that Bayesian epistemology cannot create moral knowledge. No one has yet been like "oh my god, epistemology should be able to create moral and other philosophical (non-empirical, non-observational) knowledge! Bayesian stuff is wrong since it can't!" Rather, people don't seem to mind, and will argue at length that e.g. explanatory knowledge and non-empirical knowledge don't exist or are worthless and that prediction is everything.
By "philosophy" I mean things which can't be experimentally/empirically tested (as opposed to "science" by which I mean things that can be). So for philosophy, no observations are directly relevant.
Make sense? Where do you stand on these issues?
And the way I think Popperian goals are better is that they value explanations which help us understand the world instead of being instrumentalist, positivist, anti-philosophical, or anything like that.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-08T03:01:41.839Z · LW(p) · GW(p)
For example, moral philosophy has been trashed by everyone who spoke to me about it so far.
Have you never dealt with people who aren't moral realists before?
And the way I think Popperian goals are better is that they value explanations which help us understand the world instead of being instrumentalist, positivist, anti-philosophical, or anything like that.
You are going to have to expand on this. I'm still confused by what you mean by anti-philosophical. I also don't see why "instrumentalist" is a negative. The Bayesian doesn't have a problem with trying to understand the world: the way they measure that understanding is how well they can predict things. And Bayesianism is not the same as positivism by most definitions of that term, so how are you defining an approach as positivist, and why do you consider that to be a bad thing?
↑ comment by endoself · 2011-04-07T05:10:34.467Z · LW(p) · GW(p)
In order for any philosophy to be valid, the human brain must be able to evaluate deductive arguments; they are a huge component of philosophy, with many often being needed to argue a single idea. Wondering what to do in case these are wrong is not only unnecessary but impossible.
Replies from: curi↑ comment by curi · 2011-04-07T05:22:01.215Z · LW(p) · GW(p)
I don't have any criticism of deductive logic itself. But I do have criticisms of some of the premises I expect you to use. For example, they won't all be deductively argued for themselves. That raises the problem of how you will sort out good ideas from bad ideas for use as premises. That gets into various proposed solutions to that problem, such as induction or Popperian epistemology. But if you get into that, right in the premises of your supposed proof, then it won't be much of a proof, because so much substantive content in the premises will be non-deductive.
Replies from: endoself↑ comment by endoself · 2011-04-07T05:43:25.956Z · LW(p) · GW(p)
Do you agree with the premises I have used in the discussion of Dutch books and VNM-utility so far? There it is basically "a decision process that we actually care about must have the following properties", and that's it. I did skim over inferring probabilities from Dutch books and VNM axiom 3, and there may be some hidden premises in the former.
Replies from: curi↑ comment by curi · 2011-04-07T06:55:17.245Z · LW(p) · GW(p)
Do you agree with the premises I have used in the discussion of Dutch books and VNM-utility so far?
I don't think so. You said we have to assign probabilities to avoid getting Dutch Booked. I want an example of that. I got an example where probabilities weren't mentioned, which did not convince me they were needed.
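(For reference, the standard toy Dutch book, with invented numbers: an agent whose prices for bets violate the probability axioms -- here the implied probabilities of A and not-A sum to 1.2 -- can be sold a pair of bets that loses money under every outcome.)

```python
# Standard toy Dutch book (numbers invented). The agent prices a ticket
# paying 1 if A at 0.6, and a ticket paying 1 if not-A also at 0.6, so its
# implied probabilities sum to 1.2. A bookie sells it both tickets.
price_A, price_not_A = 0.6, 0.6
cost = price_A + price_not_A        # the agent pays 1.2 up front

for a_happens in (True, False):
    payout = 1.0                    # exactly one of the two tickets pays off
    print(f"A={a_happens}: agent nets {payout - cost:+.2f}")
# The agent loses 0.20 whichever way A turns out. Prices that obey the
# probability axioms (summing to 1) are exactly what rules such books out.
```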
comment by Peterdjones · 2011-04-12T20:11:16.706Z · LW(p) · GW(p)
Curi,
"Some first chapter assumptions are incorrect or unargued. It begins with an example with a policeman, and says his conclusion is not a logical deduction because the evidence is logically consistent with his conclusion being false."
Popper's epistemology doesn't explain that the conclusion of the argument has no validity, in the sense of being certainly false. In fact, it requires that the conclusion is not certainly false. No conjecture is certainly false.
Perhaps you meant he shows that the argument is invalid in the sense of being a non sequitur. (A non sequitur can still have a plausible or true conclusion.) Of course it is not valid in the sense of traditional, necessitarian deduction. The whole point is that it is something different. And the argument that this non-traditional, plausibility-based deduction works is just the informal observation that we use it all the time and it seems to work. What else could it be? If it were valid by traditional deduction it would BE traditional deduction.
" Later when he gets into more mathematical stuff which doesn't (directly) rest on appeals to intution, it does rest on the ideas he (supposedly) established early on with his appeals to intuition."
The Popperian argument against probabilistic reasoning is that it can't be shown how it works. If Jaynes' maths shows how it works, that objection is removed.
"This is pure fiction. Popper is a fallibilist and said (repeatedly) that theories cannot be proved false (or anything else)."
Of course he has to believe in some FAPP (for all practical purposes) refutation, or he ends up saying nothing at all.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-14T14:27:20.315Z · LW(p) · GW(p)
"Science, philosophy and rational thought must all start from common sense". KRP, Objective Knowledge, p33.
Starting with common sense is exactly what Jaynes is doing. (Popper says that what is important is not to take common sense as irrefutable).
comment by Peterdjones · 2011-07-18T00:03:38.356Z · LW(p) · GW(p)
If anyone can bear more of this, Popper's argument against induction using Bayes is being discussed here.
comment by Peterdjones · 2011-04-12T20:23:30.455Z · LW(p) · GW(p)
"'What is support?' (This is not asking for its essential nature or a perfect definition, just to explain clearly and precisely what the support idea actually says) and 'What is the difference between "X supports Y" and "X is consistent with Y"?' If anyone has the answer, please tell me."
Bayesians appear to have answers to these questions. Moreover, far from wishing to refute Popper, they can actually incorporate a form of Popperianism.
"On the other hand, Popper's idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes' Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued."
But of course Popper was a falliblist as well as a falsificationist, so his falsifications aren't absolute and certain anyway. Bayes just brings out that where you don't have absolute falsification, you can't have absolute lack of positive support. Falsification of T has to support not-T. But the support gets spread thinly...
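(A numeric illustration of the quoted claim, with invented numbers: when a theory strongly predicts an observation, seeing it only nudges the posterior up, while not seeing it nearly kills the theory.)

```python
# Invented numbers illustrating the quoted point: a failed prediction moves
# the posterior far more than a successful one.
prior_H = 0.5
p_E_given_H = 0.99      # H strongly predicts observation E
p_E_given_not_H = 0.5   # rival theories are indifferent about E

def posterior(observed_E):
    like_H = p_E_given_H if observed_E else 1 - p_E_given_H
    like_not_H = p_E_given_not_H if observed_E else 1 - p_E_given_not_H
    joint_H = prior_H * like_H
    joint_not_H = (1 - prior_H) * like_not_H
    return joint_H / (joint_H + joint_not_H)

print("P(H | E)     =", round(posterior(True), 3))   # ~0.664: mild support
print("P(H | not E) =", round(posterior(False), 3))  # ~0.02: near-falsification
```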
comment by falenas108 · 2011-04-07T00:28:06.173Z · LW(p) · GW(p)
From the research I have done in the last 5 minutes, it seems as though Popper believed that all good scientific theories should be subject to experiments that could prove them wrong.
Ex:
"the falsificationists or fallibilists say, roughly speaking, that what cannot (at present) in principle be overthrown by criticism is (at present) unworthy of being seriously considered; while what can in principle be so overthrown and yet resists all our critical efforts to do so may quite possibly be false, but is at any rate not unworthy of being seriously considered and perhaps even of being believed" -Popper
This seems to imply that theories can be proved false.
Replies from: curi↑ comment by curi · 2011-04-07T00:32:34.700Z · LW(p) · GW(p)
Replying to accusations of unscholarly criticism of Popper with an unsourced Popper quote is very silly.
That the quote doesn't say what you claim it does (as I read it), and you make no attempt to explain your reading of it, is also silly.
Replies from: None, falenas108↑ comment by [deleted] · 2011-04-07T00:59:57.133Z · LW(p) · GW(p)
The quote came from Conjectures and Refutations, pg 309. I agree that it doesn't say what falenas108 claims. Plus a bit has been missed out at the end: " -- though only tentatively." Also, on the following page, Popper says:
For us [fallibilists] ... science has nothing to do with the quest for certainty or probability or reliability. We are not interested in establishing scientific theories as secure, or certain, or probable. Conscious of our own fallibility we are only interested in criticizing them and testing them, hoping to find out where we are mistaken; of learning from our mistakes; and, if we are lucky, of proceeding to better theories.
So Popper would not assert that theories can be established as definitely false.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-12T20:28:49.091Z · LW(p) · GW(p)
Of course, in reality, fallibilism just means you don't look for certainty. You can and should look for more probable theories or, as P. calls them, "better theories".
Replies from: curi↑ comment by falenas108 · 2011-04-07T00:56:18.214Z · LW(p) · GW(p)
Citation: Popper, K. R. (1963). Conjectures and Refutations: The Growth of Scientific Knowledge, New York: Harper and Row. Reprinted London: Routledge, 1974.
It says theories should resist being overthrown for them to be proper theories. That implies that it is possible for a theory to be overthrown.
Replies from: curi, curi↑ comment by curi · 2011-04-07T00:59:11.051Z · LW(p) · GW(p)
A theory can be fallibly overthrown, but not definitely overthrown, in Popper's view. Quotes out of context are easy to misread when you are not familiar with the ideas, and when you make assumptions (e.g. that overthrowing must be definitive) that the author does not make.
Replies from: falenas108, Peterdjones↑ comment by falenas108 · 2011-04-07T01:01:26.999Z · LW(p) · GW(p)
Ok, thanks for correcting me.
↑ comment by Peterdjones · 2011-04-12T20:25:35.500Z · LW(p) · GW(p)
"A theory can be fallibly overthrown, but not definitely overthrown, in Popper's view. "
So maybe Jaynes was using "disprove" to mean "fallibly overthrow".
↑ comment by curi · 2011-04-07T01:10:57.851Z · LW(p) · GW(p)
Giving no page number isn't very nice. For anyone interested, it is on page 309, which is at the start of chapter 10, section 3.
If you read the context, you will find, for example, an explicit denouncement of the quest for certainty on the next page. Plus elaboration. Popper's position in these matters is not unclear.