GroupThink, Theism ... and the Wiki
post by byrnema · 2009-04-13T17:28:58.896Z · LW · GW · Legacy · 62 comments
In response to The uniquely awful example of theism, I presented myself as a data point of someone in the group who disagrees that theism is uncontroversially irrational.
At the cost of considerable time, several karma points, and two bad posts, I now retract my position.
Because I have deconverted? (Sorry, but no.)
I had a working assumption (inferred from here) that rationality meant that all beliefs must be rigorously consistent with empirical observation. I now think of this as a weak form of rationalism (see full definition below). A stronger form of rationalism, held by (many? most?) rationalists, is that there is no other valid source of knowledge. If we define a belief system as religious if and only if it claims knowledge that is independent of empirical experience (i.e., metaphysical), then it is trivially true that all religions are irrational -- using the stronger definition of rational.
A disagreement over definitions is not really a disagreement. Someone suggested on the April open thread that we define "rationality". My idea of a definition would look something like this:
Rationality assumes that:
(1) The only source of knowledge is empirical experience.
(2) The only things that are known are deduced from empirical experience by valid logical reasoning and mathematics.
Weak Rationality assumes that:
(1) The first source of knowledge is empirical experience.
(2) The only things that are known with certainty are deduced from empirical experience by valid logical reasoning and mathematics.
(3) Define a belief system as all knowledge deduced from empirical observation together with all metaphysical beliefs, if any. Then the belief system is rational (nearly rational, or weakly rational) if it is internally consistent.
These definitions have probably been outlined better elsewhere than they are here. Perhaps I have misplaced emphasis, and certainly there are important nuances and variations. Whether this definition works or not, I think it's important to have a working set of definitions that we all agree upon. The wiki has just started out, but I think it's a terrific idea and worth putting time into. Every time you struggle to find the right definition for something, I suggest you add your effort to the group knowledge by adding that definition to the Wiki.
I made the accusation that the consensus about religion was due to "group think". In its pejorative sense, group think means everyone thinks the same thing because dissent is eliminated in some way. However, group think can also refer to the common set of definitions that we are working with. I think that having a well-defined group think will make posting much more efficient for everyone (with fewer semantic confusions) and will also aid newcomers.
The "group think" defined in the Wiki would certainly need to be dynamic, nuanced and inclusive. A Wiki is already dynamic. To foster nuance and inclusion, the wiki might prompt for alternatives. For example, if I posted the two definitions of rationality above I might also write, "Do you have another working definition of rationalism? Please add it here." so that a newcomer to LW would know they were not excluded from the "group of rationalists" if they have a different definition.
What are some definitions that we could/should add to the Wiki? (I've noticed that "tolerance", as a verb or a noun, is problematic.)
62 comments
Comments sorted by top scores.
comment by PhilGoetz · 2009-04-15T03:28:43.644Z · LW(p) · GW(p)
How about an alphabetical list of links for all of the Yudkowskian and Hansonian phrases that get used here so frequently?
↑ comment by David_Gerard · 2011-04-11T21:53:34.725Z · LW(p) · GW(p)
+1
LessWrong still needs a jargon file.
Edit: http://wiki.lesswrong.com/wiki/Jargon exists and is now somewhat maintained.
comment by Paul Crowley (ciphergoth) · 2009-04-13T22:51:49.673Z · LW(p) · GW(p)
I think I would encourage people not to add terms of their own invention to the wiki in general; if the term gains enough currency, others will start the relevant page. This system works well on Wikipedia.
↑ comment by byrnema · 2009-04-13T23:12:56.332Z · LW(p) · GW(p)
I agree that inventive, one-use terms needn't be put on the wiki. (I wouldn't put my definition of rationality up because I have little or no knowledge of the group think -- I just wanted to get the conversation started.) But if someone puts effort into defining a term accurately (for example, a term that is gaining currency), especially in a way that is a good summary of how the group in general tends to understand it, they should add it to the Wiki. Defining terms is important but might take a lot of work.
comment by CronoDAS · 2009-04-13T22:15:34.845Z · LW(p) · GW(p)
The way I see it, it's not so much that "theism" is itself irrational as that it has accumulated an awful lot of baggage around it.
Loosely speaking, yes, the universe could have had a creator. The hypothesis that the universe has a creator and that creator performs miracles, on the other hand, is not consistent with empirical evidence, although demonstrating this isn't trivial - it takes modern science (and philosophy of science) to explain why various forms of vitalism and mind-body dualism are wrong.
On the other hand, any idiot should be able to see that the problem of evil is a knock-down argument against folk Christianity and many other religions. Things could be a lot better than they are, but they're not.
↑ comment by PhilGoetz · 2009-04-15T03:21:13.491Z · LW(p) · GW(p)
According to Christianity, there is no lasting evil; hence, no serious problem of evil. Evil, like everything else, is a tool of God; the Bible makes references to God deciding to allow demons etc. to do evil, such as in Job. Jesus in some places performs miracles with commentary to the effect that he isn't doing them in order to help people's earthly lives; he's doing it so that they may believe. Evil causes a crisis of faith only for those whose faith is weak, who suspect that maybe this world is all there is.
↑ comment by Psychohistorian · 2009-04-16T22:24:07.476Z · LW(p) · GW(p)
If you think about it, the argument from evil is more of a problem with the cogency of a concept of God.
If I invented malaria in a laboratory and released it on the world, the only difficult moral question is what way of killing me wouldn't be too nice.
God does the exact same thing (under a creationist model) and he is still considered "good." Thus, we must infer either that I'm morally entitled to kill millions of people, or that when we say God is "good," we literally have no idea what we mean by it.
As for the idea that the devil created disease, or some other such cop-out: the issue is that God allows its continued existence, which would imply that the relief of suffering caused by others is not a good thing, since if it were a good thing, God would do it.
Then again, I doubt many theists think about having a cogent concept of God.
comment by Jack · 2009-04-14T01:48:43.237Z · LW(p) · GW(p)
So I'm confused (in an admittedly Socratic way). How do you define metaphysics?
I ask because I have quite a few beliefs I consider metaphysical and they all got there by considering general empirical facts and arguing with reasons.
↑ comment by byrnema · 2009-04-14T03:30:40.407Z · LW(p) · GW(p)
I was defining metaphysical as anything completely independent of empirical observation. From the comments below, it sounds like it can be argued that reasoning intersects with the metaphysical world, in the sense that you can conclude something about the metaphysical world with reasoning. I thought that maybe metaphysical things didn't exist -- but perhaps that's empiricism. (What is an example of a rational metaphysical belief?) Perhaps... metaphysical beliefs that are concluded through reason are rational, but those that are still independent of reason are religious?
↑ comment by Jack · 2009-04-14T05:00:36.055Z · LW(p) · GW(p)
So traditionally metaphysics was the science of discovering facts about reality through the use of "pure reason". I take metaphysical issues to be things like the existence of so-called "universals", philosophy of time (like whether the past and the future exist in the same way the present does, and why time flows in one direction), what causation is, whether humans have free will, personal identity (what makes something the same person at different times, what a person is), the ontological status of mathematical objects (are numbers real?) and, I suppose, the existence of God and a bunch of other issues. Now I would say there are all sorts of positions one could take on these issues, and if one had good reasons for those positions one would at least be acting rationally even if one was still wrong.
Now it's possible to hold positions on these issues for bad reasons. For example, your reason might be that the Bible told you you have free will. But this doesn't distinguish metaphysical issues from any other issue; you might think that pi = 3 because the Bible says so. Certainly you'll agree that there are religious beliefs that aren't metaphysical ones.
Now there is a trend in metaphysics that rejects the existence of supernatural or non-material things (perhaps this is what you meant by metaphysics). But this isn't an axiom. We have reasons for rejecting the supernatural and they're very strong reasons. When it comes to God's existence the reasons are even stronger against existence.
comment by timtyler · 2009-04-13T19:14:41.823Z · LW(p) · GW(p)
So: does induction come under "valid logical reasoning" or "mathematics"?
↑ comment by SarahNibs (GuySrinivasan) · 2009-04-13T20:11:49.412Z · LW(p) · GW(p)
Related: rationality includes using Occam's Razor. Exactly which Razor we employ is in part determined empirically. If properties of your implied Razor are at odds with properties of empirically derived Razors, that may indicate a lack of rationality.
Metaphysical beliefs are still subject to the Razor. Right?
↑ comment by robzahra · 2009-04-13T20:37:52.381Z · LW(p) · GW(p)
seconding timtyler and guysrinivasan--I think, but can't prove, that you need an induction principle to reach the anti-religion conclusion. See especially Occam's Razor and Inductive Bias. If someone wants to bullet point the reasons to accept an induction principle, that would be useful. Maybe I'll take a stab later. It ties into Solomonoff induction among other things.
EDIT---I've put some bullet points below which state the case for induction to the best of my knowledge.
↑ comment by robzahra · 2009-04-13T21:01:50.515Z · LW(p) · GW(p)
Why to accept an inductive principle:
1. Finite agents have to accept an "inductive-ish" principle, because they can't even process the infinitely many consistent theories that are longer than the number of computations they have in which to compute, and therefore they can't even directly consider most of the long theories. Zooming out and viewing from the macro level, this is extremely inductive-ish, though it doesn't decide between two fairly short theories, like Christianity versus string theory.
2. Probabilities over all your hypotheses have to add to 1, and getting an extra bit of info allows you to rule out approximately half of the remaining consistent theories; therefore, your probability that a theory one bit longer is true ought to drop by that ratio. If your language is binary, this has the nice property that you can assign a 1-length hypothesis a probability of 1/2, a 2-length hypothesis a probability of 1/4, ... an n-length hypothesis a probability of 1/(2^n) ... and you notice that 1/2 + 1/4 + 1/8 + ... = 1. So the scheme fits pretty naturally.
3. Under various assumptions, an agent does only a constant factor worse using this induction assumption versus any other method, making this seem not only less than arbitrary but arguably "universal".
4. Ultimately, we could be wrong and our universe may not actually obey the Occam prior. It appears we don't and can't even in principle have a complete response to religionists who are using solipsistic arguments. For example, there could be a demon making these bullet points seem reasonable to your brain, while they are in fact entirely untrue. However, this does not appear to be a good reason not to use Occam's razor.
5. Related to (2): you can't assign equal probability greater than 0 to each of the infinite number of theories consistent with your data and still have your sums converge to 1 (because for any rational number R > 0, the sum of an infinite number of R's will diverge). So, you have to discount some hypotheses relative to others, and induction looks to be the simplest way to do this (one could say of the previous sentence, "meta-Occam's razor supports Occam's razor"). The burden of proof is on the religionist to propose a plausible alternative mapping, since the Occam mapping appears to satisfy the fairly stringent desiderata.
6. Further to (5), notice that to get the probability sum to converge to 1, and also to assign each of the infinite consistent hypotheses a probability greater than 0, most hypotheses need to have smaller probability than any fixed rational number. In fact, you need more than that: you actually need the probabilities to drop pretty fast, since 1/2 + 1/3 + 1/4 + ... does not converge. On the other hand, you COULD have certain instances where you switch two theories around in their probability assignments (for example, you could arbitrarily say Christianity was more likely than string theory, even though Christianity is a longer theory), but for most of the theories, with increasing length you MUST drop your probability down towards 0 relatively fast to maintain the desiderata at all. To switch these probabilities only for particular theories you care about, while you also need and want to use the theory on other problems (including normal "common sense" intuitions, which are very well explained by this framework), and you ALSO need to use it generally on this problem except for a few counter-examples you explicitly hard-code, seems incredibly contrived. You're better off just going with Occam's razor, unless some better alternative can be proposed.
Rob Zahra
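(A quick numeric sketch of points (2) and (6) above; this is my own illustration in Python, not part of the original comment.)

```python
# Occam-style scheme from point (2): an n-bit hypothesis gets
# probability 2**-n (one representative hypothesis per length).
occam_mass = sum(2.0 ** -n for n in range(1, 60))
print(occam_mass)  # ~1.0: the weights sum to a valid distribution

# Point (6): weights decaying as slowly as 1/2, 1/3, 1/4, ... cannot
# be normalized, because the harmonic partial sums grow without bound.
for cutoff in (10, 1000, 100000):
    print(cutoff, sum(1.0 / n for n in range(2, cutoff)))
# The printed sums keep growing with the cutoff (divergence), so the
# probabilities must drop faster than 1/n, as point (6) argues.
```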
↑ comment by Wei Dai (Wei_Dai) · 2011-01-12T01:29:31.675Z · LW(p) · GW(p)
I agree up to the first half of step 6, but I think the conclusion is wrong (or at least not justified from the argument). There are two different principles involved here:
- A finite agent must use an "inductive-ish" prior with a finite complexity
- One should use the simplest prior. (Occam's Razor)
If every finite agent must use an "inductive-ish" prior, then there is no need to invoke or appeal to Occam's Razor to explain or justify our own inductive tendencies, so Rob's argument actually undercuts Occam's Razor.
If we replace Occam's Razor with the principle that every finite agent must use a prior with finite complexity, then one's prior is just whatever it is, and not necessarily the simplest prior. There is no longer an argument against someone who says their prior assigns a greater weight to Christianity than to string theory. (In the second half of step 6, Rob says that's "contrived", but they could always answer "so what?")
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-14T02:25:14.046Z · LW(p) · GW(p)
Rob, just make it a post.
↑ comment by byrnema · 2009-04-13T21:03:27.372Z · LW(p) · GW(p)
The anti-religion conclusion in my post was just an application of the definitions given for religion and rational.
Are you saying that you would modify the first definition of rational to include these other ways of knowing (Occam's Razor and Inductive Bias), and that they can make conclusions about metaphysical things?
Oh, I see, these would be included under "logical reasoning". The part I would modify is (1) whether some metaphysical beliefs are acceptable and (2) that they can be constrained by logical reasoning.
↑ comment by robzahra · 2009-04-13T22:12:20.906Z · LW(p) · GW(p)
Are you saying that you would modify the first definition of rational to include these other ways of knowing (Occam's Razor and Inductive Bias), and that they can make conclusions about metaphysical things?
yes, I don't think you can get far at all without an induction principle. We could make a meta-model of ourselves and our situation and prove we need induction in that model, if it helps people, but I think most people have the intuition already that nothing observational can be proven "absolutely", that there are an infinite number of ways to draw curved lines connecting two points, etc. Basically, one needs induction to move beyond skeptical arguments and do anything here. We're using induction implicitly in all or most of our applied reasoning, I think.
comment by PhilGoetz · 2009-04-13T19:04:02.767Z · LW(p) · GW(p)
I think that, given your definitions, whether we are rational or necessarily "religious" depends on how the symbols in human minds are grounded. Certainly our minds have built-in assumptions that we don't personally discover empirically. The question is whether it is possible to function without any such assumptions that are not necessarily true in the external world.
comment by byrnema · 2009-04-14T16:29:54.170Z · LW(p) · GW(p)
I wish that negative scores would show on the top of posts so that I could quantify just how unpopular this post was. I speculate that the main reason I am down-voted is that people didn't like my definitions. However, my bad definitions just support the main argument of my post, which is that it is difficult for a newcomer to figure out what you are talking about. (As suggested, I could go 'read the classics' for a few years and come back ... but I trust that is not required or expected.)
A negative score also indicates to me that my push for rigorous definitions as a precondition for discussion is not met with enthusiasm. I'm simultaneously disappointed and accepting of this. While I would prefer to know what you are talking about, I understand that much (arguably, more) progress can be made by proceeding organically, allowing the definitions to shift from speaker to speaker and argument to argument.
↑ comment by orthonormal · 2009-04-15T23:58:55.767Z · LW(p) · GW(p)
I speculate that the main reason I am down-voted is that people didn't like my definitions.
I didn't up-vote or down-vote you, but I do believe that your definitions don't really capture the nature of the thing we're discussing. It's tempting to seek pithy definitions for our important concepts, but if a definition doesn't draw the right boundary around the instances we care about, it's not a good one. I mean, depending on how you construe your premises, it looks like you've left Occamian priors and Bayesian induction out of your original definition. A genuine positivist would assent to your definition, but most of this community doesn't think you can get anywhere without those pieces. And there may well be more; rationality may be a slightly messy Art.
Secondly: I think we know where you're going with your concept of weak rationality. You're hoping that you're still permitted to believe in a vast unjustified theory, so long as you can't find a knock-down counterargument. The moral around here is: Don't ask what you're permitted to believe or what you're forced to believe. Just try to see which way the evidence runs.
All that being said, your post shows an abundance of good faith (no pun intended) toward those of us who see atheism as the clear right answer. Whether you stay theist or not, I for one am glad you're around.
↑ comment by byrnema · 2009-04-16T03:38:24.452Z · LW(p) · GW(p)
Thank you. You're right, I do not see atheism as the clear right answer. I look forward to understanding this point of view. I would rather have left this undefined for the sake of my rhetorical position, but I am probably not really a theist: I feel neutral about the existence of God.
Don't ask what you're permitted to believe or what you're forced to believe.
That's exactly what interests me! But as a precise answer seems out of reach, I will try and be more oblique in my evidence-gathering.
↑ comment by AllanCrossman · 2009-04-14T16:33:28.662Z · LW(p) · GW(p)
I wish that negative scores would show on the top of posts so that I could quantify just how unpopular this post was.
I agree partially: such scores should at least be visible to the original poster.
However, even without that feature, you may still be able to work out the net vote by seeing how badly your karma was affected, if at all.
comment by smoofra · 2009-04-14T03:20:36.032Z · LW(p) · GW(p)
(1) The only source of knowledge is empirical experience.
I don't think so. I am extremely confident that all propositions in the elementary theory of integers are really either True or False, in the Real World. This is probably one of the strongest beliefs I hold, and it's based entirely on a gut feeling. I don't think I'm being irrational either.
↑ comment by arundelo · 2009-04-14T03:35:44.948Z · LW(p) · GW(p)
(Rhetorical question:) How did you find out about integers?
↑ comment by smoofra · 2009-04-14T04:20:54.143Z · LW(p) · GW(p)
Not sure. I remember being confused about what a negative number was as a child, but I don't know where I first heard of them, or when I first perceived their true nature.
↑ comment by arundelo · 2009-04-17T02:37:15.679Z · LW(p) · GW(p)
What I wanted to do was point out that you found out about integers the same way you found out about everything else: empirically.
But that doesn't change the fact that statements about integers are (usually) "True or False, in the Real World", and once you've formed the necessary concepts, you don't need any more sense data to find out new facts about them.
(Edited To Add: I say "usually" just to exclude Grelling-type statements and any other weird cases.)
comment by AndySimpson · 2009-04-14T02:07:13.289Z · LW(p) · GW(p)
Rationalism isn't exclusively or even necessarily empirical. Just ask Descartes.
comment by AlexU · 2009-04-13T21:04:49.909Z · LW(p) · GW(p)
Be careful about how you define those terms, as they may be idiosyncratic. "Rationalism" and "Empiricism" have long philosophical histories, and are typically seen as parallel, not-quite-rival schools of thought, with the rationalists striving to root all knowledge in a priori rational inquiry (Descartes' Meditations is the paradigm example). I'm not sure it's wise to flip that on its head by redefining such a common, well-denoted term.
Replies from: byrnema↑ comment by byrnema · 2009-04-13T21:10:49.677Z · LW(p) · GW(p)
I want to define the terms in the standard way; as it is commonly viewed in this group. I'm new on LW and those definitions were just my best guesses.
comment by Gordon Seidoh Worley (gworley) · 2009-04-13T18:34:47.310Z · LW(p) · GW(p)
Interesting thoughts.
I have tried in the past to follow the mathematical definitions I learned regarding rationality. Unfortunately they haven't always served well in discussions. They are:
Rationality: the application of Bayes rule
Well-informed: the possession of evidence with correct confidence
Prudence: the result of combining rationality with well-informed-ness
But, like I said, this doesn't work well in normal speech because each of those words doesn't have such a clean and precise meaning in natural language. Instead, if we must choose words so that we may speak with mathematical precision, I propose we don't spend time choosing the right word to impose our definition on, and instead follow a simple rule that has helped many software developers avoid time-wasting conflicts: a construct will be given the worst name someone thinks up. The only way you can change the name of a construct is to propose a worse name. It stops the fighting about choosing the right word and lets us all get on with the discussion.
Not sure how well that will work here, but I thought it was worth mentioning.
↑ comment by timtyler · 2009-04-13T19:16:47.898Z · LW(p) · GW(p)
Rationality is surely bigger than Bayes - since it includes deductive reasoning.
↑ comment by infotropism · 2009-04-13T23:13:19.010Z · LW(p) · GW(p)
Well, Solomonoff induction and systems like AIXI are bigger than Bayes, since they use it as a part of themselves. They are intractable.
And I'd guess there's a link between those and rationality: epistemic and instrumental rationality respectively, pushed to their theoretical limits of optimality.
↑ comment by robzahra · 2009-04-13T20:46:43.488Z · LW(p) · GW(p)
this can be viewed the other way around: deductive reasoning as a special case of Bayes
↑ comment by orthonormal · 2009-04-13T21:46:19.715Z · LW(p) · GW(p)
Exactly: the special case where the conditional probabilities are (practically) 0 or 1.
↑ comment by robzahra · 2009-04-13T22:07:16.660Z · LW(p) · GW(p)
yes, exactly
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-13T22:46:33.011Z · LW(p) · GW(p)
And induction is a special case of deduction, since probability theory itself is a logic with theorems: what a given prior updates to, on given evidence, is a deductive mathematical fact.
Besides, I'm informed that I just use duction.
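(A minimal sketch of Eliezer's point, with toy numbers I've invented for illustration: once the prior and the likelihoods are fixed, the posterior follows by calculation alone, with no further observation needed.)

```python
# Assumed toy numbers: rain and wet grass.
prior_rain = 0.3          # p(rain)
p_wet_given_rain = 0.9    # p(wet | rain)
p_wet_given_dry = 0.2     # p(wet | no rain)

# The posterior is deduced from these inputs via Bayes' theorem;
# nothing empirical enters beyond the stated prior and likelihoods.
p_wet = p_wet_given_rain * prior_rain + p_wet_given_dry * (1 - prior_rain)
posterior_rain = p_wet_given_rain * prior_rain / p_wet
print(round(posterior_rain, 3))  # 0.659 -- a deductive consequence
```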
↑ comment by timtyler · 2009-04-15T01:33:56.379Z · LW(p) · GW(p)
No, see:
http://en.wikipedia.org/wiki/Problem_of_induction
↑ comment by robzahra · 2009-04-15T21:59:08.790Z · LW(p) · GW(p)
Tim: to resolve your disagreement, induction is not purely about deduction, but it nevertheless can be completely modelled by a deductive system.
More specifically, I agree with your claim about induction (see point 4 above). However, in defense of Eliezer's claim that induction is a special case of deduction, I think you can model it in a deductive system even though induction might require additional assumptions. For one thing, deduction in practice seems to me to require empirical assumptions as well (i.e., the "axioms" and "inference rules" are chosen based on how right they seem), so the fact that induction needs some axioms should not itself prevent deductive-style proofs using an appropriately formalized version of it. So, once one decides on various axioms, such as the various desiderata I list above for a Solomonoff-like system, one CAN describe via a mathematical deduction system how the process of induction would proceed. So induction can be formalized, and proofs can be made about the best thing for an agent to do; the AIXI model is basically an example of this.
↑ comment by timtyler · 2009-04-16T18:03:09.506Z · LW(p) · GW(p)
If that is a defense of induction being a special case of deduction, then it's a defense of anything being a special case of deduction - since logic can model anything.
The golden gate bridge is a special case of deduction, in this sense.
I am not impressed by the idea that induction is a special case of deduction - I would describe it as being wrong. You need extra axioms for induction. It is not the same thing at all.
↑ comment by byrnema · 2009-04-15T03:22:47.620Z · LW(p) · GW(p)
Induction tells us whether something is probable: based on past experience, we can make a prediction about the future. But applying induction to decide something is itself a deduction:
First, make the assumption that induction can be applied to infer truth. Then, apply induction. The result is a valid conclusion deduced using (1) induction and (2) the belief that you can use induction.
↑ comment by timtyler · 2009-04-15T10:04:52.188Z · LW(p) · GW(p)
To recap... induction is not a purely deductive principle - since it relies on an axiom known as "The Principle of Uniformity of Nature" - http://en.wikipedia.org/wiki/Principle_of_uniformity which states that the laws of physics are the same from place to place and that the past is a useful guide to the future.
That axiom is not available as a result of any deduction - and attempts to justify it always seem to be circular - i.e. they use induction.
According to http://en.wikipedia.org/wiki/Problem_of_induction#Ancient_origins this problem has been known about for over 2,000 years.
↑ comment by robzahra · 2009-04-15T11:55:29.097Z · LW(p) · GW(p)
It looks to me like those uniformity of nature principles would be nice but that induction could still be a smart thing to do despite non-uniformity. We'd need to specify in what sense uniformity was broken to distinguish when induction still holds.
↑ comment by jimmy · 2009-04-15T16:55:50.614Z · LW(p) · GW(p)
Right. We only assume uniformity for the same reason we assume all emeralds are green and not bleen. It's just the simpler hypothesis. If we had reason to think that the laws of physics alternated like a checkerboard, or that colors magically changed in 2012, then we'd just have to take that into account.
This reminds me of the Feynman quote "Philosophers say a great deal about what is absolutely necessary for science, and it is always, so far as one can see, rather naive, and probably wrong."
↑ comment by robzahra · 2009-04-15T22:34:02.033Z · LW(p) · GW(p)
I agree with Jimmy's examples. Tim, the Solomonoff model may have some other fine-print assumptions (see some analysis by Shane Legg here), but "the earth having the same laws as space" or "laws not varying with time" are definitely not needed for the optimality proofs of the universal prior (though of course, to your point, uniformity does make our induction in practice easier, and time and space translation invariance of physical law do appear to be true, AFAIK). Basically, assuming the universe is computable is enough to get the optimality guarantees. This doesn't mean you might not still be wrong if Mars, in empirical fact, changes the rules you've learned on Earth, but it still provides a strong justification for using induction even if you were not guaranteed that the laws were the same, until you observed Mars to have different laws, at which point you would assign the largest weight to the simplest joint hypothesis for your next decision.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-15T17:18:14.286Z · LW(p) · GW(p)
I'm afraid that you're assuming what you're trying to prove: whether you call it uniformity, or simplicity, or order, it's all the same assumption, and you do have to assume it, whatever Feynman says.
Look at it from a Bayesian point of view: if your prior for the universe is that every sequence of Universe-states is equally likely, then the apparent order of the states so far gives no weight at all to more orderly future states - in fact, no observation can change what we expect.
Incidentally I'm very confident of the math in the paragraph above, and I'd ask that you'd be sure you've taken in what I'm getting at there in your reply.
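(A sketch of that math, using a toy universe of four binary states; my own illustration, not ciphergoth's.)

```python
from itertools import product

# A flat prior: every possible 4-step binary "universe history"
# is equally likely.
universes = list(product([0, 1], repeat=4))
prior = {u: 1.0 / len(universes) for u in universes}

# Condition on an orderly-looking past: the first three states were 1.
observed = (1, 1, 1)
consistent = [u for u in universes if u[:3] == observed]

p_next_one = sum(prior[u] for u in consistent if u[3] == 1)
p_next_zero = sum(prior[u] for u in consistent if u[3] == 0)
print(p_next_one / (p_next_one + p_next_zero))
# 0.5: under the flat prior, the orderly past gives no weight at all
# to an orderly future.
```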
↑ comment by steven0461 · 2009-04-15T18:10:26.711Z · LW(p) · GW(p)
There are far more complex possible universes than simple ones, so the assumption that an individual simple possible universe is more probable than an individual complex possible universe (which is the assumption being made here) is not the same thing as the assumption that all simple universes considered together are more probable than all complex universes considered together (i.e., the assumption that the universe is probably simple). (Not saying you disagree, but it's probably good to be careful around the distinction.)
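(A toy numeric version of that distinction; the weights below are invented purely for illustration.)

```python
# Two "simple" universes, each with a large individual weight, and a
# thousand "complex" universes, each individually tiny.
n_simple, w_simple = 2, 0.10
n_complex, w_complex = 1000, 0.0008

print(w_simple > w_complex)   # True: each simple universe individually
                              # beats each complex universe
print(n_simple * w_simple)    # 0.2: total mass on simple universes
print(n_complex * w_complex)  # 0.8: complex universes dominate in aggregate
```

So a prior can favor each simple universe individually while still betting that the universe as a whole is probably complex.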
↑ comment by Paul Crowley (ciphergoth) · 2009-04-15T22:23:29.413Z · LW(p) · GW(p)
I suspect I'm going to be trying to make this point again at some point - I've had difficulty in the past explaining the problem of induction, and though I know about Solomonoff induction I only realised today that the whole problem is all about priors. I tried to be explicit about which side of the distinction you draw I was speaking on, but any thoughts on how I can make it clearer in future? Thanks!
↑ comment by robzahra · 2009-04-15T23:34:52.985Z · LW(p) · GW(p)
Ciphergoth, I agree with your points: if your prior over world-states were not induction-biased to start with, you would not be able to reliably use induction, and this is a type of circularity. Also, of course, the universe might just be such that the Occam prior doesn't make you win; there is no free lunch, after all.
But I still think induction could meaningfully justify itself, at least in a partial sense. One possible, though speculative, pathway: Suppose Tegmark is right and all possible math structures exist, and that some of these contain conscious sub-structures, such as you. Suppose further that Bostrom is right and observers can be counted to constrain empirical predictions. Then it might be that there are more beings in your reference class that are part of simple mathematical structures as opposed to complex mathematical structures, possibly as a result of some mathematical fact about your structure and how that logically inter-relates to all possible structures. This might actually make something like induction true about the universe, without it needing to be a direct assumption. I personally don't know if this will turn out to be true, nor whether it is provable even if true, but this would seem to me to be a deep, though still partially circular, justification for induction, if it is the case.
We're not fully out of the woods even if all of this is true, because one still might want to ask Tegmark "Why does literally everything exist rather than something else?", to which he might want to point to an Occam-like argument that "everything exists" is algorithmically very simple. But these, while circularities, do not appear trivial to my mind; i.e., they are still deep and arguably meaningful connections which seem to lend credence to the whole edifice. Eli discusses in great detail why some circular loops like these might be OK or necessary to use in Where Recursive Justification Hits Bottom.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-15T12:40:45.725Z · LW(p) · GW(p)
To a Bayesian, the problem of induction comes down to justifying your priors. If your priors rate an orderly universe as no more likely than a disorderly one, then all the evidence of regularity in the past is no reason to expect regularity in the future - all futures are still equally likely. Only with a prior that weights more orderly universes with a higher probability, as Solomonoff's universal prior does, will you be able to use the past to make predictions.
↑ comment by timtyler · 2009-04-15T13:42:07.413Z · LW(p) · GW(p)
More than that, surely: inductive inference is also built into Bayes' theorem itself.
Unless the past is useful as a guide to the future, the whole concept of maintaining a model of the world and updating it when new evidence arrives becomes worthless.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-15T14:23:33.541Z · LW(p) · GW(p)
inductive inference is also built into Bayes' theorem itself
As you say, Bayes' theorem isn't useful if you start from a "flat" prior; all posterior probabilities come out the same as prior probabilities, at least if A is in the future and B in the past. But nothing in Bayes' theorem itself says that it has to be useful.
↑ comment by byrnema · 2009-04-15T11:39:03.464Z · LW(p) · GW(p)
Right. In case anyone thinks this thread is an argument, it's not -- the assumption of induction would need to be added to deduce anything about the empirical world. The definition above didn't say how deductions would be made... You just make assumptions and then keep track of what your conclusions would be given those assumptions (that's deduction). I'm not sure if we could or would start listing the assumptions. I made the mistake of including (1), which is the only explicit assumption, but AndySimpson and AlexU have pointed out that elevating that assumption is empiricism.
↑ comment by robzahra · 2009-04-13T22:58:39.892Z · LW(p) · GW(p)
agreed, drawing hands
↑ comment by timtyler · 2009-04-15T02:00:56.976Z · LW(p) · GW(p)
By "Bayes" I meant this: http://en.wikipedia.org/wiki/Bayes'_theorem - a formalisation of induction.
If you think "Bayes" somehow includes deductive reasoning, can you explain whether it supposedly encapsulates first-order logic or second-order logic?
↑ comment by robzahra · 2009-04-16T01:04:40.404Z · LW(p) · GW(p)
I think we're probably using some words differently, and that's making you think my claim that deductive reasoning is a special case of Bayes is stronger than I mean it to be.
All I mean, approximately, is:
Bayes' theorem: p(B|A) = p(A|B) * p(B) / p(A)
Deduction : Consider a deductive system to be a set of axioms and inference rules. Each inference rule says: "with such and such things proven already, you can then conclude such and such". And deduction in general then consists of recursively turning the crank of the inference rules on the axioms and already generated results over and over to conclude everything you can.
Think of each inference rule "i" as i(A) = B, where A is some set of already established statements and B corresponds to what statements "i" lets you conclude, if you already have A.
Then, by deduction we're just trying to say that if we have generated A, and we have an inference rule i(A) = B, then we can generate or conclude B.
The connection between deduction and Bayes is to take the generated "proofs" of the deductive system as those things to which you assign probability of 1 using Bayes.
So, the inference rule corresponds to the fact that p(B | A) = 1. The fact that A has been already generated corresponds to p(A) = 1. Also, since A has already been generated independently of B, p(A | B) = 1, since A didn't need B to be generated. And we want to know what p(B) is.
Well, plugging into Bayes:
p(B|A) = p(A|B) * p(B) / p(A)
i.e., 1 = 1 * p(B) / 1
i.e., p(B) = 1.
In other words, B can be generated, which is what we wanted to show.
So basically, I think of deductive reasoning as just reasoning with no uncertainty, and I see that as popping out of Bayes in the limiting case. If a certain formal interpretation of this leads me into Gödelian problems, then I would just need to weaken my claim somewhat, because some useful analogy is clearly there in how the uncertain reasoning of Bayes reduces to certain conclusions in various limits of the inputs (p=0, p=1, etc.).
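(A minimal sketch of that limiting case, plugging the 0/1 inputs from the derivation above into Bayes' theorem; my own toy code, not robzahra's.)

```python
def bayes_generated(p_b_given_a, p_a_given_b, p_a):
    # Rearranging Bayes' theorem: p(B) = p(B|A) * p(A) / p(A|B)
    return p_b_given_a * p_a / p_a_given_b

# Inference rule i(A) = B gives p(B|A) = 1; A already generated gives
# p(A) = 1; and A's generation didn't depend on B, so p(A|B) = 1.
print(bayes_generated(1.0, 1.0, 1.0))  # 1.0: B is generated with certainty
```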
↑ comment by timtyler · 2009-04-16T17:37:22.383Z · LW(p) · GW(p)
I think I would describe what you are talking about as Bayesian statistics - plus a whole bunch of unspecified rules (the "i"s).
What I was saying is that there isn't a standard set of rules of deductive reasoning axioms that is considered to be part of Bayesian statistics. I would not dispute that you can model deductive reasoning using Bayesian statistics.