In terms of whether to take your complaints about philosophy seriously, I mean.
Does it matter that you've misstated the problem of induction?
I wish this were separated into two comments, since I wanted to downvote the first paragraph and upvote the second.
Glad someone mentioned that there is good reason Scott Adams is not considered a paradigm rationalist.
For anyone interested in wearing Frodo's ring around their neck: http://www.myprecious.us/
I guess this raises a different question: I've been attempting to use my up and down votes as a straight expression of how I regard the post or comment. While I can't guarantee that I am never inadvertently drawn into corrective voting (where I attempt to bring a post or comment's karma in line with where I think it should be, either in an absolute sense or relative to another post), it seems as though corrective voting is your conscious approach.
What are the advantages/disadvantages of the two approaches?
I voted this down, and the immediate parent up, because recognizing one's errors and acknowledging them is worthy of Karma, even if the error was pointed out to you by another.
That puts people with a great deal of Karma in a much better position with respect to Karma gambling. You could take us normal folk all-in pretty easily.
I mean, I don't know if "woody" or "dry" are the right words, in terms of whether they invoke the "correct" metaphors. But, the point is that if you have vocabulary that works, it can allow you to verbalize without undermining your underlying ability to recognize the wine.
I think the training with the vocabulary actually augments verbally mediated recall, rather than turning off the verbal center, but I'm not sure of the mechanism by which it works.
For the most part I think that starts to address it. At the same time, on your last point, there is an important difference between "this is how fully idealized rational agents of a certain sort behave" and "this is how you, a non-fully idealized, partially rational agent should behave, to improve your rationality".
Someone in perfect physical condition (perfect not just for a human, but for an idealized physical being) has a different optimal workout plan from mine, and we should plan differently for various physical activities, even if this person is the ideal towards which I am aiming.
So if we idealize our Bayesian models too much, we open up the question: "How does this idealized agent's behavior relate to how I should behave?" It might be that, were we to design rational agents, it would make sense to use these idealized reasoners as models, but if the goal is personal improvement, we need some way to explain what one might call the Kantian inference from "I am an imperfectly rational being" to "I ought to behave the way such-and-such a perfectly rational being would".
I am thinking more like this: I am a scaredy-cat about roller coasters, so I prefer the Tea Cups to Big Thunder Mountain Railroad. And I maintain that preference after choosing the Tea Cups (I don't regret my decision). However, had I ridden Big Thunder Mountain Railroad, I would have been able to appreciate that it is awesome, and would have preferred it to the Tea Cups.
Since this case seems pretty possible, if the sorts of lessons you are going to draw only apply to hyper-idealized agents who know all their preferences perfectly and whose preferences are stable over time, that is a good thing to note, since the lessons may not apply to those of us with dynamic preference sets.
From what I've read, one needs to train oneself on paradigm cases. So, for example, with wine tasting, you develop your verbal acuity by learning how to describe fairly ordinary wines.
I don't know how to port this strategy over to verbal acuity for rationality.
I agree; however, the definition of preferring A to B that he gave was choosing A over B (and if we don't specify that A and B must be total world-states, then it would turn out that I prefer Mexican to Italian because I chose Mexican over Italian). Psy-Kosh's comment above explains why that isn't what he meant.
That takes care of the first concern, but not necessarily the second one.
I guess we find out how to acquire verbal expertise in a given domain, and do so for rationality, reasoning, and inference.
That's what it means to prefer something: if you prefer A over B, you'd give up situation B to gain situation A. You want situation A more than you want situation B.
I don't want this to devolve into an argument about precisely how to talk about preferences, but I think this is a more substantive assumption than you are taking it to be. If I prefer going to the Italian restaurant to going to the Mexican restaurant, I might still choose the Mexican restaurant over the Italian restaurant, because of the preferences of others.
It seems like you are also glossing over the possibly important difference between what I prefer when choosing and what I would have preferred had I chosen differently.
It depends. Sometimes it will be sight or our other senses, sometimes it will be memory, sometimes it will be testimony.
Think about it this way: we take in information all the time and draw conclusions from it. "Sight" isn't playing a key role in face recognition except by providing the data; you have a mental program for matching visual face data to previous visual face data, and that program gets screwed up if you start thinking through a description of the face after you see it.
Similarly, you see a room full of objects and events. You've got one or more "draw conclusions" programs that run on the data you see, and those programs can get screwed up by putting into words things that you don't normally verbalize.
The data on insight puzzles shows that if you do manage to draw the right conclusions, and you try to put into words how you did it, you may get screwed up in the following way: you are confident in explanation A for how you drew the conclusion, when, in actuality, the truth is a radically different explanation B.
My claim isn't about rationality recognition per se, it is simply this: psychology has shown that verbalizing can screw us up when dealing with a process that isn't normally done verbally. And a lot (if not most) of our inferential processes are not done in this explicitly verbalized manner (verbalized doesn't necessarily mean spoken aloud, but just 'thinking through in words').
My claim is that there are known ways to get good at verbalizing non-verbal processes, and they involve training on paradigmatic cases. It is only after such training that one can start thinking about edge cases and the borderlands without worrying that the process of discussing the cases is corrupting one's thinking about them.
Before we can advance rationality by discussion, we must first learn to discuss rationality.
I think the question about which cases to focus on when forming theories is different from the question of which cases to use to train oneself to verbalize one's thoughts without interfering with one's thinking. The latter requires us to train on paradigms, the former may be something we can pursue in either direction.
This is crucial: The thought isn't to presuppose which direction our theorizing should go, but rather to make sure that when we theorize, we aren't tripping ourselves up.
The Verbal Overshadowing effect, and how to train yourself to be a good explicit reasoner.
Someone could start a thread, I guess.
Given the problems for the principle of indifference, a lot of Bayesians favor something more "subjective" with respect to the rules governing appropriate priors (especially in light of Aumann-style agreement theorems).
I'm not endorsing this maneuver, merely mentioning it.
Apologies for the misunderstanding.
Often, when someone says, "Is it because A? or is the issue B?" they intend to be suggesting that the explanation is either A or B.
I realize this is not always the case, but I (apparently incorrectly) assumed that you were suggesting those as the possible explanations.
What makes it a crutch?
The Implications of Saunt Lora's Assertion for Rationalists.
For those who are unfamiliar, Saunt Lora's Assertion comes from the novel Anathem, and expresses the view that there are no genuinely new ideas; every idea has already been thought of.
A lot of purportedly new ideas can be seen as, at best, a slightly new spin on an old idea. The parallels between Leibniz's views on the nature of possibility and Arnauld's objection, on the one hand, and David Lewis's views on the nature of possibility and Kripke's objection, on the other, are but one striking example. If there is anything to the claim that we are, to some extent, stuck recycling old ideas, rather than genuinely/interestingly widening the range of views, it seems as though this should have some import for rationalists.
I should note, this explanation for why there is a disparity between how much we attend to the two issues does not make any assumptions about the degree to which we should be attending to either issue, which is a different question entirely.
That seems to be a false dichotomy. The first option implicitly condones a lack of concern for racial balance and implies that gender is not a social construct; the second assumes that there is widespread sensitivity over the issue of racial balance.
More likely, issues of gender interaction are more salient for members of the community than issues of racial interaction, leading us to focus on the former and overlook the latter.
I suppose rather than just asking a rhetorical question, I should advocate for publicizing one's plans. So:
It is far too easy to let oneself off the hook, and to accept excuses from oneself that one would not be willing to offer to others. For instance, someone who plans to work out three times a week might fail, and let himself off the hook because the week was relatively busy, even though he would not be willing to offer "It was a moderately busy week" as an excuse when another person asked why he didn't exercise three times that week.

On the other hand, the genuinely good excuses are the ones that we are willing to offer up: "I broke my leg", "A family member fell ill", etc. So, for whatever reason, the excuses we are willing to rely on publicly do a better job of tracking legitimate reasons to alter plans.

Thus, whenever one is trying to effect a change in one's life, it seems good to rely on one's own desire not to be embarrassed in front of one's peers, as this gives more motivation to stick to one's plans. This motivation seems to be, if anything, heightened when the group is one that is specifically attending to whether you are making progress on the goal in question (for instance, if the project is about rationality, this community will be especially attuned to the progress of its members).
So, our rationality "to do" lists should be public, and (to echo something I imagine Robin Hanson would point out) so should our track records at accomplishing the items on them.
Epistemic rationality alone might be well enough for those of us who simply love truth (who love truthseeking, I mean; the truth itself is usually an abomination)
What motivation is there to seek out an abomination?
Presumably the position mentioned is simply that one can value truth without valuing particular truths in the sense that you want them to be true. It might be true that an earthquake will kill hundreds, but I don't love that an earthquake will kill hundreds.
The main danger for LW is that it could remain rationalist-porn for daydreamers.
I think this is a bit more accurate.
Why not determine publicly to fix it?
I agree. Have a karma-based limit under a certain threshold; then, above that threshold, free rein.
I sense a bout of Deism coming on from our creator/sustainer.
I thought the point was to limit people's ability to downvote. Wouldn't that be a reason not to change the threshold?
You can also induce, from which incentives you seem to respond to, how to increase the probability that you will do B. For instance, if telling your friends that you plan to do a project correlates highly with your doing that project, then you can increase the probability that you will do B by telling your friends that you plan to do B.
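As a toy sketch of that inductive step (the track record and all numbers below are invented, purely for illustration), one could compare one's completion rate for announced vs. unannounced projects:

```python
# Toy illustration with invented track-record data: each entry records
# whether a past project was announced to friends and whether it got done.
track_record = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

def completion_rate(announced: bool) -> float:
    """Fraction of past projects completed, among those with this announcement status."""
    outcomes = [done for told, done in track_record if told == announced]
    return sum(outcomes) / len(outcomes)

print(f"P(complete | announced) ~ {completion_rate(True):.2f}")   # 0.75 on this toy data
print(f"P(complete | silent)    ~ {completion_rate(False):.2f}")  # 0.25 on this toy data
```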
I think I may have been too brief/unclear, so I am going to try again:
The fallacy of sunk costs is, in some sense, to count the fact that you have already expended costs on a plan as a benefit of that plan. So, no matter how much it has already cost you to pursue project A, avoiding the fallacy means treating the decision about whether to continue pursuing A, or to pursue B (assuming both projects have equivalent benefits) as equivalent to the question of whether there are more costs remaining for A, or more costs remaining for B.
The closest thing to relevance that induction offers here is telling us how to convert our evidence into predictions about the remaining costs of the projects. This doesn't conflict, because induction tells us only that, if projects like A tend to get a lot harder from the point you are at, then your current project is likely to get a lot harder from the point you are at.
There just isn't a conflict there.
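To make the division of labor concrete, here is a minimal sketch (the projects and cost figures are invented): sunk costs appear nowhere in the non-fallacious comparison, while induction enters only through the estimates of remaining cost.

```python
# Invented cost figures for illustration; benefits assumed equivalent.
# Sunk costs are recorded but never consulted by the rational rule;
# induction shows up only in the *estimates* of remaining cost.
projects = {
    "A": {"sunk_cost": 90, "estimated_remaining_cost": 40},
    "B": {"sunk_cost": 0, "estimated_remaining_cost": 25},
}

def choose(projects: dict) -> str:
    # Rational rule: compare only the costs still to come.
    return min(projects, key=lambda p: projects[p]["estimated_remaining_cost"])

def choose_fallaciously(projects: dict) -> str:
    # Sunk cost fallacy: treat costs already paid as a reason to continue.
    return max(projects, key=lambda p: projects[p]["sunk_cost"])

print(choose(projects))               # "B": less cost remains
print(choose_fallaciously(projects))  # "A": "we've come too far to quit"
```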
Am I wrong, or are you conflating disregarding past costs in evaluating costs and benefits with failing to remember past costs when making predictions about future costs and benefits?
It seems pretty clear that the sunk cost consideration is that past costs don't count toward how much it would now cost you to pursue Vendor A vs. Vendor B, while induction requires you to think, "Every time we go with Vendor A, he messes up, so if we go with Vendor A, he will likely mess up again."
What's the conflict?
http://4.bp.blogspot.com/_dzzZwXftwcg/R7nmF_bpsQI/AAAAAAAACKo/Os0WrGbEguo/s400/remember-santa.jpg
Edited to link to accessible image.
Not sure I disagree with your position, but I voted down because simply stating that your opponent is wrong doesn't seem adequate.
I didn't think that one had to. That is what your challenge to the theist sounded like. I think that religious language is coherent but false, just like phlogiston or caloric language.
Denying that the theist is even making an assertion, or that their language is coherent is a characteristic feature of positivism/verificationism, which is why I said that.
Oh, that's a good point. I was assuming Aristotle was commending people who could hear it without coming to believe it, but it could easily be that he is commending people who diminish their belief rapidly, and acquire a state of mere apprehension.
You say: "You're right; yet no one ever sees it this way. Before Darwin, no one said, 'This idea that an intelligent creator existed first doesn't simplify things.'"
I may have to look up where it gets argued, but I am pretty sure people challenged that before Darwin.
I don't think making a move towards logical positivism or adopting a verificationist criterion of meaning would count as a victory.
This reminds me of a Peter Geach quote: "The moral philosophers known as Objectivists would admit all that I have said as regards the ordinary uses of the terms good and bad; but they allege that there is an essentially different, predicative use of the terms in such utterances as pleasure is good and preferring inclination to duty is bad, and that this use alone is of philosophical importance. The ordinary uses of good and bad are for Objectivists just a complex tangle of ambiguities. I read an article once by an Objectivist exposing these ambiguities and the baneful effects they have on philosophers not forewarned of them. One philosopher who was so misled was Aristotle; Aristotle, indeed, did not talk English, but by a remarkable coincidence ἀγαθός had ambiguities quite parallel to those of good. Such coincidences are, of course, possible; puns are sometimes translatable. But it is also possible that the uses of ἀγαθός and good run parallel because they express one and the same concept; that this is a philosophically important concept, in which Aristotle did well to be interested; and that the apparent dissolution of this concept into a mass of ambiguities results from trying to assimilate it to the concepts expressed by ordinary predicative adjectives."
"Never believe a thing simply because you want it to be true." - Diax
That is an interesting contrast with Spinoza's view that all ideas enter the mind as beliefs, and that mere apprehension is achieved by diminishing something about the idea believed.
Eliezer, does your respect for Aumann's theorem incline you to reconsider, given how many commenters think you should thoroughly prepare for this debate?
I believe that linguists would typically claim that it is formed by legitimate rules of English syntax, but point out that there might be processing constraints on humans that eliminate some syntactically well-formed sentences from the category of grammatical sentences of English.
This post reminds me of Aristotle's heuristics for approaching the mean when one tends towards the extremes:
"That moral virtue is a mean, then, and in what sense it is so, and that it is a mean between two vices, the one involving excess, the other deficiency, and that it is such because its character is to aim at what is intermediate in passions and in actions, has been sufficiently stated. Hence also it is no easy task to be good. For in everything it is no easy task to find the middle, e.g. to find the middle of a circle is not for every one but for him who knows; so, too, any one can get angry- that is easy- or give or spend money; but to do this to the right person, to the right extent, at the right time, with the right motive, and in the right way, that is not for every one, nor is it easy; wherefore goodness is both rare and laudable and noble.
Hence he who aims at the intermediate must first depart from what is the more contrary to it, as Calypso advises-
Hold the ship out beyond that surf and spray.
For of the extremes one is more erroneous, one less so; therefore, since to hit the mean is hard in the extreme, we must as a second best, as people say, take the least of the evils; and this will be done best in the way we describe. But we must consider the things towards which we ourselves also are easily carried away; for some of us tend to one thing, some to another; and this will be recognizable from the pleasure and the pain we feel. We must drag ourselves away to the contrary extreme; for we shall get into the intermediate state by drawing well away from error, as people do in straightening sticks that are bent." (NE, II.9)
"Morals excite passions, and produce or prevent actions. Reason of itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason." - David Hume
Note about the selection of this quote: While I am not inclined towards the position that reason is (and ought to be) slave to the passions, I considered this a good quote on the topic of rationality because it concisely presents one of the most fundamental challenges for rationalism as such.
"Even in the games of children there are things to interest the greatest mathematician." G.W. Leibniz