Beginning at the Beginning
post by Annoyance · 2009-03-11T19:23:23.651Z · LW · GW
I can't help but notice that some people are using some very peculiar and idiosyncratic meanings for the word 'rational' in their posts and comments. In many instances, the correctness of rationality is taken for granted; in others, the process of being rational is not only ignored but dispensed with altogether, and rational is defined as 'that which makes you win'.
That's not a very useful definition. If I went to someone looking for help selecting between options, and was told to choose "the best one", or "the right one", or "the one that gives you the greatest chance of winning", what help would I have received? If I had clear ideas about how to determine the best, the right, or the one that would win, I wouldn't have come looking for help in the first place. The responses provide no operational assistance.
There is a definite lack of understanding here of what rationality is, much less why it is correct, and this general incomprehension can only cripple attempts to discuss its nature or how to apply it. We might expect this site to try to dispel the fog surrounding the concept. Remarkably, a blog established for "refining the art of human rationality" neither explains nor defines rationality.
Those are absolutely critical goals if Less Wrong is to accomplish what it advertises itself as attempting. So let's try to reach them.
The human mind is at once extremely sophisticated and shockingly primitive. Most of its operations take place beneath the level of explicit awareness; we don't know how we reach conclusions and make decisions; we're merely presented with the results along with an emotional sense of rightness or confidence.
Despite these emotional assurances, we sometimes suspect that such feelings are unfounded. Careful examination shows that to be precisely the case. We can and do develop confidence in results, not because they are reliable, but for a host of other reasons.
Our approval or disapproval of some properties can cross over into our evaluation of others. We can fall prey to shortcuts while believing that we've been thorough. We tend to interpret evidence in terms of our preferences, perceiving what we want to perceive and screening out evidence we find inconvenient or uncomfortable. Sometimes, we even construct evidence out of whole cloth to support something we want to be true.
It's very difficult to detect these flaws in ourselves as we commit them. It is somewhat easier to detect them in others, or in hindsight while reflecting on past decisions in which we are no longer strongly emotionally invested. Without knowing how our decisions are reached, though, we're helpless in the face of the impulses and feelings of the moment, even while we remain ultimately skeptical about how our judgment functions.
So how can we try to improve our judgment if we don't even know what it's doing?
How did Aristotle establish the earliest-known examination of the principles of justification? If he originated the foundation of the systems we know as *logic*, how could that be accomplished without the use of logic?
As Aristotle noted, the principles he made into a set of formal rules already existed. He observed the arguments of others, noting how people defended positions and attacked the positions of others, and how certain arguments had flaws that could be pointed out while others seemed to possess no counters. His attempts to organize people's implicit understandings of the validity of arguments led to an explicit, formal system. The principles of logic were implicit before they were understood explicitly.
The brain is capable of performing astounding feats of computation, but our conscious grasp of mathematics is emulated and lacks the power of the system that creates it. We can intuitively comprehend how a projectile will move from just a glimpse of its trajectory, although explicitly solving the differential equations that describe that motion is terrifically difficult, and virtually impossible to accomplish in real time. Yet our explicit grasp of mathematics makes it possible for us to solve problems and comprehend ideas completely beyond the capacity of our hunter-gatherer ancestors, even though the processing power of our brains does not appear to have changed since those early days.
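(For concreteness, here is a standard textbook formulation, not from the original post, of projectile motion with quadratic air drag: a coupled pair of nonlinear ordinary differential equations with no closed-form solution, describing the kind of motion we nevertheless track intuitively. Here m is the projectile's mass, g gravitational acceleration, and c an assumed constant drag coefficient:)

\[
m\ddot{x} = -c\,\dot{x}\sqrt{\dot{x}^{2}+\dot{y}^{2}}, \qquad
m\ddot{y} = -mg - c\,\dot{y}\sqrt{\dot{x}^{2}+\dot{y}^{2}}
\]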
In the same way, our models of what proper thought means give us options and opportunities far beyond what our intuitive, unconscious reasoning makes possible, even though our conscious understanding works with far fewer resources than the unconscious.
When we consciously and deliberately model the evolution of one statement into another according to elementary rules that make up the foundation of logical consistency, something new and exciting happens. The self-referential aspects of that modeling permit us to check the decisions presented to us by the parts of our minds beneath the threshold of our awareness against those rules, and to override them. We can evaluate our own evaluations, reaching conclusions that our emotions don't lead us to and rejecting some of those that they do.
That's what rationality is: having explicit and conscious standards of validity, and applying them in a systematic way. It doesn't matter if we possess an inner conviction that something is true - if we can't demonstrate that it can be generated from basic principles according to well-defined rules, it's not valid.
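(As a minimal illustration of what such a "well-defined rule" looks like, here is a standard inference rule; the example is generic, not drawn from the original post:)

\[
\frac{P \rightarrow Q \qquad P}{Q} \quad \text{(modus ponens)}
\]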
What makes this so interesting is that it's self-correcting. If we observe an empirical relationship that our understanding doesn't predict, we can treat it as a new fact. For example, let's say that we find that certain manipulations of tarot decks permit us to predict the weather, even though we have no idea of why the two should be correlated at all. With rationality, we don't need to know why. Once we've recognized that the relationship exists, it becomes rational for us to use it. Likewise, if a previously-useful relationship suddenly ceases to be, even though we have no theoretical grounds for expecting that to happen, we simply acknowledge the fact. Once we've done so, we can justify ignoring that which we previously considered to be evidence.
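The procedure described above, using an unexplained correlation for exactly as long as it keeps paying off, can be sketched in code. This is only an illustrative sketch, not anything from the post: the class name, the predict_fn parameter (e.g. a tarot-reading routine), and the accuracy threshold are all hypothetical choices.

```python
from collections import deque

class EmpiricalPredictor:
    """Use an unexplained predictor only while its track record holds up."""

    def __init__(self, predict_fn, baseline_rate=0.5, window=100, margin=0.1):
        self.predict_fn = predict_fn        # e.g. a tarot-reading routine (hypothetical)
        self.baseline_rate = baseline_rate  # accuracy of guessing without the predictor
        self.margin = margin                # how much better than baseline it must stay
        self.history = deque(maxlen=window) # rolling record of hits and misses

    def predict(self):
        return self.predict_fn()

    def record(self, prediction, outcome):
        # Track only whether the prediction matched the outcome;
        # no causal story about *why* is required.
        self.history.append(prediction == outcome)

    def still_worth_using(self):
        if len(self.history) < self.history.maxlen:
            return True  # not yet enough evidence to reject the predictor
        hit_rate = sum(self.history) / len(self.history)
        # Drop the predictor as soon as the correlation stops holding.
        return hit_rate > self.baseline_rate + self.margin
```

The point the sketch makes is the same as the paragraph's: the decision to keep or drop the predictor rests entirely on its observed hit rate, not on any theory of why it works.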
Human reasoning is especially plagued by superstitions, because it's easy for us to accept contradictory principles without acknowledging the inconsistency. But when we're forced to construct step-by-step justifications for our beliefs, contradiction is thrown into sharp relief, and can't be ignored.
Arguments that are not made explicitly, with conscious awareness of how each point is derived from fundamental principles and empirical observations, may or may not be correct. But they're never rational. Rational reasoning does not guarantee correctness; rational choice does not guarantee victory. What rationality offers is self-knowledge of validity. If rational standards are maintained when thinking, the best choice as defined by the knowledge we possess will be made. Whether it will be best when we gain new knowledge, or in some absolute sense, is unknown and unknowable until that moment comes.
Yet those who speak here often of the value of human rationality frequently don't do so by rational means. They make implicit arguments with hidden assumptions and do not acknowledge or clarify them. They emphasize the potential for rationality to bootstrap itself to greater and greater levels of understanding, yet don't concern themselves with demonstrating that their arguments arise from the most basic elements of reason. Rationality starts when we make a conscious attempt to understand and apply those basic elements, to emulate in our minds the principles that make the existence of our minds possible.
Are we doing so?
60 comments
Comments sorted by top scores.
comment by abigailgem · 2009-03-12T13:19:15.276Z · LW(p) · GW(p)
In addition, there are issues where it is not possible to be rational. In choosing goals, one cannot always be rational: the emotional response decides the goal. One can be rational in choosing ways of achieving that goal, or in making the map fit the territory.
EDIT: As I have been voted down, I will provide an example. I am transsexual. I decided it was "rational" to attempt to live as a man, and arguably it is: and yet I could not, and the most important thing for me was to change my presentation. I cannot assess that goal "rationally": it means I cannot reproduce, it makes it more likely for me to be seen as a weirdo, it has been terribly difficult to achieve. And yet it was the most important thing in my life.
Replies from: HughRistik, thomblake
↑ comment by HughRistik · 2009-03-13T17:42:26.593Z · LW(p) · GW(p)
abigailgem,
I'm not sure that your goal was irrational. You say that it had certain disadvantages (e.g. not being able to reproduce, being seen as a weirdo, being difficult...). You also say that it was the most important thing in your life.
Even though that calculation is probably very emotionally complex, it seems like your decision was the rational one, if those were indeed the pros and cons.
And I agree with you that we cannot always rationally choose goals. Some of our goals (even including rational goals such as truth-seeking) are hardwired by evolution, prenatal development, or formative experiences and are difficult to change.
For example, I have a major sweet tooth. This pushes me towards a goal of eating lots of sugar, which is not a predisposition that I would necessarily choose (at least, not so strongly), if I had a choice over what food my body likes.
↑ comment by thomblake · 2009-03-12T13:52:12.285Z · LW(p) · GW(p)
As Aristotle would say, your emotions are an important part of rationality. A virtuous person feels the right emotion in response to events, and can simply 'follow his gut' to do what's right.
Which is why it's important to follow moral exemplars and build virtue - to make sure you have good character before you make a life-changing, irreversible decision.
comment by HughRistik · 2009-03-11T22:56:01.479Z · LW(p) · GW(p)
I've also been noticing that there are a bunch of notions of "rationality" being thrown around. This is because "rationality" is used to describe different domains of thought, and in each of these domains, the term has fundamentally different meanings (some of which you were getting at). Here are at least some of them:
Deductive rationality: This one is relatively easy. When talking about the rationality of a deductive argument, we mean that the argument is valid (the conclusions follow from the premises), and we often also mean that the premises are true.
Inductive rationality: This type of rationality governs empirical claims. There are various controversies about when inductive arguments are rational or not, which I won't get into but which are well explored in the philosophy of science.
Moral rationality: What does it mean to say that a moral argument is rational? Having the argument be deductively valid would be a necessary condition, but might not be sufficient. Whether you are a moral skeptic or not, the truth conditions for moral prescriptions are different from the truth conditions of descriptive claims about the world.
Pragmatic rationality: This is the rationality of an agent's actions relative to that agent's goals.
Lack of emotion: If someone is being emotional, it doesn't necessarily mean their arguments lack the forms of rationality above. This is the one usage of the term "rationality" that is arguably bogus, since rationality is at best moderately correlated with lack of emotionality.
In short, when we say:
- "Your mathematical proof is rational"
- "Your scientific theory is rational"
- "Human rights are rational"
- "Your decisions are rational"
- "Calm down and be rational!"
the notion of rationality invoked in each of these statements is different.
But how different? Is there a generalized mode of thinking that leads to rationality in all these domains (except, perhaps, lack of emotion, since I think that's a bogus notion of rationality)?
Deductive and inductive reasoning are indeed distinct, yet moral and pragmatic reasoning, once you accept certain premises about the rightness or effectiveness of particular actions, reduce to deduction and induction.
Deduction and induction themselves have similarities, in that both involve the consistent application of rules and principles, though in the case of induction those rules are more complex and debatable. So your definition of rationality, "having explicit and conscious standards of validity, and applying them in a systematic way," is at least necessary, but not quite sufficient. As others have observed, more elaboration on the actual nature of those standards of validity is needed. For example, a religion might involve explicit/conscious/systematic standards of validity, some of which come from divine revelation. Yet we should hesitate to label it rational, since those standards are suspect: the divine revelation they rest on fails to pass explicit/conscious/systematic standards of validity.
A good start would be to make your requirement of explicit/conscious/systematic standards recursive, though then we would need a way to avoid infinite regress.
Replies from: jimrandomh, thomblake
↑ comment by jimrandomh · 2009-03-12T02:37:25.268Z · LW(p) · GW(p)
HughRistik wrote:
Deductive rationality: When talking about the rationality of a deductive argument, we mean that the argument is valid (the conclusions follow from the premises), and we often also mean that the premises are true.
I disagree with giving rationality this definition. The word you have defined here is "sound". Having an untrue premise or having an invalid deductive step means that an argument is unsound, but it doesn't necessarily mean that it's irrational. An argument may be rational but mistaken, provided (1) the argument was made in good faith, (2) reasonable effort (but not necessarily extensive effort) was put into avoiding mistakes like the one that was made, and (3) the argument is withdrawn when the error is pointed out.
Replies from: HughRistik
↑ comment by HughRistik · 2009-03-12T18:28:44.859Z · LW(p) · GW(p)
I'm trying to observe what people are using "rational" to mean. I agree with you that an argument can be rational even if the premises are false, as long as they are not known to be false by the arguer.
Replies from: Cameron_Taylor
↑ comment by Cameron_Taylor · 2009-03-12T18:55:30.519Z · LW(p) · GW(p)
I'd be more comfortable leaving off the "are not known to be false by the arguer". In this context, where we are evaluating an argument as rational, it's simpler to leave off subjective states of the arguer. That saves all sorts of messy complications regarding motive and uncertainty. Making a rational argument from premises that are known to
↑ comment by thomblake · 2009-03-11T23:01:01.124Z · LW(p) · GW(p)
though then we would need a way to avoid infinite regress.
You don't always need a way to avoid infinite regress. It's one of the skeptical problems of justification, but as long as you're trying to justify things there's no good reason to prefer eliminating that one rather than any of the others.
Replies from: HughRistik
↑ comment by HughRistik · 2009-03-12T19:14:56.079Z · LW(p) · GW(p)
I don't think infinite regress is an insurmountable problem for the notion that beliefs are rational when they can be justified recursively by rules. I want to acknowledge the skeptical problem of justification, but try to describe rationality in a way that belief systems including something like causation are rational, while belief systems like intelligent design are not.
Replies from: Cameron_Taylor
↑ comment by Cameron_Taylor · 2009-03-12T19:31:58.105Z · LW(p) · GW(p)
I want to acknowledge the skeptical problem of justification, but try to describe rationality in a way that belief systems including something like causation are rational, while belief systems like intelligent design are not.
Am I understanding you correctly? You describe belief systems like intelligent design as irrational by the very definition of the word rational?
Replies from: Annoyance, HughRistik
↑ comment by Annoyance · 2009-03-12T20:36:03.825Z · LW(p) · GW(p)
"You describe belief systems like intelligent design as irrational by the very definition of the word rational?"
That's how definitions work, buddy. They indicate what words mean, and that has consequences for how they're used.
A problematic use of definitions would be to declare intelligent design to be irrational by definition - that is, by the definition of 'intelligent design', not of 'rationality'.
↑ comment by HughRistik · 2009-03-12T20:19:48.026Z · LW(p) · GW(p)
I realized that possible interpretation as I was posting, and I tried to head it off by using the word "describe" rather than "define."
I think that the definition of rationality is an open question, though I'm sure it has properties which intelligent design violates. For instance, I would say that empirical arguments failing Occam's Razor, such as intelligent design, are not rational.
Basically, there are a bunch of notions of rationality (some of which are competing), such that belief in something like causation is rational though unprovable, belief in failures by Occam's Razor is irrational, radical skepticism is irrational though irrefutable, logical fallacies and other deductive errors are irrational, etc... I think we are looking for a unifying description of rationality that encompasses all of them, if such a thing exists.
comment by jimmy · 2009-03-11T21:35:17.502Z · LW(p) · GW(p)
"That's what rationality is: having explicit and conscious standards of validity, and applying them in a systematic way. It doesn't matter if we possess an inner conviction that something is true - if we can't demonstrate that it can be generated from basic principles according to well-defined rules, it's not valid."
What you're talking about here is "system 1" vs "system 2" (http://www.overcomingbias.com/2006/11/why_truth_and.html)
You need to make a distinction between "we can't generate this from math because we're stupid" and "we can't generate this from math because the math gives a different answer". If your car isn't running well, it's good to look under the hood, but you have to learn the difference between "I don't know how this thing works" and "It's broken because the ignition coil is unhooked"
While "system 2" may give better results in general than "system 1", it is an oversimplification to decide we only need system two. Ignoring system one seems to be one of, if not the most common way for a wannabe rationalist to shoot himself in the foot.
If you ask someone who isn't a wannabe rationalist why they aren't, a very common response is "you need some emotions" or "being rational doesn't always give the right answer". This seems to come partly from seeing this error, but then oversimplifying and erring on the other side.
Replies from: pjeby
↑ comment by pjeby · 2009-03-11T22:22:21.543Z · LW(p) · GW(p)
I think you mean: http://www.overcomingbias.com/2006/11/why_truth_and.html
comment by billswift · 2009-03-13T05:40:54.566Z · LW(p) · GW(p)
"Rationality, as most broadly understood, is characterized by four dispositions: suspicion toward received authority, a commitment to continually refining one's own understanding, a receptivity toward new evidence and alternative explanatory schemes, and a dedication to logical consistency. The whole point of calling a belief rational is not that it is guaranteed to be true but only that it is arrived at through a process of inquiry that allows the inquirers to be so oriented to evidence that they can change their beliefs in ways that make it more likely that they are true. This sense of cognitive progress - a sense that through science we have learned how to learn from our experience so that we can give up erroneous views - is how the scientific worldview has traditionally been distinguished from prescientific worldviews.
. . .
"All beliefs to which we consent should be backed not by authority but by appropriate and independent evidence (independent not of all presuppositions, which is impossible, but independent of the belief in question). Scientific rationality in social life is basically a demand that social authority be backed not by power but by reason."
Meera Nanda, "The Epistemic Charity of the Social Constructivist Critics of Science and Why the Third World Should Refuse the Offer"
in Noretta Koertge (editor), A House Built on Sand: Exposing Postmodernist Myths About Science
Replies from: Annoyance
↑ comment by Annoyance · 2009-03-13T14:27:52.099Z · LW(p) · GW(p)
"Rationality, as most broadly understood, is characterized by four dispositions:"
I strongly suspect that some of those dispositions logically follow from the others: once a few are adopted, the rest can be derived from them.
Otherwise, I strongly agree with you.
comment by Patrick · 2009-03-12T02:41:01.044Z · LW(p) · GW(p)
The main distinction that seems to crop up is between rational reasoning and rational actions. Rational actions are about following an optimal strategy, "winning", whereas rational reasoning is about being able to reliably generate optimal strategies.
Here's a question that confuses the two senses: "Is it rational to sign up for cryonics because Eliezer told me to?" It's probably a good strategy to sign up for cryonics if you don't want to die, but doing whatever Eliezer says may not be able to reliably generate such good strategies.
Replies from: Annoyance, pjeby
↑ comment by Annoyance · 2009-03-12T19:05:22.790Z · LW(p) · GW(p)
"Rational actions are about following an optimal strategy,"
No, they're not. They're about acting in accordance with one's explicit and conscious understanding of what the optimal strategy is.
Rationality is inherently self-reflective. It's about the thinking that directs the actions, not the actions themselves.
"rational reasoning is about being able to reliably generate optimal strategies"
Again, no. That's usually an effect of rational reasoning, but it's not what rational reasoning is about, and it doesn't necessarily follow. Possessing a self-correcting feedback loop doesn't mean the output will be correct, or even that it will tend to be correct. It means only that it will tend to correct itself.
Replies from: Patrick
↑ comment by Patrick · 2009-03-13T07:09:11.827Z · LW(p) · GW(p)
So, is it rational for me to sign up for cryonics because Eliezer told me to? I don't think predictions and actions stop being rational due to following a preferred ritual of cognition.
Replies from: Annoyance
↑ comment by Annoyance · 2009-03-13T14:33:50.504Z · LW(p) · GW(p)
"So, is it rational for me to sign up for cryonics because Eliezer told me to?"
Depends. Can you offer an explanation of the reasoning that permits you to derive that action, and can you justify that explanation?
That's what determines whether it would be rational or not. 'Rational' is not a property of the action, it's a property of you and your reasons for taking the action.
comment by thomblake · 2009-03-11T20:53:55.965Z · LW(p) · GW(p)
I totally agree. If rationality means anything, it means this. I don't think EY really thought 'whatever makes you win' was definitional, but unfortunately a lot of people have latched on as though it is.
The example of the tarot cards is a bit dicey. I'm not sure I'd want to employ the decision procedure you're implying - something about correlation not implying causation. Or maybe I'm just thinking of Monday's XKCD.
Replies from: pjeby, Annoyance, abigailgem
↑ comment by Annoyance · 2009-03-12T19:19:35.227Z · LW(p) · GW(p)
"The example of the tarot cards is a bit dicey. I'm not sure I'd want to employ the decision procedure you're implying - something about correlation not implying causation."
True. If there is actually no causative relationship, then we wouldn't expect the correlation to hold for an extended period.
Of course, if the correlation continues to hold, then we should speculate that there is a causative relationship, even if it's not predicted by any part of our reality-model, which suggests there are important principles not included in it.
We didn't need to know about UV and ionizing radiation to recognize that exposure to sunlight could cause burns. We deduced certain properties of UV (such as its existence) from our observations of how burns were associated with light.
Any recognition of correlation that doesn't specify a causative mechanism looks somewhat dodgy. That doesn't mean that we shouldn't utilize that correlation.
↑ comment by abigailgem · 2009-03-12T13:30:49.467Z · LW(p) · GW(p)
I think he meant it is Not rational to do something which will observably make you lose, in Newcomb's problem. The process which the two-boxer goes through is not "rational", it is simply a process which ignores the evidence that one-boxers get more money. That process ignored evidence, and so is irrational. It looked at the map, not the territory - though it seems impossible, here there really is an Omega which can predict what you will do. According to the map, that is impossible, so we two-box. However, in the territory, that is how things are, so we one-box.
Replies from: thomblake
↑ comment by thomblake · 2009-03-12T13:47:12.031Z · LW(p) · GW(p)
Except if your decision procedure results in you one-boxing, then you'll lose more often than not in similar situations in real life.
Sure, people who give this answer are ignoring things about the thought experiment that make one-boxing the obvious win - like Annoyance said, if you use rationality you can follow the evidence without necessarily having an explanation of how it works. Sure, tarot cards have been shown to predict the weather and one-boxing has been shown to result in better prizes.
However, we don't make decisions using some simple algorithm. Our decisions come primarily from our character - by building good habits of character, we engage in good activities. By developing good rationalist habits, we behave more rationally. And the decision procedure that leads you to two-box on the completely fictional and unrealistic thought experiment is the same decision procedure that would make you win in real life.
Don't base your life on a fiction.
Replies from: abigailgem, pjeby
↑ comment by abigailgem · 2009-03-13T18:46:46.680Z · LW(p) · GW(p)
I do not base my life on the fiction of Newcomb's problem, but I do take lessons from it. Not the lesson that an amazingly powerful creature is going to offer me a million dollars, but the lesson that it is possible to try and fail to be rational, by missing a step, or that I may jump too soon to the conclusion that something is "impossible", or that trying hard to learn more rationality tricks will profit me, even if not as much as that million dollars.
↑ comment by pjeby · 2009-03-12T16:16:06.905Z · LW(p) · GW(p)
I don't see how this works. Are you saying that betting on an outcome with extremely high probability and a very good payoff is somehow a bad idea in real life?
Can you give an example of how my one-box reasoning would lead to a bad result, for example?
See, here's my rationale for one-boxing: if somebody copied my brain and ran it in a simulation, I can assume that it would make the same decision I would... and therefore it is at least possible for the predictor to be perfect. If the predictor also does, in fact, have a perfect track record, then it makes sense to use an approach that would result in a consistent win, regardless of whether "I" am the simulation or the "real" me.
Or, to put it another way, the only way that I can win by two-boxing, is if my "simulation" (however crude) one-boxes...
Which means I need to be able to tell with perfect accuracy whether I'm being simulated or not. Or more precisely, both the real me AND the simulation must be able to tell, because if the simulated me thinks it's real, it will two-box... and if I can't conclusively prove I'm real, I must one-box, to prevent the real me from being screwed over.
Thus the only "safe" strategy is to one-box, since I cannot prove that I am NOT being simulated at the time I make the decision... which would be the only way to be sure I could outsmart the Predictor.
(Note, by the way, that it doesn't matter whether the Predictor's method of prediction is based on copying my brain or not; it's just a way of representing the idea of a perfect or near-perfect prediction mechanism. The same logic applies to low-fidelity simulations, even simple heuristic methods of predicting my actions.)
Whew. Anyway, I'm curious to see how that decision procedure would lead to bad results in real-world situations... heck, I'm curious to see how I would ever apply that line of reasoning to a real world situation. ;-)
Replies from: thomblake
↑ comment by thomblake · 2009-03-12T19:56:41.558Z · LW(p) · GW(p)
heck, I'm curious to see how I would ever apply that line of reasoning to a real world situation. ;-)
Well, it seems you've grasped the better part of my point, anyway.
To begin, you're assuming that simulating a human is possible. We don't know how to do it yet, and there really aren't good reasons to just assume that it is. You're really jumping the gun by letting that one under the tent.
Here's my rationale for not one-boxing:
However much money is in the boxes, it's already in there when I'm making my decision. I'm allowed to take all of the boxes, some of which contain money. Therefore, to maximize my money, I should take all of the boxes. The only way to deny this is to grant that either: 1. my making the decision affects the past, or 2. other folks can know in advance what decisions I'm going to make. Since neither of these holds in reality, there is no real situation in which the decision-procedure that leads to two-boxing is inferior to the decision-procedure that leads to one-boxing.
In a real life version of the thought experiment, the person in charge of the experiment would just be lying to you about the predictor's accuracy, and you'd be a fool to one-box. These are the situations we should be prepared for, not fantasies.
Replies from: pjeby
↑ comment by pjeby · 2009-03-12T21:00:06.388Z · LW(p) · GW(p)
I'm making the assumption that we've verified the predictor's accuracy, so it doesn't really matter how the predictor achieves it.
In any case, this basically boils down to cause-and-effect once again: if you believe in "free will", then you'll object to the existence of a Predictor, and base your decisions accordingly. If you believe, however, in cause-and-effect, then a Predictor is at least theoretically possible.
(By the way, if humans have free will -- i.e., the ability to behave in an acausal manner -- then so do subatomic particles.)
Replies from: thomblake
↑ comment by thomblake · 2009-03-12T21:16:51.005Z · LW(p) · GW(p)
I'm making the assumption that we've verified the predictor's accuracy, so it doesn't really matter how the predictor achieves it.
Right, and I'm saying that that assumption only holds in fiction, and so using decision procedures based on it is irrational.
In any case, this basically boils down to cause-and-effect once again: if you believe in "free will", then you'll object to the existence of a Predictor, and base your decisions accordingly. If you believe, however, in cause-and-effect, then a Predictor is at least theoretically possible.
I'm afraid this is a straw man - I'm with Dennett on free will. However, in most situations you find yourself in, believing that a Predictor has the aforementioned power is bad for you, free will or no.
Also, I'm not sure what you mean by 'at least theoretically possible'. Do you mean 'possible or not possible'? Or 'not yet provably impossible'? The Predictor is at best unlikely, and might be physically impossible even in a completely deterministic universe (entirely due to practicality / engineering concerns / the amount of matter in the universe / the amount that one has to model).
(By the way, if humans have free will -- i.e., the ability to behave in an acausal manner -- then so do subatomic particles.)
This does not logically follow. Insert missing premises?
Replies from: pjeby
↑ comment by pjeby · 2009-03-12T21:40:32.682Z · LW(p) · GW(p)
Free will and subatomic particles
As for the rest, I'm surprised you think it would take such a lot of engineering to simulate a human brain... we're already working on simulating small parts of a mouse brain... there aren't that many more orders of magnitude left. Similarly, if you think nanotech will make cryonics practical at some point, then the required technology is on par with what you'd need to make a brain in a jar... or to just duplicate the person and use their answer.
comment by timtyler · 2009-03-11T21:49:28.039Z · LW(p) · GW(p)
Re: That's what rationality is: having explicit and conscious standards of validity, and applying them in a systematic way.
Cultural relativists will rejoice at seeing that this definition makes no mention of what the standards of validity actually are.
Replies from: Annoyance
↑ comment by Annoyance · 2009-03-12T19:11:24.099Z · LW(p) · GW(p)
'Cultural relativism' only diverges when dealing with statements that have no actual truth value. When people consciously and explicitly attempt to understand reality, their beliefs tend to converge.
How many different systems of chemistry are there? How about applied physics? Are there fundamentally different schools of thought that have wildly divergent approaches to fluid dynamics?
The question of what standards determine validity tends to be answered in the same way, as long as the question is approached in a certain manner. That manner is: consciously and explicitly, demonstrating each step and maintaining consistency.
Replies from: timtyler
↑ comment by timtyler · 2009-03-12T22:09:03.063Z · LW(p) · GW(p)
Your definition appears indefensible to me - that isn't a useful definition of what rationality is. I'm surprised that you are bothering to defend it.
Replies from: Annoyance, thomblake
↑ comment by Annoyance · 2009-03-13T14:36:32.921Z · LW(p) · GW(p)
"Your definition appears indefensible to me - that isn't a useful definition of what rationality is. "
It both conveys what the generally-recognized meaning is, and describes a very useful concept that is fundamental to all sorts of processes.
I can only conclude that I don't understand what you mean by 'useful', and I see no benefit to learning what you mean.
Replies from: timtyler
comment by RobinHanson · 2009-03-11T20:35:49.787Z · LW(p) · GW(p)
In my recent Bloggingheads.tv conversation with Tyler Cowen, we argued about the value of "having explicit and conscious standards of validity, and applying them in a systematic way." My intuition agrees with you, but can we point to anything more concrete than that? Do we have data suggesting this approach is in fact more accurate?
Replies from: Annoyance, timtyler
↑ comment by Annoyance · 2009-03-12T19:24:30.644Z · LW(p) · GW(p)
"Do we have data suggesting this approach is in fact more accurate?"
That isn't the goal here, although ultimately empiricism demonstrates that people concerned with explicit and conscious standards become more accurate. That's a side-effect, albeit a very nifty one.
The point is not to be correct, but to know that we're as correct as our understanding permits. If we don't understand our reasons for doing something, and we can't predict how well our choices will work out by looking at experience, we can't know whether we're doing the right thing.
If your decision systems are unconscious, but you can consciously look at their outcomes and see that they work out, going along with your feelings is rational - because you are consciously recognizing that they work and consciously choosing to follow them.
If your decision systems are unconscious, and your meta-decision systems are also unconscious, and you can't explain why you're following your feelings (beyond that you feel that you should), you're being irrational. Even if your systems and meta-systems are correct.
Replies from: thomblake
↑ comment by thomblake · 2009-03-12T22:13:18.138Z · LW(p) · GW(p)
Thank you. I'd been struggling to clearly state this for some time now, with respect to the relationship between reason and virtue. If you take virtue theory at face value, it seems like the suggestion is to pursue virtue instead of rationality. Thinking of the meta-decisions as rational allows one to say that pursuing virtue is rational.
↑ comment by timtyler · 2009-03-11T21:47:49.129Z · LW(p) · GW(p)
I wonder if that will be listed here someday:
http://bloggingheads.tv/search/?participant1=Hanson,%20Robin
comment by jimrandomh · 2009-03-12T02:19:34.651Z · LW(p) · GW(p)
When thinking, rationality means the methods that lead you towards true statements and away from false ones. When acting, rationality means using rational thought to determine what will produce the best outcome (for some criteria), and doing that.
Conversely, irrationality means using methods of thought which you know or ought to know will lead you to false conclusions, failing to give an action thought proportional to its importance and complexity, or figuring out what action will produce the best outcome but not doing it.
Replies from: Annoyance
↑ comment by Annoyance · 2009-03-12T19:07:04.344Z · LW(p) · GW(p)
"rationality means the methods that lead you towards true statements and away from false ones."
No, it means consciously choosing the methods that your explicit mental model of reality indicates should produce true statements.
Your definition refers to objective properties of methods and statements that we don't actually know.
comment by roland · 2009-03-11T21:03:00.880Z · LW(p) · GW(p)
and rational is defined as 'that which makes you win'.
I was the one who wrote this in a previous comment regarding the rationality of the fear of darkness. I think this definition (which I learned from Eliezer) is in fact useful: if you know of two procedures A and B, where A is correct according to some standard of rationality but will make you lose whereas B will make you win, I would choose B. Eliezer makes this point in Newcomb's problem: http://www.overcomingbias.com/2008/01/newcombs-proble.html
For example, let's say that we find that certain manipulations of tarot decks permit us to predict the weather, even though we have no idea of why the two should be correlated at all. With rationality, we don't need to know why. Once we've recognized that the relationship exists, it becomes rational for us to use it.
Here you are making the exact same point! Knowing that the tarot decks will "make you win" is reason enough to use them, no matter how irrational that may appear.
Replies from: thomblake, Annoyance
↑ comment by thomblake · 2009-03-11T21:27:21.495Z · LW(p) · GW(p)
No.
If you let 'rationality' mean simply 'whatever makes you win', then its definition drifts with the wind. According to a useful definition of rationality, it would be defined by some procedure that you could then determine whether you're following. You could then empirically test whether 'rationality' makes you win.
Example Dialogue:
Amy: Craig was being irrational. He believed contradictory things and made arguments that did not logically follow. Due to some strange facts about his society and religion, this made him a well-respected and powerful person.
Beth: But David insisted that his own arguments should be logically valid and avoided contradictory beliefs, and that made him unpopular so he was not successful in life. Clearly this was not rational, since David had a much worse life than Craig.
Amy, here, is using the typical definition of rational, while Beth is using 'what makes you win'. Is there any advantage to using Beth's definition? Does it make anything clearer to call Craig rational and David irrational?
Or could we just use Amy's definition of rationality, and say that Craig was being irrational and David was being rational, and then we have a clear idea of what sorts of things they were doing.
More to the point, David could set out to be rational by Amy's definition, but there's no way to plan to be rational by Beth's definition. 'Be rational' is in no way a guide to life, as it's defined entirely by consequences that haven't occurred yet.
Replies from: roland, pjeby
↑ comment by roland · 2009-03-11T22:09:42.980Z · LW(p) · GW(p)
It all depends on what you value: what do you want to achieve, what is your utility function? If being popular is your goal, then being able to lie, manipulate, and use impressive arguments even if they are wrong can be a successful way; it's called politics.
For Amy winning means reasoning correctly. For Beth winning meant being popular. Winning for a paperclip maximizer looks different than for you and me.
I understand where you want to go. For you rationality is a procedure that will bring you closer to the truth. The problem is, where do we get the correct procedure from, and how can we be sure that we are applying it correctly? Here is where the "winning" test comes in. According to the prevailing scientific consensus in the past, airplanes were impossible and anyone investing time and money trying to build one was clearly acting irrationally. Yet in the end, those who acted irrationally won; that is, they achieved scientific truth, as you can see today.
PS: Here is where Newcomb's problem comes in. It seems that it is very hard to define a rational procedure (starting from fundamental principles) that will one-box, yet one-boxing is the correct choice (at least if you value money).
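(As a purely illustrative expected-value check, not part of the original comment, using the standard payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box, and treating a verified predictor accuracy p as a conditional probability:)

\[
\mathbb{E}[\text{one-box}] = 1{,}000{,}000\,p, \qquad
\mathbb{E}[\text{two-box}] = 1{,}000 + 1{,}000{,}000\,(1-p)
\]

On those assumed numbers, one-boxing has the higher expectation whenever p > 0.5005, i.e. for any verified predictor meaningfully better than chance.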
Replies from: pjeby
↑ comment by pjeby · 2009-03-12T03:11:07.958Z · LW(p) · GW(p)
At least, if you value money more than whatever (emotional) value you place on sounding logically consistent. ;-)
However, since any sufficiently powerful formal system contains undecidable propositions, and since there is no reason to suppose that human brains (or the universe!) are NOT equivalent to a formal system, there isn't any way to guarantee a mapping from "procedure" to "truth" anyway!
So I prefer to treat logical consistency as a useful tool for winning, rather than treating winning as a useful tool for testing logical consistency.
↑ comment by pjeby · 2009-03-11T22:17:33.451Z · LW(p) · GW(p)
Actually, neither Craig nor David were rational, if it's defined as "what makes you predictably win, for an empirical, declared-in-advance definition of winning". Craig did not choose his beliefs in order to achieve some particular definition of winning. And David didn't win... EVEN IF his declared-in-advance goals placed a higher utility on logical consistency than popularity or success.
Of course, the real flaw in your examples is that popularity isn't directly created or destroyed through logical consistency or a lack thereof... although that idea seems to be a strangely common bias among people who are interested in logical consistency!
Unless David actually assigned ZERO utility to popularity, then he failed to "win" (in the sense of failing to achieve his optimum utility), by choosing actions that showed other people he valued logical consistency and correctness more than their pleasant company (or whatever else it was he did).
I'm not married to my off-the-cuff definition, and I'm certainly not claiming it's comprehensive. But I think that a definition of rationality that does NOT include the things that I'm including -- i.e. predicting maximal utility for a pre-defined utility function -- would be severely lacking.
After all, note that this is dangerously close to Eliezer's definition of "intelligence": a process for optimizing the future according to some (implicitly, predefined) utility function.
And that definition is neither circular nor meaningless.
Replies from: thomblake
↑ comment by thomblake · 2009-03-11T22:55:49.642Z · LW(p) · GW(p)
So you have to be a utilitarian to be rational? Bad luck for the rest of us. Apparently Aristotle was not pursuing rationality, by your definition. Nor am I.
Replies from: pjeby
↑ comment by pjeby · 2009-03-12T01:05:35.662Z · LW(p) · GW(p)
I don't know what you mean by "utilitarian", but if you mean, "one who chooses his actions according to their desired results", then how can you NOT be a utilitarian? That would indicate that either 1) you're using a different utility function, or 2) you're very, very confused.
Or to put it another way, if you say "I choose not to be a utilitarian", you must be doing it because not being a utilitarian has some utility to you.
If you are arguing that truth is more important than utility in general, rather than being simply one component of a utility function, then you are simply describing what you perceive to be your utility function.
For human beings, all utility boils down to emotion of some kind. That is, if you are arguing that truth (or "rationality" or "validity" or "propriety" or whatever other concept) is most important, you can only do this because that idea makes you feel good... or because it makes you feel less bad than whatever you perceive the alternative is!
The problem with humans is that we don't have a single, globally consistent, absolutely-determined utility function. We have a collection of ad-hoc, context-sensitive, relative utility and disutility functions. Hell, we can't even make good decisions when looking at pros and cons simultaneously!
So, if intelligence is efficiently optimizing the future according to your utility function, then rationality could perhaps be considered the process of optimizing your local and non-terminal utility functions to better satisfy your more global ones.
(And I'd like to see how that conflicts with Aristotle -- or any other "great" philosopher, for that matter -- in a way that doesn't simply amount to word confusion.)
Replies from: thomblake
↑ comment by thomblake · 2009-03-12T13:39:00.479Z · LW(p) · GW(p)
I don't know what you mean by "utilitarian"
utilitarianism or, if you prefer, consequentialism
you can only do this because that idea makes you feel good
Is this theory falsifiable? What experiment would convince you that this isn't true? Or will this quickly turn into a 'true scotsman' argument?
The problem with humans is that we don't have a single, globally consistent, absolutely-determined utility function
Take this, reverse the normative content, and you have a view more like mine.
Humans are awesome in part because we're not utility maximizers. To paraphrase Nietzsche:
Man does not pursue utility; only the Englishman does.
Replies from: pjeby
↑ comment by pjeby · 2009-03-12T15:14:53.300Z · LW(p) · GW(p)
I'm neither a utilitarian nor a consequentialist, by those definitions. That's a bunch of stuff that applies only to the map, not the territory.
My statement is that humans do what they do, either to receive pleasure or avoid pain. (What other beings get from their actions is only relevant insofar as that creates pleasure or pain for the decider.)
In order to falsify this statement, you'd need to prove the existence of some supernatural entity that is not ruled by cause-and-effect. That is, you'd have to prove that "free will" or a "soul" exists. Good luck with that. ;-)
For verification of this statement, on the other hand, we can simply continue to understand better and better how the brain works, especially how pain and pleasure interact with memory formation and retrieval.
Replies from: thomblake
↑ comment by thomblake · 2009-03-12T16:12:23.144Z · LW(p) · GW(p)
In order to falsify this statement, you'd need to prove the existence of some supernatural entity that is not ruled by cause-and-effect
Congratulations! Your claim is non-falsifiable, and therefore is nonsense.
You claim that humans do what they do either to receive pleasure or avoid pain. That sounds implausible to me. I'd happily list counterexamples, but I get the impression you'd just explain them away as "Oh, what he's really going after is pleasure" or "What he's really doing is avoiding pain."
If your explanation fits all possible data, then it doesn't explain anything.
Example: Masochists pursue pain. Discuss.
Replies from: Nick_Tarleton, pjeby
↑ comment by Nick_Tarleton · 2009-03-12T20:10:24.594Z · LW(p) · GW(p)
Congratulations! Your claim is non-falsifiable, and therefore is nonsense.... I'd happily list counterexamples, but I get the impression you'd just explain them away as "Oh, what he's really going after is pleasure" or "What he's really doing is avoiding pain."
Black Belt Bayesian: Unfalsifiable Ideas versus Unfalsifiable People
↑ comment by pjeby · 2009-03-12T16:32:23.944Z · LW(p) · GW(p)
Wait... are you saying that atheism, science, and materialism are all nonsense?
I'm only saying that people do things for reasons. That is, our actions are the effects of causes.
So, are you really saying that the idea of cause-and-effect is nonsense? Because I can't currently conceive of a definition of rationality where there's no such thing as cause-and-effect.
Meanwhile, I notice you're being VERY selective in your quoting... like dropping off the "that is not ruled by cause-and-effect" part of the sentence you just quoted. I don't think that's very helpful to the dialog, since it makes you appear more interested in rhetorically "winning" some sort of debate, than in collaborating towards truth. Is that the sort of "character" you are recommending people develop as rationalists?
(Note: this is not an attack... because I'm not fighting you. My definition of "win" in this context is better understanding -- first for me, and then for everyone else. So it is not necessary for someone else to "lose" in order for me to "win".)
Replies from: thomblake
↑ comment by thomblake · 2009-03-12T19:48:34.146Z · LW(p) · GW(p)
I didn't think the 'that is not ruled by cause-and-effect' was relevant - I was granting that your argument required something less specific than it actually did, since I didn't even need that other stuff. But if you prefer, I'll edit it into my earlier comment.
Atheism (as a theory) is falsifiable, if you specify exactly which god you don't believe in and how you'd know it if you saw it. Then if that being is ever found, you know your theory has been falsified.
I've never heard 'Science' framed as a theory, so my criticism would not apply. Feel free to posit a theory of science and I'll tell you whether it makes sense.
Materialism is mostly justified on methodological grounds, and is also not a theory.
Psychological hedonism, however, is a theory, and if it's clearly specified then there are easy counterexamples.
A reason is not the same as a cause. Though reasons can be causes.
I didn't say anything like "the idea of cause-and-effect is nonsense". Rather, I said that our actions have causes other than the avoidance of pain and the pursuit of pleasure. You seem to think that the only thing that can constitute a 'cause' for a human is pleasure or pain, given that you've equated the concepts.
I'm only saying that people do things for reasons. That is, our actions are the effects of causes.
That's not all you're saying, at all. I would agree wholeheartedly with this sentiment. Whilst denying that I try to maximize any sort of utility or am ruled by drives towards pleasure and pain.
↑ comment by Annoyance · 2009-03-12T19:15:26.733Z · LW(p) · GW(p)
"I was the one who wrote this in a previous comment regarding the rationality of the fear of darkness. I think this definition(Which I learned from Eliezer) is in fact useful: if you know of two procedures A and B where A is correct according to some standard of rationality but will make you lose whereas B will make you win I would choose B."
That's the point: it's knowing that B leads to winning, and acknowledging that winning is the goal, that makes choosing B rational.
"Knowing that the tarot decks will "make you win" is reason enough to use them, no matter how irrational that may appear."
If we establish that we want to predict something, and we acknowledge that tarot is correlated to whatever we want to predict, using tarot to predict that thing IS COMPLETELY RATIONAL. We do not need to know the mechanism behind the correlation. What we DO need is to be able to look at our evaluation of the tarot's usefulness and determine that every step in the reasoning is correct.
The reasoning that concludes looking at tarot is an effective way of predicting [whatever] is fairly simple and trivially easy to verify.