Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally
post by ScottL · 2015-08-23T08:01:08.021Z · LW · GW · Legacy · 27 comments
A perfect rationalist is an ideal thinker. Rationality, however, is not the same as perfection. Perfection guarantees optimal outcomes. Rationality only guarantees that the agent will, to the utmost of its abilities, reason optimally. Optimal reasoning cannot, unfortunately, guarantee optimal outcomes, because most agents are not omniscient or omnipotent. They are instead fundamentally and inexorably limited. To be fair to such agents, the definition of rationality that we use should take this into account. Therefore, a rational agent will be defined as: an agent that, given its capabilities and the situation it is in, thinks and acts optimally. Although rationality does not guarantee the best outcome, a rational agent will, most of the time, achieve better outcomes than an irrational one.
Rationality is often considered to be split into three parts: normative, descriptive and prescriptive rationality.
Normative rationality describes the laws of thought and action: how a perfectly rational agent with unlimited computing power, omniscience, etc. would reason and act. Normative rationality essentially describes what is meant by the phrase "optimal reasoning". Of course, for limited agents true optimal reasoning is impossible; they must instead settle for bounded optimal reasoning, which is the closest approximation to optimal reasoning that is possible given the information available to the agent and the computational abilities of the agent. The laws of thought and action (what we currently believe optimal reasoning involves) are:
- Logic - math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.
- Probability theory - is essentially an extension of logic. Probability is a measure of how likely a proposition is to be true, given everything else that you already believe. Perhaps the most useful rule to be derived from the axioms of probability theory is Bayes’ theorem, which tells you exactly how your probability for a statement should change as you encounter new information. Probability is viewed from one of two perspectives: the Bayesian perspective, which sees probability as a measure of uncertainty about the world, and the frequentist perspective, which sees probability as the proportion of times an event would occur in a long run of repeated experiments. LessWrong follows the Bayesian perspective.
- Decision theory - is about choosing actions based on the utility function of the possible outcomes. The utility function is a measure of how much you desire a particular outcome. The expected utility of an action is simply the average utility of the action’s possible outcomes weighted by the probability that each outcome occurs (a short numerical sketch of a Bayesian update and an expected-utility choice follows this list). Decision theory can be divided into three parts:
- Normative decision theory studies what an ideal agent (a perfect agent, with infinite computing power, etc.) would choose.
- Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose.
- Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
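To make the probability-theory and decision-theory rules above concrete, here is a minimal sketch in Python. The rain/umbrella scenario and all of the numbers are illustrative assumptions rather than anything from the original post; the point is only the mechanics of updating a probability with Bayes' theorem and then choosing the action with the highest expected utility.

```python
# A minimal sketch (illustrative numbers only): a Bayesian update on one
# piece of evidence, followed by an expected-utility choice between two
# actions.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Belief that it will rain today, updated after seeing dark clouds.
p_rain = bayes_update(prior=0.3,
                      p_evidence_given_h=0.8,      # P(clouds | rain)
                      p_evidence_given_not_h=0.2)  # P(clouds | no rain)

# Utility of each action in each outcome (made-up values).
utilities = {
    "take umbrella": {"rain": 5, "no rain": 3},
    "leave umbrella": {"rain": -10, "no rain": 6},
}

def expected_utility(action):
    outcomes = utilities[action]
    return p_rain * outcomes["rain"] + (1 - p_rain) * outcomes["no rain"]

best = max(utilities, key=expected_utility)
print(f"P(rain | clouds) = {p_rain:.2f}, best action: {best}")
```

With these particular numbers the posterior probability of rain comes out at roughly 0.63, so taking the umbrella has the higher expected utility.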
Descriptive rationality describes how people normally reason and act. It is about understanding how and why people make decisions. As humans, we have certain limitations and adaptations which quite often make it impossible for us to be perfectly rational in the normative sense of the word. It is because of this that we must satisfice or approximate the normative rationality model as best we can. We engage in what's called bounded, ecological or grounded rationality. Unless explicitly stated otherwise, 'rationality' in this compendium will refer to rationality in the bounded sense of the word. In this sense, it means that the most rational choice for an agent depends on the agent's capabilities and the information that is available to it. The most rational choice for an agent is not necessarily the most certain, true or right one. It is just the best one given the information and capabilities that the agent has. This means that an agent that satisfices or uses heuristics may actually be reasoning optimally, given its limitations, even though satisficing and heuristics are shortcuts that are potentially error prone.
Prescriptive or applied rationality is essentially about how to bring the thinking of limited agents closer to what the normative model stipulates. It is described by Baron in Thinking and Deciding, p. 34:
In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.
The range of behaviours and thoughts that we consider rational for limited agents is much larger than for perfect, i.e. unlimited, agents. This is because for limited agents we need to take into account not only those thoughts and behaviours which are optimal for the agent, but also those which allow the limited agent to improve its reasoning. It is for this reason that we consider curiosity, for example, to be rational: it often leads to situations in which the agent improves its internal representations, or models, of the world. We also consider wise resource allocation to be rational, because limited agents only have a limited amount of resources available to them. Therefore, if they can get a greater return on investment on the resources that they do use, they will be more likely to get closer to thinking optimally in a greater number of domains.
We also consider the rationality of particular choices to be in a state of flux, because the rationality of a choice depends on the information that an agent has access to, and this is something which frequently changes. This highlights an important fact: if an agent is suboptimal in its ability to gather information, then it will often end up with different information than an agent with optimal information-gathering abilities would. This is a problem for the suboptimal (irrational) agent, because its rational choices will differ more from those of the perfect normative agent than a rational agent's would. The closer an agent's rational choices are to those of a perfect normative agent, the more rational that agent is.
It can also be said that the rationality of an agent depends in large part on the agent's truth-seeking abilities. The more accurate and up to date the agent's view of the world, the closer its rational choices will be to those of the perfect normative agent. It is because of this that a rational agent is one that is inextricably tied to the world as it is. It does not see the world as it wishes it, fears it or has seen it to be, but instead constantly adapts to and seeks out feedback from interactions with the world. The rational agent is attuned to the current state of affairs. One other very important characteristic of rational agents is that they adapt. If the situation has changed and the previously rational choice is no longer the one with the greatest expected utility, then the rational agent will change its preferred choice to the one that is now the most rational.
The other important part of rationality, besides truth seeking, is that it is about maximising the ability to actually achieve important goals. These two parts or domains of rationality, truth seeking and goal reaching, are referred to as epistemic and instrumental rationality.
- Epistemic rationality is about the ability to form true beliefs. It is governed by the laws of logic and probability theory.
- Instrumental rationality is about the ability to actually achieve the things that matter to you. It is governed by the laws of decision theory. In a formal context, this is known as maximizing “expected utility”. It is important to note that it is about more than just reaching goals; it is also about discovering how to develop optimal goals.
As you move further and further away from rationality, you introduce more and more flaws, inefficiencies and problems into your decision-making and information-gathering algorithms. These flaws and inefficiencies are the cause of irrational or suboptimal behaviors, choices and decisions. Humans are innately irrational in a large number of areas, which is why, in large part, improving our rationality is about mitigating, as much as possible, the influence of our biases and irrational propensities.
If you wish to truly understand what it means to be rational, then you must also understand what rationality is not. This is important because the concept of rationality is often misconstrued by the media. An epitome of this misconstrual is the character of Spock from Star Trek. This character does not treat rationality as being about optimality, but instead as if it means that:
- You can expect everyone to react in a reasonable, or what Spock would call rational, way. This is irrational because it leads to faulty models and predictions of other people's behaviors and thoughts.
- You should never make a decision until you have all the information. This is irrational because humans are not omniscient or omnipotent. Their decisions are constrained by many factors, like the amount of information they have, the cognitive limitations of their brains and the time available for them to make decisions. This means that a person, if they are to act rationally, must often make predictions and assumptions.
- You should never rely on intuition. This is irrational because intuition (system 1 thinking) does have many advantages over conscious and effortful deliberation (system 2 thinking), chiefly its speed. Although intuitions can be wrong, to disregard them entirely is to hinder yourself immensely. If your intuitions are based on multiple interactions that are similar to the current situation, and these interactions had short feedback cycles, then it is often irrational not to rely on your intuitions.
- You should not become emotional. This is irrational because, while it is true that emotions can cause you to use less rational ways of thinking and acting, i.e. ways that are optimised for ancestral or previous environments, it does not mean that we should try to eradicate emotions in ourselves. Emotions are essential to rational thinking and normal social behavior. An aspiring rationalist should remember four points with regard to emotions:
- The rationality of emotions depends on the rationality of the thoughts and actions that they induce. It is rational to feel fear when you are actually in a situation where you are threatened. It is irrational to feel fear in situations where you are not being threatened. If your fear compels you to take suboptimal actions, then and only then is that fear irrational.
- Emotions are the wellspring of value. A large part of instrumental rationality is about finding the best way to achieve your fundamental human needs. A person who can fulfill these needs through simple methods is more rational than someone who can't. In this particular area people tend to become a lot less rational as they age. As adults we should be jealous of the innocent exuberance that comes so naturally to children. If we are not as exuberant as children, then we should wonder at how it is that we have become so shackled by our own self-restraint.
- Emotional control is a virtue, but denial is not. Emotions can be considered a type of internal feedback. A rational person does not consciously ignore or avoid feedback, as doing so would limit or distort the information that they have access to. A rational agent may need to mask or hide their emotions for reasons related to societal norms and status, but they should not repress emotions unless there is some overriding rational reason to do so. If a person volitionally represses their emotions because they wish to perpetually avoid them, then this is both irrational and cowardly.
- By ignoring, avoiding and repressing emotions you are limiting the information that you exhibit, which means that other people will not know how you are actually feeling. In some situations this may be helpful, but it is important to remember that people are not mind readers. Their ability to model your mind and your emotional state depends on the information that they know about you and the information, e.g. body language, vocal inflections, that you exhibit. If people do not know that you are vulnerable, then they cannot know that you are courageous. If people do not know that you are in pain, then they cannot know that you need help.
- You should only value quantifiable things like money, productivity, or efficiency. This is irrational because it means that you are reducing the amount of potentially valuable information that you consider. The only reason a rational person ever reduces the amount of information that they consider is because of resource or time limitations.
Related Materials
Wikis:
- Rationality - the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality; and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has. For humans, this means mitigating (as much as possible) the influence of cognitive biases.
- Maths/Logic - Math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.
- Probability theory - a field of mathematics which studies random variables and processes.
- Bayes theorem - a law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate.
- Bayesian - Bayesian probability theory is the math of epistemic rationality, Bayesian decision theory is the math of instrumental rationality.
- Bayesian probability - represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials. An event with Bayesian probability of .6 (or 60%) should be interpreted as stating "With confidence 60%, this event contains the true outcome", whereas a frequentist interpretation would view it as stating "Over 100 trials, we should observe event X approximately 60 times." The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false and it cannot be true 6 times out of 10.
- Bayesian Decision theory - Bayesian decision theory refers to a decision theory which is informed by Bayesian probability.
- Decision theory – is the study of principles and algorithms for making correct decisions—that is, decisions that allow an agent to achieve better outcomes with respect to its goals.
- Hollywood rationality - What Spock does, not what actual rationalists do.
Posts:
- What do we mean by rationality? - Introduces rationality and the rationality domains (epistemic and instrumental).
- Newcomb's Problem and Regret of Rationality - introduces the idea that you should never end up envying someone else's mere choices.
- What Bayesianism taught me - discusses some specific things that Bayesian thinking has taught or caused the author to learn.
- Cognitive science of rationality - discusses rationality, (Type 1/Type 2) processes of cognition, thinking errors and the three kinds of minds (reflective, algorithmic, autonomous).
Suggested posts to write:
- Bounded/ecological/grounded rationality - I couldn't find a suitable resource for this on LessWrong.
Academic Books:
- Baron, Thinking and Deciding
- Hastie and Dawes, Rational Choice in an Uncertain World
- Bazerman and Moore, Judgment in Managerial Decision Making
- Plous, The Psychology of Judgment and Decision Making
- Gilboa, Making Better Decisions
- Stanovich, Rationality and the Reflective Mind
- Holyoak and Morrison, The Oxford Handbook of Thinking and Reasoning
Popular Books:
- Ariely, Predictably Irrational
- Kahneman, Thinking, Fast and Slow
- Thaler and Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness
- Tavris and Aronson, Mistakes Were Made (But Not by Me)
- Stanovich, What Intelligence Tests Miss
- Silver, The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t
- Heath, Decisive: How to Make Better Choices in Life and Work
- Rock, Your Brain at Work: Strategies for Overcoming Distraction, Regaining Focus, and Working Smarter All Day Long
- Damasio, Descartes' Error: Emotion, Reason, and the Human Brain
Notes on decisions I have made while creating this post
(these notes will not be in the final draft):
- I agree denotationally, but object connotatively, with 'rationality is systemized winning', so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with 'winning'. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe that by winning Eliezer meant winning at Newcomb's problem, but the idea of winning is normally extended into everything. I also believe that I have basically covered the idea with: “Rationality maximizes expected performance, while perfection maximizes actual performance.”
- I left out the 12 virtues of rationality because I don’t like perfectionism. If perfectionism were not among the virtues, then I would have included them. My problem with perfectionism is that having it as a goal makes you liable to premature optimization and to developing suboptimal levels of adaptability. Everything I have read in complexity theory, for example, makes me think that perfectionism is not really a good thing to be aiming for, at least in uncertain and complex situations. I think truth seeking should be viewed as an optimization process. If it doesn't allow you to become more optimal, then it is not worth it. I have a post about this here.
- I couldn't find an appropriate link for bounded/ecological/grounded rationality.
27 comments
comment by Lumifer · 2015-08-24T15:12:53.012Z · LW(p) · GW(p)
Rationality maximizes expected performance
Hm. Since this is a core definition, I have an urge to examine it very carefully. First, "performance" is a bit fuzzy, would you mind if I replaced it with utility? We would get "rationality maximizes expected utility". I think that I have a few questions about that.
Rationality maximizes. That implies that every rational action must maximize utility. Anything that does not maximize utility is not (fully) rational. In particular, satisficing is not rational.
Rationality maximizes expected utility. A great deal of heavy lifting is done by this word and there are some traps here. For example, if you define utility as "that what you want" and add a little bit about revealed preferences, we would get caught in a loop: you maximize what you want and how do we know what you want? why, that is what you maximize. In general most every action maximizes some utility and, moreover, there is no requirement for the utility function to be stable across time, so this gets complicated quite fast.
Rationality maximizes expected utility. At issue here are risk considerations. You can wave them away by saying that one should maximize risk-adjusted utility, but in practice this is a pretty big blind spot. Faced with estimated distributions of future utility, most people would pick one with the highest mean (they pick the maximum expected value), but that ignores the width of the distributions which is rarely a good idea.
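As a minimal illustration of this point about distribution width (all numbers are invented for the example, and mean minus one standard deviation is used as a deliberately crude stand-in for risk adjustment), compare two options with identical expected utility but very different spreads:

```python
# Two options with the same expected utility but very different risk.
# A naive expected-value maximizer is indifferent between them; a simple
# risk-adjusted score (mean - k * std dev, k chosen arbitrarily) is not.

options = {
    # (utility, probability) pairs for each option
    "steady": [(9, 0.5), (11, 0.5)],      # mean 10, low spread
    "gamble": [(-80, 0.5), (100, 0.5)],   # mean 10, huge spread
}

def mean_and_std(dist):
    mean = sum(u * p for u, p in dist)
    var = sum(p * (u - mean) ** 2 for u, p in dist)
    return mean, var ** 0.5

for name, dist in options.items():
    mean, std = mean_and_std(dist)
    risk_adjusted = mean - 1.0 * std   # k = 1, purely illustrative
    print(f"{name}: E[u] = {mean:.1f}, std = {std:.1f}, "
          f"risk-adjusted = {risk_adjusted:.1f}")
```

An expected-value maximizer is indifferent between the two, while the risk-adjusted score strongly prefers the low-variance option.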
Take curiosity. It's an accepted rationalist virtue. And yet I don't see how it maximizes expected utility.
Replies from: LawChan, ScottL
↑ comment by LawrenceC (LawChan) · 2015-08-24T16:20:05.614Z · LW(p) · GW(p)
I'm not sure if this is correct, but my best guess is:
It maximizes utility, in so far as most goals are better achieved with more information, and people tend to systematically underestimate the value of collecting more information or suffer from biases that prevent them from acquiring this information. Or, in other words, curiosity is virtuous because humans are bounded and flawed agents, and it helps rectify the biases that we fall prey to. Just like being quick to update on evidence is a virtue, and scholarship is a virtue.
Replies from: Lumifer
↑ comment by Lumifer · 2015-08-24T16:37:55.898Z · LW(p) · GW(p)
There are a couple of problems here. First is the usual thing forgotten on LW -- costs. "More information" is worthwhile iff its benefits outweigh the costs of acquiring it. Second, your argument implies that, say, attempting to read the entire Wikipedia (or Encyclopedia Britannica if you are worried about stability) from start to finish would be a rational thing to do. Would it?
Replies from: LawChan, ScottL
↑ comment by LawrenceC (LawChan) · 2015-08-24T20:23:24.856Z · LW(p) · GW(p)
No, it isn't. Being curious is a good heuristic for most people, because most people are in the region where information gathering is cheaper than the expected value of gathering information. I don't think we disagree on anything concrete: I don't claim that it's rational in itself a priori but is a fairly good heuristic.
↑ comment by ScottL · 2015-08-25T02:40:17.468Z · LW(p) · GW(p)
This is a good point about taking into account the costs. I want to cover this idea in my third post which I am still writing, but will probably be something like Principle 3 – your rationality depends on the usefulness of your internal representation of the world. My view is that truth seeking should be viewed as an optimization process. If it doesn't allow you to become more optimal, then it is not worth it. I have a post about this here.
↑ comment by ScottL · 2015-08-25T02:32:39.478Z · LW(p) · GW(p)
The quote probably should have had an 'often' in it. I wasn't actually trying to define rationality in that quote. I was just trying to differentiate it from perfection. I have rewritten the first paragraph based on your feedback.
Replies from: Lumifer
↑ comment by Lumifer · 2015-08-25T03:34:10.434Z · LW(p) · GW(p)
I have rewritten the first paragraph based on your feedback.
Mea culpa, but the rewrite doesn't look great to me. Before, your first paragraph had some zing. People like me could and did find fault with it, sure, but at least it was energetic. And now the first two sentences are followed by a lot of hemming and hawing which sounds defensive and is entirely uninspiring.
"Ensuring that resource usage and behaviour/thought cordination is directed towards the fulfillment of the agents goals" was already being taught by senior slave-drivers to junior slave-drivers when the pyramids were being built. In trying to avoid rationality be just prediction, you made it be just effectiveness.
I don't have a good suggestion for you, in fact I'm not sure that the so-called epistemic rationality (aka science) and instrumental rationality (aka pragmatism and keeping your eye on the ball) can be usefully joined together into a single concept. But since you are writing a compendium, you probably should come up with a reasonable definition for rationality, since it is, y'know, a core concept.
Replies from: ScottL
↑ comment by ScottL · 2015-08-25T06:14:45.174Z · LW(p) · GW(p)
I changed it again.
In terms of the definition, it is in the title. As to what it is: I am basically trying to convey the idea that rationality is optimal thinking. Although, I suppose I am also happy with how it's defined in this book. If you think the below definitions are better, let me know.
- Rationality: the property of a system which does the "right thing" given what it knows.
- It is an agent that acts to maximize its expected performance measure. That is, it does the "right thing".
- For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has
- An omniscient agent knows the actual outcome of its actions, and can act accordingly, but omniscience is impossible in reality. It is important to distinguish rationality and omniscience. This is because we should not blame an agent for failing to take into account something it could not perceive, or for failing to take an action that it is incapable of taking.
↑ comment by Lumifer · 2015-08-25T14:55:08.392Z · LW(p) · GW(p)
In terms of the definition, it is in the title.
That's not really a definition: you just shifted the entire burden onto the word "optimally". A basic use of a definition is to see if something fits it -- if we defined a class A, is object z a member of that class? So let's say I'm considering some action. Is it rational? Well, it is if it's optimal. Err.. and what does that mean? To answer I need to define optimality and that is not trivial. And if you say that optimality is maximizing (expected) utility, we're back to your original definition which I poked at and you abandoned.
the property of a system which does the "right thing"
That's exactly the same thing -- replacing one word with another (or two) without clarifying anything.
whatever action is expected to maximize its performance measure
That's maximizing expected utility, again.
we should not blame an agent
Yes, of course, but that doesn't help you with a definition.
Replies from: ScottL
↑ comment by ScottL · 2015-08-26T06:20:25.728Z · LW(p) · GW(p)
Before we get too deeply into problems with whatever definition I might use, I want to make sure that you agree that what I am trying to say is right. Once that is confirmed, then I can think more about how to say it well.
This is basically what I am trying to say in the post. 'Rationality' is 'optimal reasoning', which we know of as normative rationality, i.e. the laws of thought and action, or what the perfect agent would do. A caveat is that when we talk about rationality in regards to limited agents we are really talking about bounded rationality. Hence, rationality in this case is really 'bounded optimal reasoning'. So for limited agents, rationality is about the reasoning that best approximates the results of normative rationality. Also, for limited agents we consider those types of thinking that lead to more optimal reasoning in the future to be rational as well. This is the basis for epistemic and instrumental rationality and is why curiosity is rational. Curiosity often leads to better maps. Better maps often lead to better decisions and, therefore, we consider curiosity to be rational. In terms of costs, the goal of rationality in limited agents is to best approximate normative rationality. This requires the highest return on investment on the resources that you make use of. Basically, the ratio of expected utility to resource usage should be high, since we only have a limited amount of resources that we can make use of. If there is an alternative way to spend the same resources which has a higher ratio of expected utility to resource usage, then you are not as close as possible to normative rationality.
Replies from: Lumifer
↑ comment by Lumifer · 2015-08-26T15:00:19.870Z · LW(p) · GW(p)
Before we get too deeply into problems with whatever definition I might use, I want to make sure that you agree that what I am trying to say is right.
A good idea. I'm not nitpicking about wording; at issue is actually meaning.
First, as I mentioned before, I am not sure how to combine epistemic and instrumental rationality together into one useful concept. I am not saying it's impossible, just that nothing comes to my mind.
One issue, for example, is that they belong to different categories: one is about knowing and the other is about doing. Yes, you can trivially stick them together by saying that epistemic rationality is just instrumental rationality with the goal of constructing a good map, but I don't know what you gain by that. Constructing a good map is, basically, the scientific method and it is not a decision theory.
Second, I have problems with the "what the perfect agent would do". An immediate issue is that the answer to that is "You don't know and you will never know" for any noticeably complex problem[1], especially one that concerns the messy real world and not, for example, the neat and well-defined world of mathematics. That's an issue because you set it up as a standard and as a limit to which "bounded optimal reasoning" should converge. But if you don't know what it is, you don't know what you should converge to and don't have a good method to adjudicate competing claims about what is rational.
There are also questions about defining rationality as optimality. Optimality typically involves maximizing some measure, but in a lot of situations what matters is not how to reach the maximum, but rather what is it that you optimize. Is it "rational" to arrive at an optimum for the wrong thing? How do you know what to optimize for? Handwaving about utility remains handwaving because the only utility functions I have seen which actually produce a specific numerical estimate are economic utility functions and they solely care about money.
Moreover, you rarely have the luxury of optimizing for one thing. Typically you have multiple conflicting goals with a mix of different costs to all actions, so deciding how are you going to balance goals and summarize costs is very important and I have no idea what is the "rational" way to go about it.
All in all, I am not satisfied by the "perfect agent" or "optimize utility" definitions of rationality. The perfect agent approach is essentially WWJD -- What Would Jesus Do -- only without the religious baggage, and optimizing utility doesn't tell me how to actually, in practice do that.
Notice that keeping epistemic and instrumental rationality separate works much better. The criterion for epistemic rationality is the match between the map and the reality -- this is specific and observable. The criterion for instrumental rationality is whether you reach your goals at a reasonable cost. This is more complicated because of uncertainty of the future: good decisions don't always lead to good outcomes and good outcomes do not necessarily follow from good decisions. But even here there are things we can look at and handles we can grab and manipulate. But "emulate perfection" or "maximize utility" -- I have no idea how to even start doing that.
[1] This is commonly held as entirely obvious in LW -- only in the context of AI boxing :-)
Replies from: ScottL
↑ comment by ScottL · 2015-08-27T05:51:23.108Z · LW(p) · GW(p)
First, as I mentioned before, I am not sure how to combine epistemic and instrumental rationality together into one useful concept. I am not saying it's impossible, just that nothing comes to my mind. One issue, for example, is that they belong to different categories: one is about knowing and the other is about doing.
If we forget about epistemic and instrumental rationality for a moment and think about what reasoning is aimed at achieving, that is, why we care about it, then I think we can get closer to understanding how epistemic and instrumental rationality might work together to become part of something larger. They are, of course, still different techniques.
Do you think that the below areas describe what it means to reason well?
- It is about actually achieving desired outcomes, so if you want x to occur it is about making sure that you initiate a series of thoughts and actions that lead to x occurring instead of y or z or a plethora of other possibilities.
- It is about achieving outcomes at a reasonable cost. If you can achieve your desired outcomes with cheaper costs, then you will be able to achieve more outcomes.
- It is about choosing the best outcomes to achieve. We all have limited resources, and to achieve any outcome we need to use resources. This means that we can only choose a limited number of outcomes to pursue. Making this choice wisely is what this area is about.
- It is about valuing the outcomes appropriately. This would be about making your values coherent and correctly valuing the things that matter so that they get priority.
Epistemic/Instrumental rationality is really about certain types of skills that allow us to do well in the above areas (I think I covered all of them). I think that there might also be more types of skills. I have a general idea about what another one might be, I am not sure how many others there might be. Although, I want the compendium to cover existing and established ideas only. That is why I am referring to epistemic and instrumental rationality and not anything else.
All in all, I am not satisfied by the "perfect agent" or "optimize utility" definitions of rationality. The perfect agent approach is essentially WWJD -- What Would Jesus Do -- only without the religious baggage, and optimizing utility doesn't tell me how to actually, in practice do that.
What do you think about defining rationality as a property that is attributed by agents to certain thoughts and behaviours? This would mean that it is not only bounded by the agent’s abilities and the information it has, but also by its understanding of what it means to be rational. Essentially, it would mean that ‘rationality’ is subjective. To avoid the fallacy of the grey there needs to be some objective way to judge different agents' understanding of what it means to be rational. This objective way is basically our best overall guess at what perfectly optimal reasoning, or optimal reasoning for humans, would be. For humans, this way is the scientific method, with the current body of work pointing to logic, probability and decision theory as having the closest answers on what it means to reason optimally, i.e. be rational. These answers aren’t necessarily correct, due to negative pragmatism etc. They are just our current, best and most informed guesses.
There are also questions about defining rationality as optimality. Optimality typically involves maximizing some measure, but in a lot of situations what matters is not how to reach the maximum, but rather what is it that you optimize. Is it "rational" to arrive at an optimum for the wrong thing?
I always thought that this was a part of what it means to be instrumentally rational: basically, to have optimal goals as well. This is my problem with instrumental rationality as it's talked about on LessWrong: is it about achieving what you value at a reasonable cost, or is it about making your values coherent and in line with what you innately value, or is it a combination of the two? I have always felt that instrumental rationality is a bit too overreaching and encompassing. Do you think I should split it into two types of instrumental rationality, one for costs and one for value alignment, or am I not interpreting it correctly?
Notice that keeping epistemic and instrumental rationality separate works much better.
I will have other posts where I go into detail on each of these separately. I think that they are separate skills or areas of expertise, but I also think that there should be a base reason for why we should care about them.
Replies from: Lumifer
↑ comment by Lumifer · 2015-08-27T15:17:14.666Z · LW(p) · GW(p)
and think about what reasoning is aimed at achieving
So do you want to define "rationality" as a kind of reasoning? Reasoning is an opaque mental process and, for example, does not include acting which is a large part of instrumental rationality. Procrastination is a classic LW sin, but it's not a reasoning problem. And what would be non-rational reasoning besides straightforward logical errors? The great majority of thinking people do throughout the day is not formalizable into a neat system of propositions and conclusions.
It is about actually achieving desired outcomes ...at a reasonable cost
Yes, that's the definition of instrumental rationality.
It is about choosing the best outcomes to achieve. ... It is about valuing the outcomes appropriately.
Hold on, that's new. Are you claiming that (proper) values are a part of rationality and that rationality will tell you what your values should be? I think I am going to loudly object to that. Maybe you can provide an example to show what you mean?
is really about certain types of skills that allow us to do well
Hm, that's an interesting approach. Then you'd consider rationality a kind of skill -- a skill like writing essays or programming? This is probably worth exploring further.
Essentially, it would mean that ‘rationality’ is subjective.
Not sure I want to go that way. You wouldn't have many counterarguments to a bloke who declares himself perfectly rational as he goes to pray to Jesus so that he wins the lottery. And once you introduce an "objective way to judge" there doesn't seem to be any point to the subjectivity any more.
I always thought that this was a part of what it means to be instrumentally rational. Basically to have optimal goals as well.
See above -- goals are a direct function of values and I have very strong doubts that rationality can tell you what your values should be.
it about making your values coherent
Humans don't have coherent values. In fact, I don't think you can make system of values complex enough to deal with real life fully coherent (people who come close to that are usually called "crazy fanatics"). Instead, what people do is trade off different values against each other and come up with an end-result balance where they are willing to sacrifice some A, B, and C but gain X, Y, and Z. As a crude approximation you can think about it as summing different vectors and acting according to where the summed vector points.
I think that to what degree rationality applies here is a hard question. On the one hand, there is no basis for rationality to say "you need to value this and not value that". On the other hand, values and their weights are not stable across time, and part of rationality is juggling short-term and long-term desires and consequences -- usually pointing out that it's not smart to pay with a lot of long-term pain for a jolt of short-term pleasure. That's where this whole bit about "imagine yourself as a very smart, calm, capable human being -- what would she choose?" comes in.
So, yes, it's complicated. I have issues with listening to "It's not rational to value/desire this", but I have much less issues with "The price for this action that you want to do is really high, are you quite sure you want to pay it, that doesn't look rational". I am not sure where the proper boundary is.
Replies from: ScottL
↑ comment by ScottL · 2015-08-28T15:14:08.509Z · LW(p) · GW(p)
So do you want to define "rationality" as a kind of reasoning? Reasoning is an opaque mental process and, for example, does not include acting which is a large part of instrumental rationality.
When I use the word reasoning, I really mean both the system 1 and system 2 cognitive processes. By rational I basically mean reasoning (system 1 and 2) done well, where 'done well' is defined based on your most trusted source. For us this is science, so logic, probability, decision theory etc. for system 2.
Hold on, that's new. Are you claiming that (proper) values are a part of rationality and that rationality will tell you what your values should be? I think I am going to loudly object to that. Maybe you can provide an example to show what you mean?
I don’t know what “proper” would mean. I am talking about coherence which means that its “properness”, I suppose, depends on its context, i.e. the other pre-existing values. I will give you some examples. I will assume that you already know the difference between wanting and liking.
- Excessive Wanting - an example is drug addiction: “Only ‘wanting’ systems sensitize, and so ‘wanting’ can increase and become quite intense due to sensitization, regardless of whether a drug still remains ‘liked’ after many repeated uses”.
- Not liking things that you should or could - examples are bad experiences that cause aversion conditioning to something that you used to or could like. My general view is that if you don’t like something and you could then this is a limitation.
- Not wanting things you like - ugh fields are an example of this.
- Conflicting wants - this is often inevitable; like you say, value is complex. But I think it is important to look at what the fundamental human values or needs are and try to align with those. If you don’t, then in general there is going to be a greater amount of conflict.
I would need to write a full post on the details, but that is just a general idea of what I mean. You also consider the values of others that you are interconnected with and care about.
Hm, that's an interesting approach. Then you'd consider rationality a kind of skill -- a skill like writing essays or programming? This is probably worth exploring further.
I don’t see how you can view it as anything but a skill. This is because epistemic rationality, for example, is only valuable instrumentally. It helps you make more rational decisions, but the truer beliefs it produces need to be applied to actually be useful and improve your rationality. If you spend lots of effort creating true beliefs and then compartmentalize that knowledge and don’t apply it, you have effectively gained nothing in terms of rationality. That’s my view anyway. I don’t know how many people would agree. An example is Aumann: he knows a lot about rationality, but I don’t think he is rational because it looks to me like he believes in non-overlapping magisteria.
So, yes, it's complicated. I have issues with listening to "It's not rational to value/desire this", but I have much less issues with "The price for this action that you want to do is really high, are you quite sure you want to pay it, that doesn't look rational". I am not sure where the proper boundary is.
I agree with you on this and your other points on how value is complex. I think that to say “it is rational to value/desire this” there needs to be a ‘because’ after that statement. No value/desire is rational or irrational in and of itself. It is only rational or irrational in a context, that is, because of its relation to other values or the costs to fulfil it, etc.
Right now, I am thinking that I need to make the base concepts of rationality more solid before I can move into what rationality is for this compendium.
This is my first attempt at defining things. My goal is to define things in a programmatic kind of way. This means that the concepts should follow: single responsibility, loose coupling, yagni, etc. Let me know what you think.
The goal of the definitions is just to highlight the right areas in concept space. They are drafts and will obviously need more detail. I would also need to submit them as posts and see if others agree.
I am thinking that there should be two basic areas: system 1 and system 2 rationality. Where rationality, in its most basic form, means done well (this will need to be expanded upon). The goal of the two areas is to define what it is we are referring to when we say that something is rational or irrational. There are two areas so that we can distinguish rationality/irrationality in formal reasoning vs. your intuitions or what you actually do vs. what you think you should do.
There are also skills, or general topics, which describe groups of techniques and methods that can be used to improve your rationality in one or both of the two areas. Using these skills means that you apply them using volitional effort. It is noted, however, that if you use these skills often enough they are likely to become embedded in your system 1 processes.
There may be more skills, but I think the main ones are below:
- Epistemic rationality - true beliefs and all that
- Instrumental rationality - (restricted to reasonable costs)
- Value coherence rationality - I gave some examples, but it basically means noticing when your values and desires are out of alignment or could become so if you did some action.
- Distributive rationality - this is basically what you are talking about in the above quote. Once you have a semi-sufficient valuation system in place how can you actually distribute resources so that you achieve what you value.
- Perspectival rationality - no matter how great you are at being rational, you are limited by the ideas that you can come up with. You are limited by your paradigms and perspectives. Perspectival rationality is about knowing when to look at a situation from multiple perspectives and having the ability to model the territory or map accurately from another perspective. By modelling the map from another perspective, it is meant that you are thinking about what the maps of someone else, or of yourself in a future or past tense, would be like for a given situation. By modelling the territory, it is meant that you are thinking about what the territory will be like if some situation occurs. An important part of perspectival rationality is being able to coalesce the information from multiple perspectives into a coherent whole. The aim of perspectival rationality is greater novelty in your ideas, broader utility in solutions and more pragmatic results. It also includes understanding the necessarily flawed and limited nature of your perspective. You need to constantly be seeking feedback and other perspectives. It would relate to complexity theory, agile software development, systems dynamics, Boydian thinking and mental models/schemas/scripts (whatever you want to call it). I plan to write some posts around this idea.
- Communicative rationality - how can you communicate well. I will need to look into this one, but I think it’s important.
- Applied rationality - This relates to when you already know what the best thing to do is and is about how you can get yourself to actually do it. Examples of this are training will power or courage (doing something you don't want to, but believe you should), dealing with ugh fields.
↑ comment by Lumifer · 2015-08-31T01:07:08.442Z · LW(p) · GW(p)
rational I basically mean reasoning (system 1 and 2) done well. Where done well, is defined based on your most trusted source.
I am not sure I understand -- is "most trusted source" subjective? What if Jesus is my most trusted source? And He is for a great deal of people.
I am talking about coherence which means that its “properness”, I suppose, depends on its context, i.e. the other pre-existing values.
Do you think it could be reformulated in the framework where values form tree-like networks with some values being "deep" or "primary" and other values being "shallow" or "derived" or "secondary"? Then you might be able to argue that a conflict between a deep and a shallow value should be resolved by the declaring the shallow value not rational.
I don’t see how you can view it as anything but a skill
I meant this more specifically in the looking for a definition context.
One very common way of making a definition is to point to a well-known class, say, Bet and then define a sub-class beta by listing a set of features {X} which allow you to decide whether a particular object b from the super-class Bet belongs to the sub-class beta or not. Such definitions are sometimes called is-a-kind-of definitions: beta is a kind of Bet.
So if we were to try to give an is-a-kind-of definition of rationality, what is the super-class? Is it reasoning? Is it skills? Something else?
No value/desire is rational or irrational in and of itself. It is only irrational or irrational in a context. That is, because of its relation to other values or the costs to fulfil it etc.
So how to avoid being caught in a loop: values depend on values which depend on values that depend on values..?
This means that the concepts should follow: single responsibility, loose coupling, yagni etc.
Not sure about yagni, since it is not the case that you can always go back to a core definition and easily update it for your new needs. If there's already a structure built on top of that core definition, changing it might prove to be quite troublesome. Loose coupling and such -- sure, if you can pull it off :-) Software architecture is... much less constrained by reality :-)
two basic areas: system 1 and system 2 rationality
What do you mean by system 2 rationality? Intuitions that work particularly well? Successful hunches?
I think the main ones are below
That's a very wide reach. Are you sure you're not using "rationality" just as a synonym for "doing something really well"?
Replies from: ScottL
↑ comment by ScottL · 2015-08-31T06:35:57.835Z · LW(p) · GW(p)
That's a very wide reach. Are you sure you're not using "rationality" just as a synonym for "doing something really well"?
I mean do well in the areas I talked about before. In summary, I basically mean do well at coming up with solutions to problems or choosing/being able to go through with the best solution, out of all of the solutions you have come up with, to a problem.
I will try to define it again.
First off, there is comprehensive rationality or normative rationality. This does not consider agent limitations. It can be thought of as having two types.
- Prescient - outcomes are known and fixed. The decision makers maximise the outcomes with the highest utilities (discounted by costs).
- Non-prescient - like the prescient model, but it integrates risk and uncertainty by associating a probability distribution with the models where the probability is estimated by the decision maker.
In both cases, choices among competing goals are handled by something like indifference curves.
We could say that under the comprehensive rational model a rational agent is one that maximizes its expected utility, given its current knowledge.
When we talk about rationality, though, we normally mean in regards to humans. This means that we are talking about bounded rationality. Like comprehensive rationality, bounded rationality assumes that agents are goal-oriented, but bounded rationality also takes into account the cognitive limitations of decision makers in attempting to achieve those goals.
Bounded rationality deals with agents that are limited in many ways which include being:
- Unable to determine all outcomes. Organisms with cognitive limitations have a need to satisfice and an inability to consider long sequential outcomes that are inextricably tied. There is also a tendency to focus on a specific set of the overall goals or outcomes due to priming/framing.
- Unable to determine all of the pertinent information.
- Unable to determine all of the possible inferences.
The big difference between bounded rationality and normative rationality is that in bounded rationality you also consider the agent improving its ability to choose or come up with the best outcomes as rational, as long as there are no costs or missed opportunities involved. Therefore, a rational agent, in the bounded sense, is one that has three characteristics (a toy sketch follows this list):
- It has a honed ability to return decent sets of outcomes from its searches for outcomes
- The expected utility it assigns to outcomes accurately matches the actual expected utility
- It chooses the best outcome returned by its searches for outcomes. The best outcome is the one with the highest expected utility (discounted by costs)
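A toy sketch of such an agent, under made-up assumptions (a hypothetical true_utility function standing in for the world, a fixed search budget, noisy utility estimates, and a per-option search cost), might look like the following; it is only meant to illustrate the three characteristics, not to be a definitive model of bounded rationality.

```python
import random

# A bounded agent that (1) searches only a small sample of options,
# (2) estimates their utilities with some noise, and (3) picks the option
# with the highest estimated utility, net of the cost spent searching.
# All names and numbers are invented for illustration.

random.seed(0)

def true_utility(option):
    # Stand-in for the world: the agent never sees this directly.
    return {"a": 4.0, "b": 7.0, "c": 9.0, "d": 2.0, "e": 6.0}[option]

def bounded_choice(all_options, budget, cost_per_option, noise):
    considered = random.sample(all_options, k=budget)    # limited search
    estimates = {o: true_utility(o) + random.gauss(0, noise)
                 for o in considered}                     # noisy estimates
    search_cost = budget * cost_per_option
    best = max(estimates, key=estimates.get)
    return best, estimates[best] - search_cost

option, net = bounded_choice(list("abcde"), budget=3,
                             cost_per_option=0.5, noise=1.0)
print(f"chose {option!r} with estimated net utility {net:.2f}")
```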
Do you think it could be reformulated in the framework where values form tree-like networks with some values being "deep" or "primary" and other values being "shallow" or "derived" or "secondary"? Then you might be able to argue that a conflict between a deep and a shallow value should be resolved by the declaring the shallow value not rational.
I think that once a value is in, it is in and works just like all the others in terms of its impact on valuation. However, a distinction like the one you talked about makes sense. But I would not use 'deep' and 'shallow', because I have no idea how to determine that. Perhaps 'changeable' vs 'non-changeable' would be better. Then you can look at some conflicting values, i.e. ones that lead you to want opposite things, and ask if any of them are changeable and what the impact is of changing them. The values that relate to what you actually need are non-changeable, or at least would cause languishing if you tried to repress them. I think the problem with the tree view is that values are complex; like you were talking about before, one value may conflict with multiple other values.
So how to avoid being caught in a loop: values depend on values which depend on values that depend on values..?
I don't see the loop. This is because there is no 'value'. There is only coherence, which is just how much it conflicts with the other values. I don't know how to describe this without an eidetic example. Please let me know if this doesn't work. Imagine one of those old-style screensavers where you have a ball moving across the screen and when it hits the side of the screen it bounces in a random direction. Now, when you have a single ball it can go in any direction at all. There is no concept of coherence because there is only one ball. It is when you introduce another ball that the direction starts to matter, as there is now the factor of coherence between the balls. By coherence I mean simply that you don't want the balls to hit each other. This restricts their movement and it now becomes optimal for them to move in some kind of pattern, with vertical or horizontal lines being the simplest.
What this means for values is that you want them to basically be directed towards the same or similar targets, or at least targets that are not conflicting. A potential indicator of an irrational value is one that conflicts with other values. Of course, human values are not coherent. But incoherence is still an indicator of potential irrationality.
Unrelated to the above examples is that you would need to think about whether the target of the value is actually valuable and is worth the costs you have to pay to achieve it. This is harder to find out, but you can look at the fundamental human needs. Maybe your deep vs. shallow distinction would be useful in this context.
I am not sure I understand -- is "most trusted source" subjective? What if Jesus is my most trusted source? And He is for a great deal of people.
I don't think I am conveying this point well. I am trying to say that we only have an incomplete answer as to what is rational and that science provides the best answer we have.
One very common way of making a definition is to point to a well-known class,
I think instead of that type of definition I would rather say that rationality means doing well in the areas of X, Y and Z, and then have a list of skills or domains that improve your ability in the areas of rationality.
Do you think that there are many types of rationality? I think that there are many types of methods to achieve rationality, but I don't think there are many types of rationality.
So if we were to try to give an is-a-kind-of definition of rationality, what is the super-class? Is it reasoning? Is it skills? Something else?
I would say reasoning or maybe problem solving and outcome generation/choosing better convey the idea of it.
Replies from: Lumifer
↑ comment by Lumifer · 2015-09-01T01:22:00.494Z · LW(p) · GW(p)
I have a feeling we're starting to go in circles. But it was an interesting conversation and I hope it was useful to you :-)
Replies from: ScottL
↑ comment by ScottL · 2015-09-01T13:56:06.135Z · LW(p) · GW(p)
Sorry if I was meandering and repeating my points. I wasn't viewing this as an argument, so I don't view it as going in circles, but as going through a series of drafts. Maybe I will need to be more careful in the future. I appreciate your feedback.
In regards to what we talked about, I am not really that happy with how rationality is defined in the literature, but I am also not sure of what a better way to define it would be. I guess I will have to look into the bounded types of rationality.
Replies from: Lumifer
↑ comment by Lumifer · 2015-09-01T15:01:40.720Z · LW(p) · GW(p)
No, that's perfectly fine, I wasn't treating it as an argument, either. It's just that you are spending a lot of time thinking about it, and I'm spending less time, so, having made some points, I really don't have much more to contribute and I don't want to fisk your thinking notes. No need to be careful in drafts, that's not what they are for :-)
comment by Viliam · 2015-08-24T08:49:26.629Z · LW(p) · GW(p)
When you criticize Spock's "rationality", I think it would be better to visually separate the beliefs you disagree with, otherwise an inattentive reader might get confused about what exactly you are trying to say. Like this:
- "You can expect everyone to react in a reasonable, or what Spock would call rational, way." This expectation is irrational because...
- "You should never make a decision until you have all the information." This requirement is irrational because...
- "You should never rely on intuition." This is irrational because...
- "You should not become emotional." This is irrational because...
comment by LawrenceC (LawChan) · 2015-08-24T16:20:46.949Z · LW(p) · GW(p)
I agree denotationally, but object connotatively, with 'rationality is systemized winning', so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with 'winning'. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe that by winning Eliezer meant winning at Newcomb's problem, but the idea of winning is normally extended into everything.
I think that Eliezer has disavowed using this statement precisely because of the connotations that people associate with it.
It is because of this that rationality is often considered to be split into two parts: normative and descriptive rationality.
What happened to prescriptive rationality?
Replies from: ScottL
comment by [deleted] · 2015-08-23T12:42:18.697Z · LW(p) · GW(p)
This is more like a glossary than a primer
Replies from: ScottL↑ comment by ScottL · 2015-08-23T13:20:24.542Z · LW(p) · GW(p)
A glossary is just an alphabetical list of words relating to a specific subject, text, or dialect, with explanations; a brief dictionary. I suppose the wiki part is sort of like a glossary, but overall I don't think these posts are a glossary. I think you are right, though, that primer is not the best word for it. I changed it to compendium. A compendium is a collection of concise but detailed information about a particular subject.