When is Winning not Winning?
post by Eneasz · 2012-05-22T16:25:19.474Z · LW · GW · Legacy · 60 comments
Lately I'd gotten jaded enough that I simply accepted that different rules apply to the elite class. As Hanson would say, most rules are there specifically to curtail those who don't have the ability to avoid them and to be side-stepped by those who do - it's why we evolved such big, manipulative brains. So when this video recently made the rounds it shocked me to realize how far my values had drifted over the past several years.
(The video is not about politics; it is about status. My politics are far from Penn's.)
http://www.youtube.com/watch?v=wWWOJGYZYpk&feature=sharek
It's good we have people like Penn around to remind us what it was like to be teenagers and still expect the world to be fair, so our brains can be used for more productive things.
By the measure our society currently uses, Obama was winning. Penn was not. Yet Penn’s approach is the winning strategy for society. Brain power is wasted on status games and social manipulation when it could be used for actually making things better. The machinations of the elite class are a huge drain of resources that could be better used in almost any other pursuit. And yet the elites are admired high-status individuals who are viewed as “winning” at life. They sit atop huge piles of utility. Idealists like Penn are regarded as immature for insisting on things as low-status as “the rules should be fair and apply identically to everyone, from the inner-city crack-dealer to the Harvard post-grad.”
The “Rationalists Should Win” meme is a good one, but it risks corrupting our goals. If we focus too much on “Rationalists Should Win”, we risk going for near-term gains that benefit us. Status, wealth, power, sex. Basically hedonism – things that feel good because we’ve evolved to feel good when we get them. Thus we feel we are winning, and we’re even told we are winning by our peers and by society. But these things aren’t of any use to society. A society of such “rationalists” would make only feeble and halting progress toward grasping the dream of defeating death and colonizing the stars.
It is important to not let one’s concept of “winning” be corrupted by Azathoth.
ADDED 5/23:
It seems the majority of comments on this post are from people who disagree on the basis that rationality is a tool for achieving ends, but not for telling you which ends are worth achieving.
I disagree. As is written, "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which." And rationality can help to decide which is which. In fact without rationality you are much more likely to be partially or fully mistaken when you decide.
60 comments
Comments sorted by top scores.
comment by [deleted] · 2012-05-22T16:48:07.795Z · LW(p) · GW(p)
"You win the game you are playing"
Play the right game.
Replies from: faul_sname↑ comment by faul_sname · 2012-05-22T16:51:09.658Z · LW(p) · GW(p)
Which game is that?
Replies from: None↑ comment by [deleted] · 2012-05-22T16:53:37.294Z · LW(p) · GW(p)
I don't know. That's your problem.
Replies from: faul_sname, hankx7787↑ comment by faul_sname · 2012-05-22T19:39:33.710Z · LW(p) · GW(p)
It seems the OP thinks that the right game for the group as a whole and the right game for the individuals within that group are different. So if it's up to the individual which game to play, they will play the one that benefits them and the group will lose.
Replies from: None↑ comment by [deleted] · 2012-05-23T00:01:22.757Z · LW(p) · GW(p)
Humans aren't purely selfish. If we all play our individual game, the group will do just fine. As evidenced by the fact that we are even talking about the group as if it matters.
Even with selfish agents, the best strategy is to cooperate under certain (our) conditions.
Replies from: billswift↑ comment by hankx7787 · 2012-05-22T18:56:07.144Z · LW(p) · GW(p)
That's not really a good answer, so I downvoted.
Replies from: Dorikka, None↑ comment by Dorikka · 2012-05-22T20:11:27.301Z · LW(p) · GW(p)
The right game for you will be dependent on your utility function, no?
Replies from: faul_sname↑ comment by faul_sname · 2012-05-22T23:11:25.092Z · LW(p) · GW(p)
Not just that, or else we'd say that defectors in PD are winning.
Replies from: Dorikka, None↑ comment by Dorikka · 2012-05-22T23:18:15.804Z · LW(p) · GW(p)
I don't understand the relevance of your comment; could you explain? (Expected payout for all agents in PD increases if they can find a way to cooperate AFAIK, even if all are completely selfish.)
Replies from: faul_sname↑ comment by faul_sname · 2012-05-22T23:24:45.621Z · LW(p) · GW(p)
Expected payout for one agent increases even more if they can convince everyone else to cooperate while they defect. This is the game you want to keep the other agents from playing, and while TDT works when all the other agents use a similar decision strategy, it fails in situations where they don't. Which is exactly the problem Eneasz was getting at.
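A minimal sketch of the payoff structure in question, assuming the conventional Prisoner's Dilemma values 5 > 3 > 1 > 0 (the specific numbers are illustrative, not from the comment):

```python
# One agent's payoff as a function of (my move, the other agent's move),
# with the usual temptation > reward > punishment > sucker ordering.
payoff = {
    ("D", "C"): 5,   # I defect while the other cooperates: the largest prize
    ("C", "C"): 3,   # mutual cooperation
    ("D", "D"): 1,   # mutual defection
    ("C", "D"): 0,   # I cooperate while the other defects
}
assert payoff[("D", "C")] > payoff[("C", "C")] > payoff[("D", "D")] > payoff[("C", "D")]
# Convincing everyone else to cooperate while you defect yields the single
# highest payoff, which is exactly the game you want to keep others from playing.
```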
comment by Emile · 2012-05-22T18:28:03.813Z · LW(p) · GW(p)
Could you please include a summary of what goes on in the video? That would make it easier for those of us who can read but not listen (noisy room and all that).
Replies from: Eneasz, saturn↑ comment by Eneasz · 2012-05-22T19:12:22.618Z · LW(p) · GW(p)
Penn Jillette comments at length and with great anger that Obama nonchalantly talked about drug use in college and yet continues to enforce federal drug laws. Penn rants that Obama wouldn't be nearly so nonchalant about it if he had been treated like those of the lower classes, who would have served jail time for the same actions and been left with a permanent record that would make them nearly unemployable and certainly never able to go into politics.
Replies from: gwern, Unnamed, Multiheaded↑ comment by Multiheaded · 2012-05-23T11:24:42.924Z · LW(p) · GW(p)
It's even worse in Russia. Like most things. (Not that I'm unpatriotic.)
comment by faul_sname · 2012-05-22T16:43:14.775Z · LW(p) · GW(p)
One's concept of "winning" comes from Azathoth. There's no avoiding that. However, not everything from Azathoth is a bad thing, so I doubt that your real objection is that we follow an evolved morality. Is this an accurate interpretation of what you're saying?
"Often, the 'rational' choice for agents is to defect in Prisoner's Dilemma-type situations. Those that do are rewarded, but their actions are a net negative for society. Despite this, we say that those who do antisocial things are winning, while those that do prosocial things are not winning. Shouldn't we reward those who sacrifice their own personal goals to help the group?"
comment by Desrtopa · 2012-05-22T17:04:02.775Z · LW(p) · GW(p)
This is why I have much lower expectations for individual rationality than for group rationality. Sometimes, strong individual rationality in a low-rationality population can't really do better than implementing solutions like defecting in a tragedy of the commons that a more rational group could have avoided in the first place.
comment by Xachariah · 2012-05-22T22:26:51.197Z · LW(p) · GW(p)
I'm confused about your opening statement regarding 'different rules apply to the elite class.' Drug usage is not limited to the upper class, nor is admitting that you've used drugs. Obama could hardly have been called elite when he was using drugs, and barely even when he was writing that book. My friends and acquaintances are equally open about their past drug use.
To put it more succinctly, he was treated the same way most lower class drug users are. They receive no punishment and eventually grow up and do fine in life.
Your paragraph on 'Obama Winning, Penn is not' is similarly confusing. Obama is the President of the USA and presumably sitting on the hugest pile of utility on earth, but Penn Jillette is exceedingly rich sitting atop an estimated $175 million net worth. By my estimation, both are winning.
Replies from: EternalArchon, Eneasz↑ comment by EternalArchon · 2012-05-22T23:58:15.371Z · LW(p) · GW(p)
he was treated the same way most lower class drug users are. They receive no punishment and eventually grow up and do fine in life.
1) I think the OP knows that, and maybe what he's saying is more like: it isn't that people don't care about drug use; they like their tribal leaders to be "effective" rule-breakers. An Obama who never did drugs might be less popular and less cool.
2) I assume you're saying that 'treated the same way' means not caught. Most poor and rich escape being caught, but that is very different from equal treatment once caught.
Replies from: Eneasz↑ comment by Eneasz · 2012-05-23T17:26:22.564Z · LW(p) · GW(p)
1 - yes exactly. Thank you.
2 - also in agreement. In the video Penn mentions a couple times that if Obama had been caught he'd be screwed, which is absolutely laughable. He would have been let off with a wink and a nod, due to his elite status. But I didn't want to side-track the post.
↑ comment by Eneasz · 2012-05-23T17:31:42.165Z · LW(p) · GW(p)
but Penn Jillette is exceedingly rich sitting atop an estimated $175 million net worth. By my estimation, both are winning.
In general, perhaps; in this particular case I don't think so. Penn's vision for a fair society is being frustrated by the old boys' club, without them even putting much effort into it. Obama is being rewarded for his place in the game; Penn is being handicapped by his (and fortunately has enough resources that the handicap is more than tolerable).
Replies from: CuSithBell↑ comment by CuSithBell · 2012-05-23T18:47:43.471Z · LW(p) · GW(p)
This seems like an overbroad definition of "winning"...
Replies from: loup-vaillant↑ comment by loup-vaillant · 2012-05-23T21:45:53.171Z · LW(p) · GW(p)
Let "winning" be "increasing one's own utility function". Or "achieving one's goals".
Utility functions can wildly differ (think psychopath vs saint), giving the appearance of an overly broad definition for "winning". But I think that's a proper one.
Replies from: CuSithBell↑ comment by CuSithBell · 2012-05-23T22:59:25.665Z · LW(p) · GW(p)
Well, okay, I just mean... the biggest factor in fulfilling that particular part of their utility functions is what utility function they have, not any particular ability or choice on their part.
comment by [deleted] · 2012-05-24T09:03:39.798Z · LW(p) · GW(p)
I ask the OP: Have you read the meta-ethics sequence? It answers many of your questions.
Other than that, your title is misleading and your post can be summarized as "society sucks because humans are stupid." If you care, look into raising the sanity waterline.
Replies from: Eneasz↑ comment by Eneasz · 2012-05-24T17:36:15.282Z · LW(p) · GW(p)
- Yes (altho it's been a long time now, could probably reread to refresh)
- Yeah, I'm trying. A lot of the point of the original post was "Let's not get hung up on winning, and rather let's get society to be more sane, cuz winning in an insane society is not something I'd consider "winning" at all"
↑ comment by [deleted] · 2012-05-24T18:33:31.090Z · LW(p) · GW(p)
You seem to be confused as to what 'winning' means, because it literally only means 'achieving arbitrary goal X', and rational action furthers that goal in the most effective way possible.
I do, however, agree that society's current goal set is morally wrong, and that, as I see it, winning is doing what is right.
comment by pleeppleep · 2012-05-23T13:43:47.284Z · LW(p) · GW(p)
"Winning" means achieving your goals. It doesn't mean optimizing society unless optimizing society fits into your utility function. If you value hedonism above all, then you win by experiencing pleasure. Its generally agreed here that hedonism alone is insufficient for achieving happiness. You say that resources are wasted on status games instead of making things "better". What does "better" mean? "Better" for who? Rationalism is NOT about holding onto moral obligations. It is about eliminating cognitive flaws so as to improve your ability as an agent to achieve your goals. What you are essentially arguing here is that certain values are somehow inferior to others because they do not correspond to your sense of ethics.
comment by EternalArchon · 2012-05-22T22:36:44.169Z · LW(p) · GW(p)
I can feel this post triggering some BlueVsGreen thinking habits. Instead I'm going to attempt to stay Bruce Banner and simply ask for clarification, but if my comments appear frustrated or insulting, please forgive me.
Can someone, OP or otherwise, explain to me, directly, the connection being made between Penn's rant and rationalists loving hedonism? Even if I accept each assertion, the materials don't construct a train-track capable of being traveled for my brain:
- What does Penn's rant have to do with the nature of the goals we choose and should choose?
- Winning short-term goals can be destructive to long-term goals; I got it. Again, though, I'm not seeing the connection to the prior paragraphs. The seduction of the short term, even wanting seemingly human-length (years) rather than generations-long goals. Got it. Important topic. However, again: how does this relate to the previous paragraphs about Obama/Penn/society/elites/etc.?
My inklings-
- There is a lot of talk about Penn; or is this a hidden discussion of high-utility Obama and HIS hedonistic behavior? Not accusing, but when I supported a color team, I found it difficult to directly associate faults with the team leader.
- Am I over-analyzing due to repeated pattern-exposure/anchoring to difficult not-how-homo-sapiens-were-evolved-to-think bias Articles of Truth+3? Should this instead be taken as a loose interior monologue to explain how one event (the video) sparked a series of associations to bring up a topic worthy of further discussion (devilish attraction of short-term/winnable goals)?
Again, the ending topic is very worthy of discussion, but I'm not seeing how it fits together.
Edit: fixed link error with a bigger error, then fixed again.
Replies from: Zaine, BlazeOrangeDeer↑ comment by Zaine · 2012-05-22T23:16:10.264Z · LW(p) · GW(p)
The high-status elites Eneasz refers to are rewarded by society with praise, respect, worship, etc. for playing the game in near mode, focusing mainly on maintaining their high-status profiles with few ulterior motives (at least, few that have a high probability of creating net world-wide utility). Such would be the same feedback loop for near-mode winning rationalist hedons.
That's how I understood the transition, anyhow.
I agree the danger is certainly worth considering, and think we should remember Machiavelli's position on the role of princes: The Prince's duty is to attain power and maintain it, by whatever means necessary to ensure the benefit of his people. *Paraphrased
Id est, the ends justify the means, but only so long as the ends benefit the people; purely status-oriented games only benefit those who play them.
↑ comment by BlazeOrangeDeer · 2012-05-23T05:39:38.404Z · LW(p) · GW(p)
Yes to the "loose interior monologue" bit. The video sparked a question about what society considers to be "winning" and how the meme of winning rationalists relates to that. And in the future you may want to include your inklings right after you mention your suspicions of blue/green thinking; I almost downvoted you because I thought you were just another complainer. Upvoted because I found your first inkling interesting and not obviously wrong.
comment by bryjnar · 2012-05-22T17:04:36.564Z · LW(p) · GW(p)
Rationality doesn't tell you what to care about. It can tell you how to be the best paperclip-maximiser equally well. "Winning" depends on what you care about; and most of us do care about the fate of society, not just maximising our own wealth and status. So being a hedonist wouldn't necessarily be "winning".
What is a problem is forgetting that you care about long-term things, or far away people. So we certainly do need to be on guard against short-termism, and if "winning" connotes focussing on short-term benefit to you, then perhaps that's an argument to stop using the word. But it's not a deep problem.
comment by Athrelon · 2012-05-22T16:54:59.088Z · LW(p) · GW(p)
"Winning" refers to making progress towards whatever goals you set for yourself. Rationality can help you achieve your goals but - unless you'res suffering from akrasia - offers little guidance in figuring out what goals you should have.
Replies from: pragmatist, Eneasz↑ comment by pragmatist · 2012-05-22T20:29:13.648Z · LW(p) · GW(p)
It's a rule of epistemic rationality that, all other things being equal, one should adopt simpler theories. Why shouldn't this also extend to practical rationality, and to the determination of our goals in particular? If our ultimate values involve arbitrary and ad hoc distinctions, then they are irrational. Consider, for instance, Parfit's example of Future Tuesday Indifference:
A certain hedonist cares greatly about the quality of his future experiences. With one exception, he cares equally about all the parts of his future. The exception is that he has Future-Tuesday-Indifference. Throughout every Tuesday he cares in the normal way about what is happening to him. But he never cares about possible pains or pleasures on a future Tuesday.
I think that any account of practical rationality that does not rule Future Tuesday Indifference an irrational ultimate goal is incomplete. Consider also Eliezer's argument in Transhumanism as Simplified Humanism.
Of course, this doesn't apply directly to the point raised by Eneasz, since the distinction between values he is talking about can't obviously be cashed out in terms of simplicity. But I think there's good reason to reject the general Humean principle that our ultimate values are not open to rational criticism (except perhaps on grounds of inconsistency), and once that is allowed, positions like the one held by Eneasz are not obviously wrong.
Replies from: JGWeissman↑ comment by JGWeissman · 2012-05-22T21:52:43.659Z · LW(p) · GW(p)
Having a high quality experience at all times other than Tuesdays seems to be a strange goal, but one that a person could coherently optimize for (given a suitable meaning of "high quality experience"). The problem with Future Tuesday Indifference is that at different times, the person places different values on the same experience on the same Tuesday.
Replies from: pragmatist↑ comment by pragmatist · 2012-05-22T22:17:57.324Z · LW(p) · GW(p)
Yeah, I see that Future Tuesday Indifference is a bad example. Not precisely for the reason you give, though, because that would also entail that any discounting of future goods is irrational, and that doesn't seem right. But Future Tuesday Indifference would involve the sort of preference switching you see with hyperbolic discounting, which is more obviously irrational and might be confounding intuitions in this case.
So here's a better example: a person only assigns value to the lives of people who were born within a five-mile radius of the Leaning Tower of Pisa. This is an ultimate value, not an instrumental one. There's no obvious incoherence involved here. A person could coherently optimize for this goal. But my point is that this does not exhaust our avenues for rational criticism of goals. The fact that this person has an ultimate value that relies on such a highly specific and arbitrary distinction is grounds for criticism, just as it would be if the person adopted a scientific theory which (despite being empirically adequate) postulated such a distinction.
Replies from: JGWeissman↑ comment by JGWeissman · 2012-05-22T22:36:21.456Z · LW(p) · GW(p)
Not precisely for the reason you give, though, because that would also entail that any discounting of future goods is irrational, and that doesn't seem right.
Discounting of future goods does not involve assigning different values to the same goods at the same time.
So here's a better example: a person only assigns value to the lives of people who were born within a five-mile radius of the Leaning Tower of Pisa. This is an ultimate value, not an instrumental one. There's no obvious incoherence involved here. A person could coherently optimize for this goal. But my point is that this does not exhaust our avenues for rational criticism of goals. The fact that this person has an ultimate value that relies on such a highly specific and arbitrary distinction is grounds for criticism, just as it would be if the person adopted a scientific theory which (despite being empirically adequate) postulated such a distinction.
I would not criticize this goal for being "irrational", though I would oppose it because it conflicts with my own goals. My opposition is not because it is arbitrary, I am perfectly happy with arbitrariness in goal systems that aligns with my own goals.
Replies from: pragmatist↑ comment by pragmatist · 2012-05-22T22:46:45.283Z · LW(p) · GW(p)
Discounting of future goods does not involve assigning different values to the same goods at the same time.
The qualifier "at the same time" is ambiguous here.
If you mean that different values are assigned at the same time, so that the agent has conflicting utilities for a goal at a single time, then you're right that discounting does not involve this. But neither does Future Tuesday Indifference, so I don't see the relevance.
If "at the same time" is meant to modify "the same goods", so that what you're saying is that discounting does not involve assigning different values to "good-g-at-time-t", then this is false. Depending on the time at which the valuation is made, discounting entails that different values will be assigned to "good-g-at-time-t".
Replies from: JGWeissman↑ comment by JGWeissman · 2012-05-22T23:04:12.353Z · LW(p) · GW(p)
If "at the same time" is meant to modify "the same goods", so that what you're saying is that discounting does not involve assigning different values to "good-g-at-time-t", then this is false. Depending on the time at which the valuation is made, discounting entails that different values will be assigned to "good-g-at-time-t".
Suppose an agent with exponential time discounting assigns goods G at a time T a utility of U0(G)*exp(a*(T0-T)). Then that is the utility the agent at any time assigns those goods at that time. You may be thinking that the agent at time TA assigns a utility to the goods G at the same time T of U0(G)*exp(a*(TA-T)) and thus the agent at different times is assigning different utilities, but these utility functions differ only by the constant (over states of the universe) factor exp(a*(TA-T0)), which, being an affine transformation, doesn't matter. The discounting agent's equivalency class of utility functions representing its values really is constant over the agent's subjective time.
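A quick numeric check of this constant-factor claim, as a minimal sketch with an arbitrary discount rate and made-up base utilities for a few hypothetical goods:

```python
import math

# Arbitrary illustrative values: discount rate a, anchor times T0 and TA,
# and base utilities U0(G) for a few hypothetical goods.
a, T0, TA = 0.1, 0.0, 3.0
base = {"apple": 2.0, "concert": 5.0, "vacation": 11.0}

def u(anchor, good, T):
    # Utility assigned to `good` enjoyed at time T, in the form anchored at
    # `anchor`: U0(G) * exp(a * (anchor - T)).
    return base[good] * math.exp(a * (anchor - T))

for good in base:
    for T in (1.0, 4.0, 10.0):
        ratio = u(TA, good, T) / u(T0, good, T)
        print(good, T, round(ratio, 6))  # always exp(a*(TA-T0)) ~= 1.349859

# The ratio never depends on the good or on T: the time-TA and time-T0 utility
# functions differ only by a positive constant factor, so they rank every
# outcome identically.
```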
Replies from: pragmatist↑ comment by pragmatist · 2012-05-22T23:11:08.272Z · LW(p) · GW(p)
Ah, I see. You're right. Comment retracted.
↑ comment by Eneasz · 2012-05-22T17:27:26.642Z · LW(p) · GW(p)
It's my contention that rationality should offer guidance in figuring out what goals you should have. A rationalist society will have goals closer to "defeat death and grasp the stars" than "gain ALL the status". It's not just rationalists who should win, it's rational societies who should win. If you're in a society that is insane then you may not be able to "win" as a rationalist. In that case your goal should not be "winning" in the social-traditional sense, it should be making society sane.
Replies from: NMJablonski, asparisi↑ comment by NMJablonski · 2012-05-22T18:12:24.768Z · LW(p) · GW(p)
You're privileging your values when you judge which society - the status game players versus the immortal starfarers - is "winning".
Replies from: Phoenix↑ comment by Phoenix · 2012-05-23T00:29:18.036Z · LW(p) · GW(p)
I don't think that that's a bad thing. The immortal starfarers necessarily go somewhere; the status game players don't necessarily go anywhere. Hence "winning". The point of the post was to warn that not only answering our questions but figuring out which questions we should ask is an issue we have to tackle. We have to figure out what winning should be.
The reason that the immortal starfarers are better is that they're trying to do that, so if all values aren't created equally, they're more likely to find out about it.
Replies from: NMJablonski↑ comment by NMJablonski · 2012-05-23T14:28:43.703Z · LW(p) · GW(p)
The immortal starfarers necessarily go somewhere; the status game players don't necessarily go anywhere. Hence "winning".
Deciding that going somewhere is "winning" comes from your existing utility function. Another person could judge that the civilization with the most rich and complex social hierarchy "wins".
Rationality can help you search the space of actions, policies, and outcomes for those which produce the highest value for you. It cannot help you pass objective judgment on your values, or discover "better" ones.
↑ comment by asparisi · 2012-05-22T17:38:47.051Z · LW(p) · GW(p)
I think that's almost completely wrong. Being human offers guidance in figuring out what goals and values we should have. If the values of the society would be seen as insane by us, a rationalist will still be more likely to win over more of those societies than average.
Replies from: pragmatist↑ comment by pragmatist · 2012-05-22T20:44:48.152Z · LW(p) · GW(p)
If the values of the society would be seen as insane by us, a rationalist will still be more likely to win over more of those societies than average.
I suspect that, if rigorously formulated, this claim will run afoul of something like the No Free Lunch Theorem.
Replies from: asparisi↑ comment by asparisi · 2012-05-22T21:05:50.406Z · LW(p) · GW(p)
Can you explain this suspicion? I'm not saying that "Rationalists always win": I am saying that they win more often than average.
Say you are in society X, which maximizes potential values [1, 2, 7] through mechanism P and minimizes potential values [4, 9, 13] through mechanism Q.
A rationalist (A) who values [1, 4, 9] will likely not do as well as a random agent (B) that values [1, 2, 7] under X, because the rationalist will only get limited help from P while having to counteract Q, while the other agent (rationalist or not) will receive full benefit from P and no harm from Q. So it's trivially true that a rationalist does not always do better than other agents: sometimes the game is set against them.
A rationalist (A) will do better than a non-rationalist (C) with values [1, 4, 9] if having an accurate perception of P allows you to maximize P for 1 or having an accurate perception of Q allows you to minimize Q for [4, 9]. In the world we live in, at least, this usually proves true.
But A will also do better than B in any society that isn't X, unless B is also a rationalist. They will have a more accurate perception of the reality of the society they are in and thus be better able to maximize the mechanisms that aid their values while minimizing the mechanisms that countermand them.
That's what I meant by "more likely to win over more of those societies than average."
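A toy numeric sketch of this argument; the scoring rule and the perception bonus for rationalists are made-up assumptions added purely for illustration, not anything specified above:

```python
# Toy version of the setup above. The payoff rule and the perception bonus
# are invented, chosen only to illustrate the three claims
# (B > A under X, A > C under X, A > B in a society other than X).
def score(values, boosted, suppressed, rationalist):
    # A rationalist, perceiving mechanisms P and Q accurately, captures more
    # of the help for boosted values and dodges more of the harm to suppressed ones.
    capture, dodge = (0.8, 0.8) if rationalist else (0.5, 0.2)
    helped = len(values & boosted)
    hurt = len(values & suppressed)
    return round(capture * helped - (1 - dodge) * hurt, 2)

X = ({1, 2, 7}, {4, 9, 13})   # society X: mechanism P boosts, mechanism Q suppresses
Y = ({1, 4, 9}, {2, 7, 13})   # some other society that is not X

agents = {"A": ({1, 4, 9}, True),    # rationalist whose values X works against
          "B": ({1, 2, 7}, False),   # non-rationalist whose values X favours
          "C": ({1, 4, 9}, False)}   # non-rationalist with A's values

for name, (values, rationalist) in agents.items():
    print(name, "in X:", score(values, *X, rationalist),
          "in Y:", score(values, *Y, rationalist))
# Under X, B outscores A and A outscores C; in Y, the rationalist A outscores B.
```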
Replies from: pragmatist↑ comment by pragmatist · 2012-05-22T21:56:51.841Z · LW(p) · GW(p)
I haven't thought about this carefully, so this may be a howler, but here is what I was thinking:
"Winning" is an optimization problem, so you can conceive of the problem of finding the winning strategy in terms of efficiently minimizing some cost function. Different sets of values -- different utility functions -- will correspond to different cost functions. Rationalism is a particular algorithm for searching for the minimum. Here I associate "rationalism" with the set of concrete epistemic tools recommended by LW; you could, of course, define "rationalism" so that whichever strategy most conduces to winning in a particular context is the rational one, but then your claim would be tautological.
The No Free Lunch Theorem for search and optimization says that all algorithms that search for the minimum of a cost function perform equally well when you average over all possible cost functions. So if you're really allowing the possibility of any set of values, then the rationalism algorithm is no more likely to win on average than any other search algorithm.
Again, this is a pretty hasty argument, so I'm sure there are holes.
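A toy check of the averaged-over-all-cost-functions claim, restricted for simplicity to fixed, non-adaptive search orders on a three-point domain (the domain size and cost values are arbitrary choices):

```python
from itertools import product, permutations

domain = [0, 1, 2]
# Every possible cost function from the 3-point domain into costs {0, 1, 2}.
all_cost_functions = list(product([0, 1, 2], repeat=len(domain)))  # 27 of them

def avg_best_after_k(order, k):
    # Average, over all cost functions, of the best (lowest) cost found
    # after evaluating the first k points in this fixed search order.
    total = sum(min(costs[x] for x in order[:k]) for costs in all_cost_functions)
    return total / len(all_cost_functions)

for order in permutations(domain):   # every deterministic search order
    print(order, [round(avg_best_after_k(order, k), 3) for k in (1, 2, 3)])
# Every order prints identical averages: once you average over all possible
# cost functions, no fixed search strategy outperforms any other.
```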
Replies from: asparisi↑ comment by asparisi · 2012-05-22T22:16:29.293Z · LW(p) · GW(p)
I suspect you are right if we are talking about epistemic rationality, but not instrumental rationality.
In practice, when attempting to maximize a value, once you know what sort of system you are in, most of your energy has to go into gaming the system: finding the cost of minimizing the costs and looking for exploits. This is more true the more times a game is iterated: if a game literally went on forever, any finite cost becomes justifiable for this sort of gaming of the system: you can spend any bounded amount of bits. (Conversely, if a game is unique, you are less justified in spending your bits on finding solutions: your budget roughly becomes what you can afford to spare.)
If we apply LW techniques of rationalism (as you've defined it) what we get is general methods, heuristics, and proofs on ways to find these exploits, a summation of this method being something like "know the rules of the world you are in" because your knowledge of a game directly affects your ability to manipulate its rules and scoring.
In other words, I suspect you are right if what we are talking about is simply finding the best situation for your algorithm: choosing the best restaurant in the available solution space. But when we are in a situation where the rules can be manipulated, used, or applied more effectively I believe this dissolves. You could probably convince me pretty quickly with a more formal argument, however.
comment by RomeoStevens · 2012-05-22T23:10:54.332Z · LW(p) · GW(p)
I only have far goals to get more of my near goals at some point.
comment by Eneasz · 2012-05-23T17:56:07.892Z · LW(p) · GW(p)
It seems the majority of people who disagree with this post do so on the basis of rationality being a tool for achieving ends, but not for telling you what ends are worth achieving.
I disagree. As is written, "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which." And rationality can help to decide which is which. In fact without rationality you are much more likely to be partially or fully mistaken when you decide.
What does "better" mean? "Better" for who?
That's part of the question we're trying to answer. As for the "for who" part I would answer with "ideally, all sentient beings."
Replies from: Armarren↑ comment by Armarren · 2012-05-23T18:50:44.498Z · LW(p) · GW(p)
As often happens, it is to quite an extent a matter of definitions. If by an "end" you mean a terminal value, then no purely internal process can change that value, because otherwise it wouldn't be terminal. This is essentially the same as the choice of reasoning priors, in that anything that can be chosen is, by definition, not a prior, but a posterior of the choice process.
Obviously, if you split the reasoning process into sections, then posteriors of certain sections can become priors of the sections following. Likewise, certain means can be more efficiently thought of as ends, and in this case rationality can help you determine what those ends would be.
The problem with humans is that the evolved brain cannot directly access either core priors or terminal values, and there is no guarantee that they are even coherent enough to be said to properly exist. So every "end" that rises high enough into the conscious mind to be properly reified is necessarily an extrapolation, and hence not a truly terminal end.
Replies from: Vladimir_Nesov, RomeoStevens↑ comment by Vladimir_Nesov · 2012-06-10T19:00:02.281Z · LW(p) · GW(p)
If by an "end" you mean a terminal value, then no purely internal process can change that value, because otherwise it wouldn't be terminal.
A notion of "terminal value" should allow possibility of error in following it, including particularly bad errors that cause value drift (change which terminal values an agent follows).
↑ comment by RomeoStevens · 2012-05-23T21:24:34.345Z · LW(p) · GW(p)
Some of your terminal values can modify other terminal values though. Rational investigation can inform you about optimal trade-offs between them.
Edit: Tradeoffs don't change that you want more of both A and B. Retracted.
comment by buybuydandavis · 2012-05-23T09:43:58.083Z · LW(p) · GW(p)
Winning is achieving your ends, not achieving them better than the other guy achieves his.
Also, I'd suggest that you'd improve your analysis if you stopped anthropomorphizing society.
And you should also distinguish between instrumental and epistemic rationality, which I think a lot of people around here should do more of as well. One sense of Rationalists Should Win is I want to Win, and don't want any part of a Rationality that makes me lose. Another sense is Epistemic Rationality helps you Win, which is usually true, but I'm against making a fetish of Epistemic Rationality and treating it as synonymous with Winning.