Rationally Irrational

post by HungryTurtle · 2012-03-07T19:21:23.601Z · LW · GW · Legacy · 417 comments

I understand rationality to be related to a set of cognitive tools rather than a certain personality or genetic type. Like any other tool it can be misused. You can kill a person with a spoon, but that is a misuse of its intended function. You can cut a pound of raw meat with a chainsaw, but that too is a misuse of its intended function. Tools are designed with both intended purposes and functional limitations. Intended purposes serve to provide the user with an understanding of how to achieve optimal impact. For example, some intended uses of a sword would be killing, disabling, acting, or training (and many more). Tools can be used outside of their intended purposes. The use might not result in optimal output, it might even damage the tool, but it is possible. A sword can be used to cut wood, clear shrubbery, or serve as a decoration; a sword could even be used as a doorstop. A doorstop is far removed from the function a sword was designed for, but it nevertheless exists as a possibility given the structure of a sword. Functional limitations are desired uses that a tool cannot meet given its structure. A sword alone cannot allow you to fly or breathe underwater, at least not without making significant alterations to its structure, rendering it no longer a sword.

Every tool exists with both intended functions and functional limitations. From reading some essays on this website, I get the impression that many members of this community view rationality as a universal tool: that no matter what the conflict, a certain degree of rationality would provide the appropriate remedy. I would like to question this idea. I think there are both functional limitations to rationality and ways to misuse one's powers of reasoning. To address these, it is first necessary to identify what the primary function of rationality is.

The Function of Rationality

From reading various articles on this website, I would suggest that rationality is seen as a tool for accuracy in obtaining desired results, or as Eliezer puts it, for “winning.” I agree with this analysis. Rationality is a tool for accuracy; increased accuracy leads to successful obtainment of some desired result; obtainment of some desired result can broadly be described as “winning.” If rationality is a tool for increasing accuracy, then the question becomes “are there ever times when it is more beneficial to be inaccurate?” Or, in other words, are there times when it should be desired to lose?

Why would a person ever want to lose?

I can think of two situations where increased accuracy is detrimental: 1.) In maintaining moderation; 2.) In maintaining respectful social relations.

1.) *It is better to err on the side of caution*: The more accurate you become, the faster you obtain your goals. The faster you obtain your goals, the quicker you progress down a projected course. In some sense this is a good thing, but I do not think it is universally good. **The pleasure of winning may deter the player from the fundamental question “Is this a game I should be playing?”** A person who grew up playing the violin from an early age could easily find themselves barreling along a trajectory that leads to a conservatory without addressing the fundamental question “Is becoming a violinist what is going to most benefit my life?” It is easy to do something you are good at, but it is fallacious to think that just because you are good at something it is what you should be doing. If Wile E. Coyote has taught us anything, it is that progressing along a course too fast can result in unexpected pitfalls. Our confidence in an idea, a job, a projected course has no real bearing on its ultimate benefit to us (see my comment here for more on how being wrong feels right). While we might not literally run three meters off a cliff and then fall into the horizon, is it not possible for things to be moving too fast?

2.) *”Winning” all the time causes other people narrative dissonance*: People don’t like it when someone is right about everything. It is suffocating. Why is that? I am sure that a community of dedicated rationalists will have experienced this phenomenon, where relationships with family, friends, and other personal networks are threatened or damaged by you having an answer for everything, every casual debate, every trivial discussion; where you being extremely good at “winning” has had a negative effect on those close to you. I have a theory for why this is; it is rather extensive, but I will try to abridge it as much as possible. First, it is based in the sociological field of symbolic interactionism, in which individuals are constantly working to achieve some role confirmation in social situations. My idea is that there are archetypes of desired roles, and that every person needs the psychological satisfaction of being cast into those roles some of the time. I call these roles “persons of interest.” The wise one, the smart one, the caring one, the cool one, the funny one: these are all roles of interest that I believe all people need the chance to act out. If in a relationship you monopolize one of these roles to the point that your relations are unable to take it on, then I believe you are hurting your relationship. If you win too much, you deprive those close to you of the chance of winning, effectively causing them anxiety.

For example, I know that when I was younger my extreme rationality placed a huge burden on my relationship with my parents. After going to college I began to have a critique of almost everything they did. I saw a more efficient, more productive way of doing things than my parents, who had received outdated educations. For a while I was so mad that they did not trust me enough to change their lives, especially when I knew I was right. Eventually, what I realized was that it is psychologically damaging for parents when their twenty-something kid feels it is their job to show them how to live. Some of the things (like eating healthier and exercising more) I did not let go, because I felt the damages of my role reversal were less than the damages of their habits; other ideas, arguments, and beliefs I did let go, because they did not seem worth the pain I was causing my parents. I have experienced the need to not win as much in many other relationships. Be they friends, teachers, lovers, peers, or colleagues, in general if one person monopolizes the social role of imparter of knowledge, it can be psychologically damaging to those they interact with. I believe positive coexistence is more important than achieving some desired impact (winning). Therefore I think it is important to ease up on one’s accuracy for the sake of one’s relationships.

- Honestly, I have more limitations and some misuses to address, but I decided to hold off and see what the initial reception of my essay was. I realize this is a rationalist community, and I am not trying to pick a fight. I just strongly believe in moderation and wanted to share my idea. Please don't hate me too much for that.

- HungryTurtle


417 comments

Comments sorted by top scores.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-03-07T19:46:03.841Z · LW(p) · GW(p)

Your article is interesting, and a lot of the points you make are valid. In practice, LW-style rationality might well have some of the effects you describe, especially in the hands of those who use it or understand it in a limited way. However, I don't think your point is valid as a general argument. For example:

If you win too much, you deprive those close to you of the chance of winning, effectively causing them anxiety.

To me, this seems to be based on a fallacious understanding of LW-style "winning." Winning here means accomplishing your goals, and using a "rationality" toolkit to win means that you accomplish more of your goals, or accomplish them better, than you would have without those tools.

For some people, being right about everything is a goal. For some people, harmonious social relationships are a goal. For a lot of people, these are both goals, although they may be prioritized differently, i.e. a different weight may be placed on each. If the goal of being right conflicts with the goal of harmonious relationships, and harmonious relationships take priority, then according to the toolkit of "rationality", it is rational to lay off a bit and avoid threatening the self-image of your friends and family. This is certainly true for me. Being right, coming across as exceptionally smart, etc., are rather low-priority goals for me compared to making and keeping friends. (The fact that the former has always been easier than the latter may be a factor.)

Naive use of a rationality toolkit, by people who don't know their own desires, may in fact result in the kind of interpersonal conflict you describe, or in barreling too fast towards the wrong goal. That would be improper use of the tool...and if you cared to ask, the tool would be able to tell you that the use was improper. Aiming for the wrong goal is something that LW specifically warns against.

Nitpick: there's something funny up with the formatting of this article. The text is appearing smaller than usual, making it somewhat hard to read. Maybe go back to 'edit' and see if you can play around with the font size?

Replies from: HungryTurtle, metaphysicist
comment by HungryTurtle · 2012-03-09T14:42:46.827Z · LW(p) · GW(p)

Thank you for your comments,

For some people, being right about everything is a goal. For some people, harmonious social relationships are a goal. For a lot of people, these are both goals, although they may be prioritized differently, i.e. a different weight may be placed on each.

Thank you, your comments have helped crystallize my ideas. When I said to "rethink what game you are playing," that was a misleading statement. It would be more accurate to my idea to say that sometimes you have to know when to stop playing. The point I was trying to make is not that the goal you choose is damaging to your relations, but that winning itself is, regardless of the goal. From my experience, people don't care about what's right as much as they care about being right. Let's imagine, as you say, that your goal is social harmony. This is not an individual goal, like golf; it is a team goal. Achieving this goal requires both a proper method and team subordination. If you let the other players of your team play out their strategies, then you will not win. However, because of the phenomenon I have attempted to explain above (people's need to fulfill certain ideal roles) taking the steps necessary to "win" is damaging to the other players, because it forces them to acknowledge their subordination, and thus in reality does not achieve the desired goal. Does this make sense?

It is similar to the Daoist idea of action vs. inaction. Inaction is technically a type of action, but it is also defined by existing outside of action. The type of "game" I am talking about is technically a game, but it is defined by relinquishing the power/position of control. Even if you can win, or know how to win, sometimes what people need more than winning is to attempt to win by themselves and to know that you are in it with them.

Of course there are times when it is worth more to win, but I think there are times when it is worth the risk of losing to allow others the chance to feel that they can win, even if it is a lesser win than you envision.

Thank you again for your comments.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-03-09T20:23:48.993Z · LW(p) · GW(p)

I'm glad my comment helped clarify your ideas for you. I can't say that I entirely understand your point, though.

It would be more accurate to my idea to say that sometimes you have to know when to stop playing.

Stop playing what game? Say you're with a group of friends, and you're all playing a game together, like Monopoly or something. You're also playing the "game" of social relations, where people have roles like "the smart one", "the cool one" or "the helpful one" that they want to fulfill. Do you mean that sometimes you have to know when to stop playing to win at Monopoly in order to smooth over the social relations game and prevent people from getting frustrated and angry with you? Or do you mean that sometimes you have to stop playing the social status/relations game? The former is, I think, fairly obvious. Some people get too caught up in games like Monopoly and assign more value to "winning" than to letting everyone else have fun, but that's more a failure of social skills than "rationality".

As for the latter, I'm not sure I understand what "deciding to stop playing" at social relations would mean. That you would stop trying to make yourself look good? That you would stop talking to the other people with you? More to the point, I don't think social relations is a game where one person wins over everyone else. If I got to look cool, but it meant that some of my friends didn't have fun and felt neglected, I certainly wouldn't feel like I'd won the game of social harmony.

However, because of the phenomenon I have attempted to explain above (people's need to fulfill certain ideal roles) taking the steps necessary to "win" is damaging to the other players, because it forces them to acknowledge their subordination, and thus in reality does not achieve the desired goal. Does this make sense?

This paragraph makes it sound like you're talking about social status. Yes, social status is somewhat of a zero-sum game, in that you being cooler and getting tons of attention makes everyone else a bit less cool by comparison and takes away from the attention they get. But that's in no way the goal of social harmony, at least not as I define it. In a harmonious group, no one feels neglected, and everyone enjoys themselves.

In summary, I think you may just be describing a problem that doesn't really happen to me (although, thinking back, it happened to me more when I was 12 and didn't have good social skills). Given that intelligence and "nerdiness" are associated with poor social skills, and LW is considered a nerdy community, I can see why it wouldn't be an unreasonable assumption to think that others in the community have this problem, and are liked less by other people because they try too hard to be right. But that's most likely because they don't think of "getting along with others" or "improving their social skills" as specific goals in their own right. Anyone who does form those goals, and applies the toolkit of LW-rationality to them, would probably realize on their own that trying to be right all the time, and "winning" in that sense, would mean losing at a different and perhaps more important game.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T17:15:57.245Z · LW(p) · GW(p)

Sorry for such a late response, life really picked up this month in many amazing and wondrous ways and I found myself lacking the time or desire to respond. Now things have lulled back, and I would like to address your, and all the other responses to my ideas.

Stop playing what game? ...As for the latter, I'm not sure I understand what "deciding to stop playing" at social relations would mean.

When I say game I am referring to a board game, a social game, a dream, really any desired outcome. Social status is a type of game, and it was the one I thought provided the most powerful analogy, but it is not the overall point. The overall point is the social harmony you speak of. You say that in your opinion,

In a harmonious group, no one feels neglected, and everyone enjoys themselves...

I agree with this definition of harmony. The idea I am trying to express goes beyond the poor social skills you are assuming I am attributing to this "nerdy community" (which I am not). Beyond individually motivated goals, I am suggesting that for no one to feel neglected and everyone to enjoy themselves it is necessary for the actor to stop trying to achieve any goal. The pursuit of any one goal-orientation automatically excludes all other potential goal-orientations. If you have an idea of what is funny, what is cool, then in attempting to actualize these ideas you are excluding all other possible interpretations of them. For no one to feel neglected and everyone to truly enjoy themselves, everyone’s ideas of happiness, security, camaraderie, humor, etc. must be met. My idea is somewhat similar to Heisenberg’s uncertainty principle, in that your intentionality makes the goal you desire unattainable. Does this make sense?

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-06T23:58:45.242Z · LW(p) · GW(p)

for no one to feel neglected and everyone to enjoy themselves it is necessary for the actor to stop trying to achieve any goal.

Do you mean that the person in question has to just sit back and relax? That they have to stop trying to steer the direction of the conversation and just let it flow? Or that they have to focus on other people's enjoyment rather than their own enjoyment? The former doesn't feel true for me, in that having someone with good social skills and an idea of people's interests steer the conversation can make it more enjoyable rather than less so. The latter, maybe true, but I wouldn't want to live like that.

comment by metaphysicist · 2012-03-08T20:43:29.633Z · LW(p) · GW(p)

Winning here means accomplishing your goals...

What justifies this unconventional definition of the word? Random House Dictionary offers three senses of "win":

  1. to finish first in a race, contest, or the like.
  2. to succeed by striving or effort: He applied for a scholarship and won.
  3. to gain the victory; overcome an adversary: The home team won.

Notice that two of the three involve a contest against another; definition 2 is closer to what's wanted, but the connection between winning and competition is so strong that, when offering an example, the dictionary editors chose a competitive one.

This unconventional usage encourages equivocation, and it appeals to the hyper-competitive, while repelling those who shun excessive competition. Why LW's attachment to this usage? It says little for the virtue of precision; it makes LWers seem like shallow go-getters who want to get ahead at any cost.

Replies from: TheOtherDave, Nornagest, HungryTurtle
comment by TheOtherDave · 2012-03-08T21:30:48.023Z · LW(p) · GW(p)

Nothing justifies it, really. Like most local jargon in any community, it evolves contingent on events.

In this case, the history is that the phrase "rationalists should win" became popular some years ago as a way of counteracting the idea that what being rational meant was constantly overanalyzing all problems and doing things I know are stupid because I can come up with a superficially logical argument suggesting I should do those things. Newcomb's Problem came up a lot in that context as well. The general subtext was that if my notion of "rationality" is getting in the way of my actually achieving what I want, then I should discard that notion and instead actually concentrate on achieving what I want.

Leaving all that aside: to my mind, any definition of "win" that makes a win-win scenario a contradiction in terms is a poor definition, and I decline to use it.

Leaving that aside: do we seem to you like shallow go-getters who want to get ahead at any cost, or are we talking about how we seem to some hypothetical other people?

Replies from: fubarobfusco
comment by fubarobfusco · 2012-03-08T23:46:32.263Z · LW(p) · GW(p)

If the word "win" is getting in people's way because it makes them think of zero-sum scenarios, social status conflicts, or Charlie Sheen, then we can and should find other ways of explaining the same concept.

Rationality should make you better off than you would be if you didn't have it. That doesn't imply defeating or outdoing other people. It just means that the way we evaluate candidate decision procedures is whether they create beneficial outcomes — not bogus standards such as how unemotional they are, how many insignificant bits of data they take in, how much they resemble Traditional Scholarship, how much CPU time they consume, or whether they respect naïve ideas of causality.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-09T01:01:31.304Z · LW(p) · GW(p)

Yup. Personally, I prefer "optimizers should optimally achieve their goals" to "rationalists should win". But there exist people who find the latter version pithier, easier to remember, and more compelling. Different phrasings work better for different people, so it helps to be able to say the same thing in multiple ways, and it helps to know your audience.

Replies from: khafra
comment by khafra · 2012-03-09T19:06:03.053Z · LW(p) · GW(p)

"Lesswrong, a community blog dedicated to the art of optimal goal-seeking, mostly through optimal belief acquisition."

I kinda like that. It's wordier, but it loses some of the connotations of "rational" that invariably trip up newcomers.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-09T19:25:57.322Z · LW(p) · GW(p)

I have elsewhere argued essentially this, though "optimized" and "optimal" are importantly different. The conclusion I came to reading the comments there was that people vary in how they process the relevant connotations, and arguing lexical choice makes my teeth ache, but personally I nevertheless try to avoid talking about "rational X choosing" when I mean optimizing my choice of X for particular goals.

comment by Nornagest · 2012-03-08T22:15:54.557Z · LW(p) · GW(p)

This is a little speculative and I wasn't around for the original coining of the LW usage, but American geek slang has contained a much broader sense of "win" for quite a while. Someone conversant in that sociolect might say a project wins if it accomplishes its goals (related to but broader than sense 2 in the parent); they also might say an event is a win if it has some unexpected positive consequence, that a person wins if they display luck or excellence in some (not necessarily competitive) domain, et cetera. I'd expect that to have come from 1990s gamer jargon if you trace it back, but it's metastasized pretty far from those origins by now.

I'm guessing that "rationalists should win" derives from that usage.

Replies from: DSimon
comment by DSimon · 2012-04-06T20:17:25.985Z · LW(p) · GW(p)

Indeed, one might even say that Less Wrong is for the win.

comment by HungryTurtle · 2012-03-09T14:45:54.553Z · LW(p) · GW(p)

What justifies this unconventional definition of the word? Random House Dictionary offers three senses of "win":

I apologize, I thought it was appropriate to base my discussion in community terminology. I guess I assumed Eliezer's definition of "winning" and rationality were a community norm. My bad.

comment by Vladimir_Nesov · 2012-03-08T09:54:28.568Z · LW(p) · GW(p)

[A]re there times when it should be desired to lose?

When you should "lose", "losing" is the objective, and instrumental rationality is the art of successfully attaining this goal. When you do "lose", you win. On the other hand, if you "win", you lose. It's very simple.

Replies from: Matt_Simpson, HungryTurtle
comment by Matt_Simpson · 2012-03-08T18:44:53.673Z · LW(p) · GW(p)

When you do "lose", you win. On the other hand, if you "win", you lose. It's very simple.

Cue laugh track.

comment by HungryTurtle · 2012-04-06T16:37:55.899Z · LW(p) · GW(p)

When you should "lose", "losing" is the objective, and instrumental rationality

Thank you for your insightful comments. I chose to call it winning to try to build off the existing terminology of the community, but that might have been a mistake. What was meant by "winning" was goal achievement; what was meant by "losing" was acting in a way that did not move toward any perceived goal. Perhaps it would be better described as having no goal.

Inaction is technically a type of action, but I think there needs to be a distinction between them. Choosing to suspend intentionality is technically a type of intentionality, but I still think there needs to be a distinction. What do you think?

comment by TimS · 2012-03-07T19:34:07.891Z · LW(p) · GW(p)

Some of the things (like eating healthier and exercising more) I did not let go, because I felt the damages of my role reversal were less than the damages of their habits; however, other ideas, arguments, beliefs, I did let go because they did not seem worth the pain I was causing my parents.

Why call this losing instead of winning-by-choosing-your-battles? I don't think members of this community would endorse always telling others "I know a better way to do that" whenever one thinks this is true. At the very least, always saying that risks being wrong because (1) you were instrumentally incorrect about what works better or (2) you did not correctly understand the other person's goals.

More generally, the thing you are labeling rationality is what we might call straw Vulcan rationality. We don't aspire to be emotionless computrons. We aspire to be better at achieving our goals.

Eliezer wrote a cute piece about how pathetic Spock was to repeatedly predict things had a <1% chance of succeeding when those sorts of things always worked. As outsiders, we can understand why the character said that, but from inside Spock-the-person, being repeatedly wrong like that shows something is wrong in how one is thinking. Can't find that essay, sorry.


It doesn't bother me, but some people will be bothered by the non-standard font and spacing. I'd tell you how to fix it, but I don't actually know.

Replies from: Swimmer963, HungryTurtle, Dustin
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-03-07T19:59:49.935Z · LW(p) · GW(p)

Reminds me of this chapter from Eliezer's fanfiction. "Winning" in the naive, common-usage-of-the-word sense doesn't always result in better accomplishing your goals, and it is sometimes "rational" to lose, which means that losing is sometimes "winning" in the LW/rationality sense.

Words are confusing sometimes!

comment by HungryTurtle · 2012-03-09T15:12:07.525Z · LW(p) · GW(p)

Tims,

It is always a pleasure talking! Thanks for the great link about straw Vulcan rationality. Ironically, what Julia says here is pretty much the point I am trying to make:

Clearly Spock has persistent evidence accumulated again and again over time that other people are not actually perfectly rational, and he’s just willfully neglecting the evidence; the exact opposite of epistemic rationality.

Humans are irrational by nature; humans are also social by nature. There is individual health and there is social health. Because humans are irrational, oftentimes social health contradicts individual health. That is what I call rationally irrational.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-03-09T21:31:52.653Z · LW(p) · GW(p)

Humans are irrational by nature; humans are also social by nature.

One: what is your evidence that humans are "irrational by nature," and how do you define this irrationality?

Two: I've found that since I started reading LW and trying to put some of its concepts into practice, my ability to handle social situations has actually improved. I am now much better at figuring out what people really want and what I really want, and then finding a way to get both without getting derailed by which options "feel high-status". The specific LW rationality toolkit, at least for me, has been VERY helpful in improving both my individual psychological health and my "social health."

Replies from: faul_sname, HungryTurtle
comment by faul_sname · 2012-03-09T22:16:21.494Z · LW(p) · GW(p)

One: I think Lukeprog says it pretty well here:

“Oh my God,” you think. “It’s not that I have a rational little homunculus inside that is being ‘corrupted’ by all these evolved heuristics and biases layered over it. No, the data are saying that the software program that is me just is heuristics and biases. I just am this kluge of evolved cognitive modules and algorithmic shortcuts. I’m not an agent designed to have correct beliefs and pursue explicit goals; I’m a crazy robot built as a vehicle for propagating genes without spending too much energy on expensive thinking neurons.”

Two: Good point. Social goals and nonsocial goals are only rarely at odds with one another, so this may not be a particularly fruitful line of thought. Still, it is possible that the idea of rational "irrationality" is neglected here.

Replies from: thomblake, Swimmer963, HungryTurtle
comment by thomblake · 2012-04-10T18:21:42.534Z · LW(p) · GW(p)

Social goals and nonsocial goals are only rarely at odds with one another

This seems implausible on the face of it, as goals in general tend to conflict. Especially to the extent that resources are fungible.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-03-09T22:20:22.231Z · LW(p) · GW(p)

I agree with you on Lukeprog's description being a good one. I'm curious about whether HungryTurtle agrees with this description, too, or whether he's using a more specific sense of "irrational."

comment by HungryTurtle · 2012-04-06T17:31:36.938Z · LW(p) · GW(p)

Social goals and nonsocial goals are only rarely at odds with one another

Hahah, then why is smoking cool for many people? Why is binge drinking a sign of status in American colleges? Why do we pull all-nighters and damage our health in the pursuit of the perfect paper, party, or performance?

Social goals are at odds with individual health goals a large portion of the time.

Replies from: faul_sname
comment by faul_sname · 2012-04-06T20:05:43.657Z · LW(p) · GW(p)

I'm probably generalizing too much from my own experience, which is social pressure to get educated and practice other forms of self-improvement. I've never actually seen anyone who considers binge drinking a good thing, so I had just assumed that was the media blowing a few isolated cases out of proportion. I could easily be wrong though.

comment by HungryTurtle · 2012-03-09T23:27:41.888Z · LW(p) · GW(p)

One: what is your evidence that humans are "irrational by nature", and how do you define this irrationality.

Do you think humans can avoid interpreting the world symbolically? I do not. The human body and the human brain are hardwired to create symbols. Symbols are irrational. If symbols are irrational, and humans are unable to escape symbols, then humans are fundamentally irrational. That said, I should have added to my above statement that humans are also rational by nature.

Replies from: None, Swimmer963
comment by [deleted] · 2012-03-09T23:55:09.536Z · LW(p) · GW(p)

humans are also rational by nature.

Humans are irrational by nature

Why isn't this just a contradiction? In virtue of what are these two sentences compatible?

Replies from: Gastogh, HungryTurtle
comment by Gastogh · 2012-03-11T07:30:34.222Z · LW(p) · GW(p)

I think they're compatible in that the inaccurate phrasing of the original statement doesn't reflect the valid idea behind it. Yobi is right: it's not a clean split into black and white, though the original statement reads like it is. I think it would've been better phrased as, "There are rational sides to humans. There are also irrational sides to humans." The current phrasing suggests the simultaneous presence of two binary states, which would be a contradiction.

comment by HungryTurtle · 2012-03-11T05:52:06.478Z · LW(p) · GW(p)

It is a paradox. There are lots of paradoxes in the social sciences.

Replies from: wedrifid, Swimmer963, None
comment by wedrifid · 2012-03-11T21:13:28.901Z · LW(p) · GW(p)

It is a paradox. There are lots of paradoxes in the social sciences.

No it isn't. It's just contradiction.

Asserting "A" and also "not A" isn't deep. It's just wrong.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T17:27:05.952Z · LW(p) · GW(p)

Paradoxes are not contradictions.... Anyone who gave me a minus should explain why.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-04-06T17:34:37.539Z · LW(p) · GW(p)

First of all, wedrifid didn't say that "paradoxes are not contradictions", he just said (correctly) that this particular contradiction is not a paradox.

Secondly:

  • A contradiction is the following: "A. The sky is blue." "B. The sky is green."
    One of the sentences is true, the other is false. No paradox here.

  • A (rather childish) paradox is the following: "A. Sentence (B) is true." "B. Sentence (A) is false."
    You can't assign a truth-value to either (A) or to (B), without leading to a self-contradiction, making this set of two sentences paradoxical.
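
One way to see the difference concretely is to enumerate every possible truth assignment for a pair of sentences. What follows is a minimal Python sketch of that check; it is an illustrative aside assuming nothing beyond the two example sentence pairs above. The contradiction still admits consistent assignments, while the paradox admits none.

    # Enumerate all truth assignments for a pair of sentences and keep
    # those that satisfy a given consistency constraint.
    from itertools import product

    def consistent_assignments(constraint):
        """Return the (a, b) truth assignments under which the constraint holds."""
        return [(a, b) for a, b in product([True, False], repeat=2) if constraint(a, b)]

    # Contradiction: "A. The sky is blue." / "B. The sky is green."
    # Both sentences can't be true at once, but either may simply be false,
    # so consistent assignments remain.
    print(consistent_assignments(lambda a, b: not (a and b)))
    # -> [(True, False), (False, True), (False, False)]

    # Paradox: "A. Sentence (B) is true." / "B. Sentence (A) is false."
    # A must equal B's truth value while B must equal A's negation,
    # so no assignment survives.
    print(consistent_assignments(lambda a, b: a == b and b == (not a)))
    # -> []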

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-09T12:49:55.004Z · LW(p) · GW(p)

There are lots of paradoxes in the social sciences.

I would agree with you there, in the sense that social sciences very rarely have fundamental laws or absolute truths. But the problem with this view is thinking it's okay for two concepts to appear to be contradictory. Because they can't be contradictory on the level of atoms and molecules. People run on physics.

It's like saying that "Mary had chemotherapy for her cancer, and her cancer went into remission, but Joe had chemotherapy and he died anyway" is a paradox. The only reason that phrase appears contradictory is that information is being lost, lots of it. If you could look at the source code for the universe, you'd be able to see whether Joe's cancer had been at a more advanced stage than Mary's, or whether his tumor had genetic mutations making it harder to kill than Mary's, or whether his DNA predisposed him to more aggressive cancers, period. Or maybe Joe happened to catch pneumonia during his chemo, and Mary didn't.

Humans behave rationally in some situations. Under certain conditions, you give a human input and you get an output, and if you had somehow fed that same complex input to an advanced computer program designed to make rational decisions, it would have given the same output. In some situations, though, you give a human input and get an output that's different from the decision your computer program would make, i.e. an irrational one. But if you took apart the universe's source code for your human brain, you'd see that both decisions were the result of operations being done with neurons. The "irrational" decision doesn't appear in the brain out of nowhere; it's still processed in the brain itself.

Replies from: David_Gerard, HungryTurtle
comment by David_Gerard · 2012-04-09T14:31:26.676Z · LW(p) · GW(p)

The word for "all methods should get the same answer" is consilience. (I only found out recently this was the word for it.)

comment by HungryTurtle · 2012-04-09T19:12:43.797Z · LW(p) · GW(p)

But the problem with this view is thinking it's okay for two concepts to appear to be contradictory. Because they can't be contradictory on the level of atoms and molecules. People run on physics.

To say "it is okay for two concepts to appear contradictory, but they cannot be contradictory on the level of atoms and molecules" is a fallacy of composition. (Please hold judgment of my use of esoteric terms! It was the easiest way to say what I wanted to say.) A fallacy of composition is assuming that what is true for the whole is true for the part, or that what is true for the part is true for the whole. For example:

1. Atoms and molecules can't be contradictory.
2. Humans are made up of atoms and molecules.
3. Therefore, humans can't be contradictory.

A system as a whole contains properties that do not exist if you were to break it apart; the system does not exist if you break it apart. Atoms are not people, even if people come from atoms, just as seeds are not trees, even though trees come from seeds. It is pragmatically useless to attempt to translate some statement about atoms into a statement about humans. You say atoms and molecules cannot be contradictory; they also cannot be smashed with a hammer, die from a lack of oxygen, or dance the tango. What atoms are or are not capable of is not determinant of what humans are or are not capable of. Secondly, I agree that contradictory statements arise from information being lost, but what I would add is that the loss of information is the process of language. To create and use language is to subjectively divide the totality of reality into decodable chunks. If you read my response to Arran_Stirton's response, I say more about this there.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-09T22:47:24.959Z · LW(p) · GW(p)

I'm not disagreeing with the fact that you can make true contradictory statements about humans. Of course humans have properties that don't exist at the atomic level, and it's inevitable that the process of using words as levers for complex concepts results in information loss–if you didn't have some way of filtering out information, communication would be impossible.

But it's the statements you can make with language that are contradictory, not the humans themselves. You can claim that it's a paradox, but it's a very trivial and not very interesting kind of paradox.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T13:22:53.151Z · LW(p) · GW(p)

If you accept that statements, as much as if not more than biology, are what define humans, then it becomes very interesting.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T14:26:20.015Z · LW(p) · GW(p)

What's an example of a statement that defines humans more than biology? I still think that we're talking about a contradiction/paradox in the map, not the territory.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T14:29:38.194Z · LW(p) · GW(p)

I guess the point I am trying to make is that for humans the map is the territory.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T16:39:15.349Z · LW(p) · GW(p)

I think we've gotten down to the root of our disagreement here. Obviously you find that "for humans, the map is the territory" is a productive framework to do your analyses within. I don't know much about sociology, anthropology, or philosophy, but is this the standard theoretical framework in those fields?

The problem I have with it is that the territory is still there. It doesn't change depending on how accurate our map is. Yes, humans perceive the rest of the universe, including their own bodies, through a very narrow sensory window, and that information is then processed by messy, biased, thrown-together-by-evolution brain hardware. We can't step out of our heads and see the territory "as it really is". But we do have some information, and we can seek out more information, and we benefit from doing that, because the rest of the universe exists and will have its effects on us regardless of what we believe.

Now, I think what you might be trying to say is that what kind of map you have has an effect on what you do and think. I completely agree. Someone could state that 'humans are irrational', and if they believed it to be true, it might influence their behaviour, for example the way they treat other humans. Someone else could state that 'humans are rational', and that would affect the way they treat others, too. You could say that the map goes out and changes the territory in that particular example–the causal arrows run in both directions, rather than it being just the territory that is fed in to produce the map.

This is a useful point to make. But it's not the same as "the map is the territory." There's a lot of universe out there that no human knows about or understands, and that means it isn't on any maps yet, but you can't say that by definition it doesn't exist for humans. Hell, there are things about our own body that we don't understand and can't predict (why some respond differently to treatment than others, for example), but that doesn't mean that the atoms making up a human's tumour are confused about how to behave. The blank is on the map, i.e. our theories and understanding, and not in the territory, and it's a pretty irritating blank to have, which tons of people would like to be filled.

As a side note: I looked up the word 'paradox' on my desktop dictionary, and there are 3 different definitions offered.

  1. A statement or proposition that, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems senseless, logically unacceptable, or self-contradictory : a potentially serious conflict between quantum mechanics and the general theory of relativity known as the information paradox.

  2. A seemingly absurd or self-contradictory statement or proposition that when investigated or explained may prove to be well founded or true : in a paradox, he has discovered that stepping back from his job has increased the rewards he gleans from it.

  3. A situation, person, or thing that combines contradictory features or qualities : an Arizona canyon where the mingling of deciduous trees with desertic elements of flora forms a fascinating ecological paradox.

I think #1 is the standard that most people keep in their head for the word, and #2 and #3 are closer to the way you were using it. Apparently they are all acceptable definitions!

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T17:31:48.920Z · LW(p) · GW(p)

I think we've gotten down to the root of our disagreement here. Obviously you find that "for humans, the map is the territory"...The problem I have with it is that the territory is still there. It doesn't change depending on how accurate our map is.

I think a gap in our communication is the type of map we visualize in our use of this analogy. When we say map, what type of map are you envisioning? This is just a guess, but to me it seems like you are imagining a piece of paper with topography, landmarks, and other various symbols marked out on it. It is from this conception of a map that you make the claim "the territory is still there." I imagine you see the individual of our analogy with their nose pressed into this type of parchment, moving solely based on its markings and symbols. For you this is a bad way of navigating, because the individual is ignoring the reality that is divorced from the parchment.

Is this an accurate portrayal of your position within this analogy?

When I say "the map is the territory," I am not talking about a piece of parchment with symbols on it. I literally mean that the map is the territory, as when you navigate by the moss on trees or the stars in the sky.

When I say "the map is the territory," as in moss or stars, I am implying that humans do not have the type of agency/power over the map that the parchment analogy implies. A map as a piece of paper is completely constructed through human will; a map that is the territory is not.

You say

The blank is on the map, i.e. our theories and understanding, and not in the territory, and it's a pretty irritating blank to have, which tons of people would like to be filled.

By saying the map is the territory, I am implying that it cannot be filled in; that humans do not construct the map, they just interpret it. There is nothing to be filled in. Do you see how these are radically different interpretations of "map"? I see this as the point of difference between us.

To answer some of your side questions:

  • This theory is the core of modern anthropology, but only of some sub-divisions of the other mentioned fields. In philosophy it is highly controversial, because it questions the entire Western project of philosophy and its purpose.
  • If you look back to my original post to Arran, I state that there are multiple definitions of a paradox and all are acceptable. What is fruitful is not trying to argue about which definition is correct, but accepting the plurality and trying to learn a new point of reference from the one you have been trained in.
Replies from: Swimmer963, TimS
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T17:51:06.785Z · LW(p) · GW(p)

When I say "the map is the territory," I am not talking about a piece of parchment with symbols on it. I literally mean that the map is the territory, as when you navigate by the moss on trees or the stars in the sky.

When I say "the map is the territory," as in moss or stars, I am implying that humans do not have the type of agency/power over the map that the parchment analogy implies. A map as a piece of paper is completely constructed through human will; a map that is the territory is not.

But if you close your eyes and envision your knowledge and understanding of a particular area–I don't know, different types of trees, or different types of cancer, or something–you're not referring to the territory. Not right at that moment. You're not out in the field holding leaves from 6 different types of North American trees, comparing the shape. You're not comparing cancerous to normal cells under a microscope. You're going by memory, by concepts and mental models and words. Humans are good at a lot of things because of that capacity to keep information in our head and navigate by it, instead of needing those leaves or slides right in front of us before we can think about them. I call those mental concepts a map. Do you call them something different?

Maybe you're trying to say that humans can't arbitrarily create maps. When you create your beliefs, it's because you go out there and look at leaves and say to yourself "wow, this one has lobes and looks a bit like a ladder...I'll call it "oak"." You don't sit at home and arbitrarily decide to believe that there is a kind of tree called an oak and then draw what you think an aesthetically pleasing oak leaf would look like. (Actually, there are some areas of human "knowledge" that are depressingly like this. Theology, anyone?)

Still, if you're later reading a book about insects and you read about the 'oak gall beetle' that infests oak trees and makes them produce galls, you don't have to go back to the forest and stand looking at a tree to know what the author's talking about it. You remember what an oak tree looks like, or at least the salient details that separate all oak trees from all maple and fir and tamarack trees. I'd call that navigating by the map.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T19:17:52.638Z · LW(p) · GW(p)

You're not comparing cancerous to normal cells under a microscope. You're going by memory, by concepts and mental models and words. Humans are good at a lot of things because of that capacity to keep information in our head and navigate by it, instead of needing those leaves or slides right in front of us before we can think about them. I call those mental concepts a map. Do you call them something different?

You are not divorced from the territory. When you close your eyes, the images and ideas you create are not magically outside of the territory; they are the territory. In my analogy with the moss and stars, the mental concepts are the moss and stars. Closing your eyes as opposed to seeing, reading a book as opposed to being there: these analogies set up an inside-outside dichotomy. I am saying this is a false dichotomy. The map is the territory.

Maybe you're trying to say that humans can't arbitrarily create maps.

What I am trying to say is that reading from a book vs. being there, and closing your eyes vs. looking, are not opposites. They appear to be opposites due to the philosophical position engrained in our language. The map-territory divide is erroneous. The map is the territory; the territory is the map. There is no inner mental world and outer "real" world; this supposes a stratification of reality that simply does not exist. Our minds are not abstract souls or essential essences. The human brain and everything it does is a part of the territory.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T19:38:12.174Z · LW(p) · GW(p)

Our minds are not abstract souls or essential essences.

Is that what you think I'm trying to say? No wonder you are disagreeing! The last thing I believe is that our minds are 'abstract souls.'

When you close your eyes the images and ideas you create are not magically outside of the territory they are the territory.

Of course, the images and thoughts and ideas in your head are not magically happening outside the universe. If someone could look at the "source code" of the universe from the outside, they would see your neurons, made out of atoms, running through all the steps of processing a mental image of, say, an oak leaf.

But that mental image isn't the same as the physical oak leaf that you're modelling it off! Your 'mental world' runs on atoms, and it obeys the laws of physics, and all the information content comes from somewhere...but if you have a memory of an oak tree in a forest 100 miles away, that's a memory, and the oak tree is an oak tree, and they aren't the same thing at all. In the universe source code, one would look like atoms arranged into plant cells with cellulose walls, and one would look like atoms arranged into neurons with tiny electrical impulses darting around. You can imagine the oak tree burning down, but that's just your mental image. You can't make the actual oak tree, 100 miles away, burn down just by imagining it. Which should make it obvious that they aren't the same thing.

If you've been understanding the phrase "the map is not the territory" to mean "human minds are essential essences that don't need to run on physics," then you've gotten a misleading idea of what most of us believe it to mean, and I apologize for not pointing that out sooner. Most people would say our fault is in being too reductionist. I think the problem might be that what we're calling "map" and what we're calling "territory" both fit under your definition of "territory", while you consider the "map" to mean a hypothetical outside-the-universe 'essential essence.' Does that capture it?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T20:18:07.338Z · LW(p) · GW(p)

I think the problem might be that what we're calling "map" and what we're calling "territory" both fit under your definition of "territory", while you consider the "map" to mean a hypothetical outside-the-universe 'essential essence.' Does that capture it?

So I don't really know what to say. Because you have definitely captured it, but it is like you don't see it in the same way I do? I don't know. You say

The last thing I believe is that our minds are 'abstract souls.'

But to me the idea that a physical oak leaf and your mental image are not the same thing is the same as saying you believe in 'abstract souls' or a hypothetical outside-the-universe 'essential essence.' It is the modern adaptation of the soul, just as the Croc is the modern adaptation of the shoe. It is packaged differently, and there are some new functional elements packaged in, but ultimately it stems from the same root.

When you see an oak tree and when you think about an oak tree, it triggers the same series of neural impulses in your brain. Athletes visualize their actions before doing them, and this provides real benefits to achieving those actions. For humans, there is never any "physical oak leaf"; there are only ever constructs.

Replies from: Swimmer963, thomblake
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T20:43:07.714Z · LW(p) · GW(p)

When you see an oak tree and when you think about an oak tree, it triggers the same series of neural impulses in your brain. Athletes visualize their actions before doing them, and this provides real benefits to achieving those actions. For humans, there is never any "physical oak leaf"; there are only ever constructs.

Okay I think I understand what you're trying to say. So let's go back to our hypothetical observer outside the universe, looking in at the source code. (Not that this is possible, but I find it clarifies my thinking and what I'm trying to say.) The human is looking at an oak tree. The observer is looking at the human's brain, and sees that certain neurons are sending signals to other neurons. The human is closing their eyes and visualizing an oak tree. There's a similar but not identical neural pattern going on–I find the subjective experience of visualizing an oak tree using my imagination isn't quite the same as the experience of looking at one, but the neural firing is probably similar.

Now the human keeps their eyes closed, and the outside-the-universe hypothetical observer looks at the oak tree, which is made out of cellulose, not neurons. The oak tree starts to fall down. In the neural representations in the human's head, the oak tree isn't falling down, because last time he looked at it, it was nice and steady. He keeps his eyes closed, and his earplugs in, and the oak tree falls on his head and he dies. Up until the moment he died, there was no falling oak tree in his mental representation. The information had no sensory channel to come in through. Does that mean it didn't exist for him, that there was never any "physical oak tree?" If so, what killed him?

I think the LessWrong overall attitude to this is comparable to a bunch of observers saying "gee, wouldn't it have been nice if he'd kept his eyes open, and noticed the tree was falling, and gotten out of the way?" The philosophy behind it is that you can influence what goes into your mental representations of the world (I'll stop calling it "map" to avoid triggering your 'modern equivalent of the soul' detector). If you keep your eyes closed when walking in the forest (or you don't get around to going to the doctor and getting a mammogram or a colonoscopy, or [insert example here]), you get hit by falling trees (or your cancer doesn't get detected until it's Stage 5, at which point you might as well go straight to palliative care).

For me there's something basically wrong with claiming that something doesn't exist if no human being knows about it. Was the core of the planet solid before any human knew it was molten? Is an asteroid going to decide not to hit the Earth after all, just because no telescopes were pointed outwards to look for it? What we don't know does hurt us. It hurts us plenty.

Granted, the 'map and territory' claim, along with many other analogies rampant on LW, was aimed more at topics where there is fairly clear evidence for a particular position (say, evolution), and people have ideological reasons not to believe it. But it goes just as well for topics where no human being knows anything yet. They're still out there.

In another comment, you said that you don't think hard science is possible. (Forgive me if I'm misquoting.) Since our entire debate has been pretty much philosophy and words, let's go for some specifics. Do you think research in hard science will stop advancing, or that it should stop advancing? If so, why?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T21:09:07.790Z · LW(p) · GW(p)

Okay I think I understand what you're trying to say. So let's go back to our hypothetical observer outside the universe, looking in at the source code. (Not that this is possible, but I find it clarifies my thinking and what I'm trying to say.) The human is looking at an oak tree. The observer is looking at the human's brain, and sees that certain neurons are sending signals to other neurons. The human is closing their eyes and visualizing an oak tree. There's a similar but not identical neural pattern going on–I find the subjective experience of visualizing an oak tree using my imagination isn't quite the same as the experience of looking at one, but the neural firing is probably similar.

If you don't mind I would like to play off your analogy. I agree that the arrangement of neurons will not be identical, but I would pose the question how does the observer know that the human is closing his eyes? When he is looking at the tree perhaps there is wind and a feeling of coolness; but when he is closing his eyes it can also be windy and cool. If there is a lack of wind in one model how does the observer know that the neurons are the result of a mental construct and not the result of looking at a tree through a window while sitting inside?
The way memories and mental images work is that they are networked. When we recall a past memory, we irrevocably alter it by attaching it to our current consciousness. For example, let's say when I am 20 I remember an exploit of my early teens while at a sleepover drinking vanilla soda. The next time I go to recall that memory, I will also unintentionally, and unavoidably, activate memories of that sleepover and vanilla soda. Every time I reactivate that memory, the soda and sleepover get activated too, strengthening their place in the memory. In another 10 years the two memories are indistinguishable. Back to our observer: when he is looking at me thinking about an oak tree, it irrevocably activates a network of sensory experiences that will not identically replicate the reality of the oak tree in front of me, but will present an equally believable reality of some oak tree. Not the same oak tree; but I would suggest that the workings of the human brain are more complicated than what you have imagined, such that the observer would not be able to tell the real from the construct.

For me there's something basically wrong with claiming that something doesn't exist if no human being knows about it.

I have no disagreement here. It is wrong to claim that something doesn’t exist if humans do not know about it. What I have been arguing about is not ontology (what does or does not exist), but epistemology (how humans come to know). It is not that I am saying no territory exists outside of what is human, but that humans have no other way of knowing territory besides through human means, linguistic means.

Do you think research in hard science will stop advancing, or that it should stop advancing? If so, why?

It is not that I think scientific research will or should stop, but that it should be moderated. What is the purpose of science and technology? I understand the purpose of these things to be to help humans better know and predict their environment for the sake of creating a safer niche. Is this what the scientific institution currently does? I would argue no. Currently, I see the driving impetus of science and technology to be profit. That is not a critique in any way of AI or the projects of this community. To the contrary, I think the motives of this group are exceptions, and exceptional. But I am talking about the larger picture of the scientific institution. The proliferation of new technologies and sciences for the sake of profit has rendered the world less knowable to people, harder to predict, and now in some sense more dangerous (when every technological victory brings with it more severe problems). I am not against hard science. I am against overemphasizing this one technique and superimposing it onto every facet of human reality.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T21:13:34.581Z · LW(p) · GW(p)

I agree that the arrangement of neurons will not be identical, but I would pose the question: how does the observer know that the human is closing his eyes? When he is looking at the tree, perhaps there is wind and a feeling of coolness; but when he is closing his eyes it can also be windy and cool. If there is a lack of wind in one model, how does the observer know that the neurons are the result of a mental construct and not the result of looking at a tree through a window while sitting inside?

Um...because a hypothetical observer who can look at neurons can look at the eyes 1 cm away from them, too?

Also, I can tell the difference between a real tree and an imagined tree. It'd be pretty inconvenient if humans couldn't distinguish reality from fantasy. If we can feel a difference, that means there's a difference in the neurons (because you are neurons, not an existential essence), and an observer who knew how to read the patterns could see it too.

It is not that I am saying no territory exists outside of what is human, but that humans have no other way of knowing territory besides through human means, linguistic means.

Actually, quite a lot of what you're saying comes across as 'no territory exists outside of what is human.' But obviously that's not what you believe. Yay! We agree!

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T21:54:45.436Z · LW(p) · GW(p)

Also, I can tell the difference between a real tree and an imagined tree. It'd be pretty inconvenient if humans couldn't distinguish reality from fantasy.

You can tell the difference because you are aware of the difference to begin with. I don't think it is so obvious that our hypothetical observer would observe neurons with eyesight. I thought the observation of neurons would require some extrasensory phenomena, and if that is the case there is no reason why he or she could not have this sense, but lack normal eyesight.

Actually, quite a lot of what you're saying comes across as 'no territory exists outside of what is human.' But obviously that's not what you believe. Yay! We agree!

Haha, that is how I felt about the whole not believing in the soul thing. By the way, thanks for being so light-hearted about this whole conversation; in my experience, people tend to get pretty nasty if you do not submit to what they think is right. I hope I have not come across in a nasty manner.

As to my comment "It is not that I am saying no territory exists outside of what is human, but that humans have no other way of knowing territory besides through human means": I am not trying to argue that we completely abandon empiricism, or that all of reality is reducible to our thoughts. But I can see how it comes across that way. That is why I used the moss and stars analogy, to try to divorce the idea from one of a totally human-constructed reality.

Do you think the territory exists without the map (the human)? I think A territory would exist without the map (the human), but it would be a different territory. The territory humans exist in is one that is defined by having a map. The map shapes the territory in a way that to remove it would remove humanity.

How does this sit with you O_o

Replies from: Swimmer963, TimS, wedrifid
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T22:07:51.008Z · LW(p) · GW(p)

By the way, thanks for being so light-hearted about this whole conversation; in my experience, people tend to get pretty nasty if you do not submit to what they think is right. I hope I have not come across in a nasty manner.

Bah. Nastiness begets more nastiness. And more nastiness means less actual information getting transmitted. And I happen to like new information more than I like being self-righteous. Also, I'm pretty young and pretty sheltered, and I'm dedicating this period of my life to absorbing as much knowledge as I can. Even if I finish a discussion thinking you're wrong, I've still learned something, if only about how a certain segment of humanity sees the world.

comment by TimS · 2012-04-11T00:45:38.560Z · LW(p) · GW(p)

Do you think the territory exists without the map (the human)? I think A territory would exist without the map (the human), but it would be a different territory. The territory humans exist in is one that is defined by having a map. The map shapes the territory in a way that to remove it would remove humanity.

I basically agree with this statement, as I think you intend it. Why not call that leftover thing "the territory" and then assert that most scientists are incorrectly asserting that some things are in the territory when they are actually in the map?

In other words, I don't understand what purpose you are trying to achieve when you say:

I disagree with the core ontological assumption being made here, namely a divide between the map and the territory.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T14:43:41.747Z · LW(p) · GW(p)

I don't really know what the leftover part you are talking about is, but I do not think there is a leftover part. I don't think things can be broken down that way. Maybe my comment about the visual and auditory cortices was confusing in this regard, but that was just to sound like a know-it-all.

Replies from: TimS
comment by TimS · 2012-04-11T16:11:39.458Z · LW(p) · GW(p)

Maybe I'm confused. You said that you thought something would exist even if there were no humans. I'm suggesting that, for purposes of the map/territory metaphor, you could use "territory" to reference the what-would-exist-without-humans stuff.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T17:31:58.823Z · LW(p) · GW(p)

You mean the portion of reality we don't interact with? Like for example whatever is outside the universe or in a galaxy on the other side of the universe?

Replies from: TimS
comment by TimS · 2012-04-11T17:48:37.309Z · LW(p) · GW(p)

You think that if humans went away, the portion of the universe that we interact with would disappear?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T19:49:43.111Z · LW(p) · GW(p)

No, but it would be fundamentally altered.

If you don't mind, I will use an analogy to explain my thoughts more precisely. In the analogy, the reality humans interact with is an ocean. I don't see humans as fish in the ocean. That would imply a fundamental separateness; that humans come into interaction with their reality, but are not a part of it. If you remove a fish from the ocean, the ocean for the most part is still the ocean. I see humans more as the salt in the ocean. Not as integrated as, say, the hydrogen and oxygen are, but salt is pretty thoroughly mixed into the ocean. To remove all salt from the ocean would have such huge ramifications that what would remain would no longer be "an ocean" in traditional terms.

So no, I do not think that if humans disappear the universe disappears. I do think the portion of the universe we affect is largely defined by our presence, and that the removal of this presence would so alter its constitution that it would not be the reality we think of as reality today.

Replies from: TimS, Swimmer963
comment by TimS · 2012-04-12T14:54:22.338Z · LW(p) · GW(p)

If humanity is as integral to our reality as you describe, then I am confused why our beliefs about how reality works don't totally control how reality actually works. That is, I would expect human beliefs to have as much causal effect on objects as external forces like gravity and magnetism. But study of the world shows that this isn't so. For example, many people think it is a fundamental physical law that objects in motion eventually come to a stop. That's their ordinary experience, but it is easy to show that it is totally wrong.

In general, scientific predictions about what will happen to physical objects in the future are not related to the consensus people have about what would happen (in other words, people are scientifically illiterate). Despite how unintuitive it seems, nothing can travel faster than light, and humans are descended from monkeys.

Improving humanity's ability to make predictions about the future is the empirical project in a nutshell. That's the source of the pushback in this post. If it turns out to be the case that there is no objective reality, external to human experience, it follows pretty closely that the modern scientific project is pointless. In short, if the world is not real (existing external to humanity), what's the point in studying it?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T17:18:32.938Z · LW(p) · GW(p)

If humanity is as integral to our reality as you describe, then I am confused why our beliefs about how reality works don't totally control how reality actually works.

Wouldn't you say oxygen is integral to the current reality of Earth? That does not mean that the current reality of Earth is shaped by the will of oxygen. Saying that humanity is integral to the constitution of our reality is different from saying humanity consciously defines the constitution of its reality. Right?

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-11T20:17:25.296Z · LW(p) · GW(p)

I do think the portion of the universe we affect is largely defined by our presence, and that the removal of this presence would so alter its constitution that it would not be the reality we think of as reality today.

Do you think that, say, 100 million years ago (when there were mammals and plants pretty similar to what we know now on Earth, but no humans), that reality was significantly different?

I'm not saying I disagree with you. An Earth with humans on it has a very different potential future path than an Earth with nothing more complicated than squirrels. But the current Earth with humans on it is only a bit changed (hole in ozone layer, mild-so-far global warming which may or may not be human-caused), and outside of the solar system, we've hardly changed anything as of yet.

Replies from: army1987, HungryTurtle
comment by A1987dM (army1987) · 2012-04-12T14:51:56.420Z · LW(p) · GW(p)

Do you think that, say, 100 million years ago (when there were mammals and plants pretty similar to what we know now on Earth, but no humans)

You want to have one fewer zero in that figure, or some weaker adverb before "pretty".

ETA: [goes to read http://en.wikipedia.org/wiki/Timeline_of_evolutionary_history_of_life#Cenozoic_Era] Wow, apparently there were no butterflies or grass 100 Myr ago. I would have never guessed they were that recent.

Replies from: wedrifid
comment by wedrifid · 2012-04-12T16:07:11.846Z · LW(p) · GW(p)

ETA: [goes to read http://en.wikipedia.org/wiki/Timeline_of_evolutionary_history_of_life#Cenozoic_Era] Wow, apparently there were no butterflies or grass 100 Myr ago. I would have never guessed they were that recent.

Grass is recent? That I didn't know. Butterflies I would expect to be fairly recent since they are so vulnerable to evolving themselves to extinction.

comment by HungryTurtle · 2012-04-11T21:23:03.911Z · LW(p) · GW(p)

we've hardly changed anything as of yet.

No offense, but I would suggest reading some environmental studies literature, or finding a friend in the field. We have changed a staggering amount of the Earth's surface-level topography and ecosystems. Will we ever "destroy the Earth"? Probably not. Humans lack the power to literally destroy the Earth. Could we destroy the existing biosphere and the majority of life it supports? Yes, that is within our power, and in fact we are actively moving towards such a reality.

Do you think that, say, 100 million years ago (when there were mammals and plants pretty similar to what we know now on Earth, but no humans), that reality was significantly different?

Yes I do. Current extinction rates are comparable to those that characterized the Big Five episodes of mass extinction in the fossil record. Obviously the attribution is not unanimous, but humans are the primary factor in this. In my opinion, the fact that the human impact on Earth's biosphere is the equivalent of one of the five biggest episodes of mass extinction in the history of the planet is a pretty good sign that we are making a significant difference compared to the mammals of 100 million years ago.

Replies from: HungryTurtle, Swimmer963
comment by HungryTurtle · 2012-04-11T21:23:39.170Z · LW(p) · GW(p)

That was my first attempt at LessWrong link posting. ^_^ One small step for me!

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T22:28:18.073Z · LW(p) · GW(p)

I spend sizable chunks of time trying to figure out how to deliver (what I feel are) revolutionary ideas. Each time I find myself back on a computer I take a quick second to eagerly check if any one of these gems has earned me some karma points. Ironically, in over 20 posts, the one that gets me any positive recognition is the one quick mental fart in the bunch that makes no real contribution to the running dialogue.

Haha, for a community of rationalists the karma here has an odd rhetorical and aesthetic aftertaste. ^)^

cheers

Replies from: Swimmer963, TheOtherDave, wedrifid
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-12T01:05:15.878Z · LW(p) · GW(p)

I upvoted it :) Because it made me smile, so upvoting it was more the equivalent of liking someone's status on Facebook than anything. I don't think it's really fair that you haven't gotten more upvotes though, since someone is upvoting most of my comments in this discussion, and your points are just as detailed and well explained as mine...unless that's you upvoting, in which case maybe I should scratch your back in return and upvote all of yours?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T01:31:40.980Z · LW(p) · GW(p)

Haha, I have been upvoting you some. Not all the time, you definitely have other secret supporters, but some. But don't upvote me now just because of that. I don't want an upvote out of social obligation. And I didn't post the comment about my mental fart to get upvotes. I just think it is interesting how the karma system here works. To me it is a stark contradiction of the group mission. I don't mean that to be mean.

So no, I only want upvotes if my words genuinely spark in your mind as you read them. I could use a good back scratch though, no lie.

comment by TheOtherDave · 2012-04-11T23:32:14.305Z · LW(p) · GW(p)

Why do you think that is?

Replies from: HungryTurtle, HungryTurtle
comment by HungryTurtle · 2012-04-12T19:46:16.817Z · LW(p) · GW(p)

Why did you change your post here?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T19:49:55.963Z · LW(p) · GW(p)

?
If you're referring to this comment, I see no evidence that I changed it, nor do I recall changing it, so I suspect your premise is false.
If you're referring to some other comment, I don't know.

EDIT: Edited for demonstration purposes.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T19:59:43.905Z · LW(p) · GW(p)

Ok, then I probably made a mistake when I clicked on my new message from you. Sorry about that.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T20:01:34.723Z · LW(p) · GW(p)

An asterisk appears to the right of the date when a post has been edited. (See grandparent for an example)

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T20:04:05.949Z · LW(p) · GW(p)

what does "see grandparent" mean?

Replies from: ArisKatsaris, DSimon
comment by ArisKatsaris · 2012-04-12T20:07:52.765Z · LW(p) · GW(p)

what does "see grandparent" mean?

The parent post of the parent post, in this case meaning that comment, which you'll note has an asterisk next to the date, because it has been edited.

comment by DSimon · 2012-04-12T20:07:09.579Z · LW(p) · GW(p)

When you reply to a comment, the comment you are replying to is called the parent, and the comment that it replies to is called the grandparent.

comment by HungryTurtle · 2012-04-11T23:59:41.132Z · LW(p) · GW(p)

I can't tell if you really want to hear the theory I have on this matter, or if this was just a sarcastic jab at the fact that maybe my ideas are not as wonderful as I think they are.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T00:17:15.506Z · LW(p) · GW(p)

That's understandable.
I'm curious as to your theory on the matter, but I also suspect that your ideas aren't as revolutionary as you think they are.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T00:55:39.834Z · LW(p) · GW(p)

Yes, I probably worded that wrong. Revolutionary to this community. There is a significant body of minds, research, and literature that my ideas arise from. So to just say that they are "revolutionary ideas" was probably too vague. Revolutions happen in context. And I think the particular nexus of thought I am nestled in contains some attributes that, if adopted, would be revolutionary to this community.

That said, I actually am not looking for revolution in your community. I like your community. I just like having discussions like this one. I feel it keeps my thoughts and writing sharp. I enjoy it, and there is always the possibility that I will undergo a revolution!

P.S. Dave, more than talking about theories about your community, I am interested in continuing our talk in the other post about the overall purpose of my essay.

comment by wedrifid · 2012-04-12T08:29:59.392Z · LW(p) · GW(p)

Ironically, in over 20 posts, the one that gets me any positive recognition is the one quick mental fart in the bunch that makes no real contribution to the running dialogue.

Plausible hypothesis: All the other comments made a negative epistemic contribution and any upvotes you got would be quickly removed by those who dislike the pollution. The 'mental fart' doesn't do any epistemic damage so altruistic punishers leave it alone.

Haha, for a community of rationalists the karma here has an odd rhetorical and aesthetic aftertaste. ^)^

I don't like being downvoted either.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T12:24:58.819Z · LW(p) · GW(p)

the other comments made a negative epistemic contribution and any upvotes you got would be quickly removed by those who dislike the pollution.

Could you further elaborate what you mean by a negative epistemic contribution?

To me it sounds like you are saying the ideas I am suggesting disagree with some intended epistemic regime and that there are agents attempting to weed them out?

I don't like being downvoted either.

No, I don't either, but what I really don't like is the reasons I am being downvoted or upvoted. It would be one thing if I was trolling, but I don't think a person should be downvoted for having a real disagreement and attempting to explain their position. It is especially disturbing to think that your hypothesis is right, and I am being downvoted because the ideas I am trying to express have been labeled as 'pollution.' Doesn't that strike you as very akin to Soviet Russia-style censorship?

Replies from: wedrifid, wedrifid, TheOtherDave
comment by wedrifid · 2012-04-12T15:31:54.877Z · LW(p) · GW(p)

Could you further elaborate what you mean by a negative epistemic contribution?

People often complain about getting downvoted, attributing the downvotes to ironically irrational voters. Sometimes these accusations are correct (naturally, all instances where it is me that is so complaining fit this category!). More often the comments really did deserve to be downvoted and the complainer would be best served wising up and taking a closer look at their commenting style.

Of those comments that do deserve downvoting, the fault tends to lie in one or more of: behaving like an asshole, being wrong and belligerent about it, or using argument styles that are irrational or disingenuous. I expect I would have noticed if you were behaving like an asshole (and I haven't), which leaves the remainder, all things which could be described as a 'negative epistemic contribution'.

I don't make the claim that all your comments are bad. I haven't read all of them - and accordingly I have not voted on all of them. I did read (and downvote) some. If I recall, the comments were a bunch of rhetorical and semantic gymnastics trying to support a blatant contradiction and make it sound deep rather than like a mistake.

In the ancestor you make a challenge to the judgement of the community, asserting that their downvotes are wrong and your comments are worthy of upvotes. Usually claims of that form are mistaken, and what I have seen suggests that this case is no exception. There is little shame in that - different things are appreciated in different communities and you are relatively unfamiliar with the things that are appreciated in this particular one. It is difficult to jump from one social hierarchy to another and maintain the same status (including level of confidence and unyielding assertiveness with respect to expressions of ideas). Usually a small down time is required while local conventions are learned. And the local conventions here really are more epistemically rational than wherever you came from (if the content of your comments is representative).

If you care about karma or the public sentiment that it represents then write comments that you expect people will appreciate. If you don't care about karma then write comments according to whatever other criteria you do care about.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T18:10:27.140Z · LW(p) · GW(p)

Okay, so we agree I am not belligerent. I don't think it is possible for you to tell someone that they are disingenuous, so that one is out too. All that leaves is the claim that my writing uses an irrational style.

If I recall, the comments were a bunch of rhetorical and semantic gymnastics trying to support a blatant contradiction and make it sound deep rather than like a mistake.

If you are going to make a claim that I do not use rational styles of argumentation, why do you not have to yourself use rational styles of argumentation? It is not rational to make a claim without providing supporting evidence. You cannot just say that I am making blatant contradictions or performing "semantic gymnastics" without undertaking the burden of proof. You make an argument so that I can counter it; you can't just libel me because you have deemed that to be what is logical. There is nothing rational about this method of writing.

Replies from: Swimmer963, wedrifid
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-12T18:50:56.190Z · LW(p) · GW(p)

Agreed that making a statement and not giving any supporting evidence doesn't qualify as "rational." I actually haven't found the quality of your argument to be low, most of the time, but I'll try to dredge up some examples of what I think wedrifid is talking about.

If you look back to my original post to arran, I state that there are multiple definitions of a paradox and all are acceptable. That what is fruitful is not trying to argue about which definition is correct, but to accept the plurality and try to learn a new point of reference from the one you have been trained in.

The standard mindset on LessWrong is that words are useful because they are specific and thus transmit the same concept between two people. Some words are more abstract than others (for example, 'beauty' can never be defined as specifically as 'apple'), but the idea that we should embrace more possible definitions of a word goes deeply against LessWrong philosophy. It makes language less clear; a speaker will have to specify that "no, I'm talking about paradox2, not paradox1." In which case you might as well have two different words for the two different concepts in the first place. I think most people on LW would count this as a negative epistemic contribution.

Doesn't that strike you as very akin to Soviet Russia-style censorship?

This kind of comparison is a big no-no on LessWrong, unless you very thoroughly explain all the similarities and justify why you think it's a good comparison. See Politics is the Mind-Killer.

You can fight for the definitions you have been indoctrinated in, and in doing so fight to label me as wrong, or we can have a real dialogue.

Comes across as belligerent.

I don't think there are really that many places where you had 'bad' arguments. The main thing is that you're presenting a viewpoint very different from the established one here, and you're using non-LW vocabulary (or vocabulary that is used here, but you're using it differently as per your field of study), and when someone disagrees you start arguing about definitions, and so people pattern-match to 'bad argument.'

comment by wedrifid · 2012-04-12T18:44:26.156Z · LW(p) · GW(p)

I don't think it is possible for you to tell someone that they are disingenuous

I do that all the time. There seems to be nothing in the meaning of the word that means it cannot be applied to another.

It is not rational to make a claim without providing supporting evidence.

That isn't true. It is simply a different form of communication. Description is different from argumentative persuasion. It is not (necessarily) irrational to do the former.

You cannot just say that I am making blatant contradictions or performing “semantic gymnastics”

In the context the statement serves as an explanation for the downvotes. It is assumed that you or any readers familiar with the context will be able to remember the details. In fact this is one of those circumstances where "disingenuous" applies. There are multiple pages of conversation discussing your contradictions already and so pretending that there is not supporting evidence available is not credible.

without undertaking the burden of proof.

NO! "Burden of proof" is for courts and social battles, not thinking.

You make an argument so that I can counter it

This isn't debate club either!

you can’t just libel me because you have deemed that to be what is logical.

Yes, with respect to libel, the aforementioned 'burden of proof' becomes relevant. Of course this isn't libel, or a court. Consider that I would not have explained to you why (I perceive) your comments were downvoted if you hadn't brought them up and made implications about the irrationality of the voters and community. If you go around saying "You downvoted me therefore you suck!" then it drastically increases the chances that you will receive a reply "No, the downvotes are right because your comments sucked!"

Replies from: thomblake, HungryTurtle
comment by thomblake · 2012-04-12T19:06:05.939Z · LW(p) · GW(p)

without undertaking the burden of proof.

NO! "Burden of proof" is for courts and social battles, not thinking.

You make an argument so that I can counter it

This isn't debate club either!

So many people don't seem to get this! It's infuriating.

I wonder if it's just word association with Traditional Rationality. People think making persuasive arguments has something to do with what we're doing here.

Yes, making persuasive arguments is often instrumentally useful, and so in that sense is a 'rationality skill' - but cooking and rock climbing are also 'rationality skills' in that sense.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T19:37:13.601Z · LW(p) · GW(p)

Yeah, it comes up a lot.

My usual working theory is that smart people often learn that winning the argument game is a way for smart people to gain status, especially within academia and within communities of soi-disant smart people (aka Mensa), and thus come to expect that any community of smart people will use the argument game as a primary way to earn and retain status. They identify LW as a community of smart people, so they begin playing the argument game in order to establish their status.

And when playing the argument game results in _losing_ status instead, they feel betrayed and defensive.

Replies from: thomblake
comment by thomblake · 2012-04-12T19:40:34.102Z · LW(p) · GW(p)

soi-disant

Never encountered this before. Is it usually italicized?

Replies from: TheOtherDave, Random832
comment by TheOtherDave · 2012-04-12T19:52:00.607Z · LW(p) · GW(p)

I don't usually italicize it, but I wouldn't be too surprised to encounter it italicized, especially in print. I imagine it depends on whether one considers it an English word borrowed from a foreign language (which I do) or a foreign phrase (which one plausibly could).

comment by Random832 · 2012-04-12T19:45:24.285Z · LW(p) · GW(p)

It means "so-called" or "self-proclaimed".

Replies from: DSimon, thomblake
comment by DSimon · 2012-04-13T02:11:54.967Z · LW(p) · GW(p)

We should probably just use those phrases directly then, rather than excluding possible readers without adding any informational content.

(On that note, someone at an LW meetup I went to recently made a good point: why do we say "counterfactual" instead of just "made-up"?)

Replies from: TheOtherDave, thomblake, enoonsti, Random832
comment by TheOtherDave · 2012-04-13T15:28:12.274Z · LW(p) · GW(p)

We should probably just use those phrases directly then, rather than excluding possible readers without adding any informational content.

Buckley's response to this sentiment is apposite.

Replies from: Random832
comment by Random832 · 2012-04-13T16:40:59.019Z · LW(p) · GW(p)

Login required. Summarize?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T18:00:21.369Z · LW(p) · GW(p)

How odd! When I went there through google it didn't ask for a login, but when I follow the link it does.

Anyway, summarized, his point is that the benefits to the right audience of using the right word at the right time outweigh the costs to everyone else either looking it up and learning a new word, getting the general meaning from context, or not understanding and ignoring it. But like much of Buckley, the original text is worth reading if you enjoy language.

Googling "Buckley eristic lapidary November" should get you a link that works.

comment by thomblake · 2012-04-13T14:36:58.323Z · LW(p) · GW(p)

We should probably just use those phrases directly then, rather than excluding possible readers without adding any informational content.

Nonsense. More words is better. Nuance is good. Words are trivially easy to look up.

I didn't ask what the word meant, because by the time I was done reading the comment I knew what the word meant and even had a rough sense of when I would want to use "soi-disant" as opposed to "so-called" or "self-proclaimed".

Replies from: DSimon, Swimmer963
comment by DSimon · 2012-04-15T14:16:27.484Z · LW(p) · GW(p)

Nonsense. More words is better. Nuance is good. Words are trivially easy to look up.

What is the additional nuance in "soi-disant" that's not in "self-described"?

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-13T15:02:44.497Z · LW(p) · GW(p)

Agreed that more words are better–more possible information can be conveyed. However, it sounds like you're better than the average reader at grasping the meaning of words from context. (Knowing French, I can guess what 'soi-disant' means...having no idea, I don't know if I would have deduced it from the context of just that one comment.)

Replies from: thomblake
comment by thomblake · 2012-04-13T15:10:27.667Z · LW(p) · GW(p)

I did not deduce it from context - I looked it up. Using the Internet.

It's the obvious thing to do if it's after 1998.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-13T16:09:05.247Z · LW(p) · GW(p)

Somehow in your comment it seemed like you meant you'd figured it out yourself...rereading it, I don't know why I thought that.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T16:25:45.953Z · LW(p) · GW(p)

It's not unreasonable to infer from *by the time I was done reading the comment I knew what the word meant and even had a rough sense of when I would want to use "soi-disant" as opposed to "so-called" or "self-proclaimed"* that thomblake didn't interrupt his reading of the comment to go perform some other task (e.g., looking the word up on Google).

I mean, if someone said about an essay that by the time they were done reading it they had a deep understanding of quantum mechanics, I would probably infer that the essay explained quantum mechanics, even though they might mean they started reading it in 2009, put it down unfinished to go study QM for three years, then found the unfinished essay (which was in fact about gardenias) and finished reading it.

comment by enoonsti · 2012-04-13T03:34:21.712Z · LW(p) · GW(p)

I sympathize with this suggestion. But at the same time, I do enjoy learning new words.

comment by Random832 · 2012-04-13T02:25:42.391Z · LW(p) · GW(p)

As I understand it, "counterfactual" originates from history; it originally referred to historians analyzing what would have happened if some particular thing had gone differently.

Replies from: Alejandro1, DSimon
comment by Alejandro1 · 2012-04-13T04:04:33.293Z · LW(p) · GW(p)

Really? I always thought it came from logic/semantics: a "counterfactual conditional" is one of the form "If X had happened, Y would have", and there is a minor industry in finding truth conditions for them.

Replies from: Random832
comment by Random832 · 2012-04-13T04:19:59.842Z · LW(p) · GW(p)

Well, I heard it first relating to history.

comment by DSimon · 2012-04-13T02:49:37.929Z · LW(p) · GW(p)

Hm, so I guess the modern term would be "alternate history fic"? :-)

Replies from: Random832
comment by Random832 · 2012-04-13T03:17:36.711Z · LW(p) · GW(p)

No, the difference is between serious historical studies of what would likely have happened, vs. people who make up new characters who had no significance OTL in order to tell a good story.

Replies from: Random832
comment by Random832 · 2012-04-13T16:46:08.623Z · LW(p) · GW(p)

To expand on this - a counterfactual might predict "and then we would still have dirigibles today", or not, if asking "what if the Hindenburg disaster had not occurred." It would probably NOT predict who would be president in 2012; nor would it predict that in a question wholly unrelated to air travel or lighter-than-air technology. An alternate history fiction story might need the president for the plot, and it might go with the current president or it might go with Jack Ryan. An alternate history timeline is somewhere in the middle, but in general will ask "what change could have made [some radically different version of the modern world]" rather than "what can we predict would have happened if [some change happened]", and refrain from speculation on stuff that can't be predicted to any reasonable probability.

The line is also to some extent definable as between historians and fiction authors, though these can certainly overlap particularly in the amateur side of things.

comment by thomblake · 2012-04-12T21:47:33.711Z · LW(p) · GW(p)

Right, I should have mentioned that in grandparent. Thanks!

comment by HungryTurtle · 2012-04-12T20:34:09.098Z · LW(p) · GW(p)

I do that all the time. There seems to be nothing in the meaning of the word that means it cannot be applied to another.

Let me rephrase: it is irrational to make a declarative statement about the inner workings of another person's mind, seeing as there is no way for one person to fully understand the mental state of another.

That isn't true. It is simply a different form of communication. Description is different from argumentative persuasion. It is not (necessarily) irrational to do the former.

You talk to me about semantic gymnastics? No, it is not necessarily irrational to be descriptive without providing evidence. Authors of fiction can be descriptive and do not need to provide evidence, as can writers in several other mediums. But come on, do you really think that if you attack my writing and intentions you don't need evidence and that is okay?

NO! "Burden of proof" is for courts and social battles, not thinking. This isn't debate club either!

Wedrifid, if you do not think that it is the obligation of a rational statement to provide some evidence or reasoning to justify its claim, then I do not know what to say to you.

If you go around saying "You downvoted me therefore you suck!" then it drastically increases the chances that you will receive a reply "No, the downvotes are right because your comments sucked!"

Anyone who read my comments and interpreted them as me saying "you downvoted me therefore you suck!" is vilifying me. I made a comment about a time I got upvoted and how I did not understand why, out of everything I wrote, that sentence was deemed more rational. I never insulted anyone, or was demeaning in any way.

Replies from: wedrifid, thomblake, Random832, David_Gerard
comment by wedrifid · 2012-04-12T21:26:41.137Z · LW(p) · GW(p)

You talk to me about semantic gymnastics? No, it is not necessarily irrational to be descriptive without providing evidence. Authors of fiction can be descriptive and do not need to provide evidence, as can writers in several other mediums. But come on, do you really think that if you attack my writing and intentions you don't need evidence and that is okay?

You are blatantly ignoring the direct reference to the relevant evidence that I provided in the grandparent. I repeat that reference now - read your inbox, scroll back until you find the dozen or so messages saying 'this is just a contradiction!' or equivalent. I repeat with extra emphasis that your denial of any evidence is completely incredible.

Any benefit of the doubt that you are communicating in good faith is rapidly eroding.

comment by thomblake · 2012-04-12T20:56:39.297Z · LW(p) · GW(p)

Let me rephrase: it is irrational to make a declarative statement about the inner workings of another person's mind, seeing as there is no way for one person to fully understand the mental state of another.

No.

(Leaving aside the problems with declaring a course of action "irrational" without reference to a goal...)

There is no fact that I am 100% certain of. Any knowledge about the world is held at some probability between 0 and 1, exclusive. We make declarative statements of facts despite the necessary uncertainty. Statements about the inner workings of another person's mind are in no way special in that respect; I can make declarative statements about your mind, and I can make declarative statements about my mind, and in neither case am I going to be completely certain. I can be wrong about your motivations, and you can be wrong about your motivations.
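As a concrete illustration (the prior and likelihood numbers here are invented, purely to show the shape of the thing), repeated Bayesian updates can drive a probability arbitrarily close to 1 without ever reaching it:

```python
from fractions import Fraction

# Start from a 50% prior and update on five pieces of evidence, each
# 9 times likelier if the hypothesis is true than if it is false.
# (Invented numbers, for illustration only.)
p = Fraction(1, 2)
likelihood_ratio = Fraction(9, 1)

for _ in range(5):
    odds = p / (1 - p)        # probability -> odds
    odds *= likelihood_ratio  # Bayes' rule in odds form
    p = odds / (1 + odds)     # odds -> probability

print(float(p))  # ~0.99998: very confident
print(1 - p)     # 1/59050: the sliver of doubt never reaches zero
```

However much evidence accumulates, the posterior stays strictly between 0 and 1; certainty would require infinitely strong evidence.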

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T21:15:40.410Z · LW(p) · GW(p)

(Leaving aside the problems with declaring a course of action "irrational" without reference to a goal...)

If you make a claim about the character of another person or the state of reality, do you or do you not need some evidence to support it?

I can be wrong about your motivations, and you can be wrong about your motivations.

Isn't being rational about being less wrong? So if some declarative statements can be wrong, wouldn't it be rational to avoid making them?

Replies from: thomblake
comment by thomblake · 2012-04-12T21:20:27.170Z · LW(p) · GW(p)

If you make a claim about the character of another person or the state of reality, do you or do you not need some evidence to support it?

I can make claims about anything without supporting it, whether or not it's about someone's character. The moon is made of green cheese. George Washington was more akratic than my mother. See, there, I did it twice.

It can often be rational to do so. For example, if someone trustworthy offers me a million dollars for making the claim "two plus two equals five", I will assert "two plus two equals five" and accept my million dollars.

I'm confused that you do not understand this.

Replies from: TheOtherDave, MarkusRamikin, HungryTurtle
comment by TheOtherDave · 2012-04-12T21:34:54.324Z · LW(p) · GW(p)

If it helps resolve the confusion at all, my working theory is that HT believes unjustified and negative claims have been made about his/her character, and is trying to construct a formal structure that allows such claims to be rejected on formal grounds, rather than by evaluation of available evidence.

Replies from: thomblake
comment by thomblake · 2012-04-12T22:02:32.571Z · LW(p) · GW(p)

If it helps resolve the confusion at all

Thanks. That helps if true.

FWIW, I tend to respond to comments ignoring the context, as my main goal here is to improve the quality of the site by correcting minor mistakes (aside from cracking jokes and discussing Harry Potter).

comment by MarkusRamikin · 2012-04-12T21:33:08.109Z · LW(p) · GW(p)

Pretty sure he means epistemically irrational, not instrumentally.

That he's wrong about it, for the reasons you've listed, is another matter.

Replies from: thomblake
comment by thomblake · 2012-04-12T21:40:59.472Z · LW(p) · GW(p)

Pretty sure he means epistemically irrational, not instrumentally.

Probably. But I'm finding myself more and more in the "epistemic rationality is a case of instrumental rationality" camp, though not to any particular effect personally since I rate epistemic rationality very highly for its own sake.

comment by HungryTurtle · 2012-04-12T22:20:50.633Z · LW(p) · GW(p)

I understand what you are saying; you are saying that for the speaker of the statement it is not irrational, because the false statement might meet their motives. Or in other words, that rationality is completely dependent on the motives of the actor. Is this the rationality that your group idealizes? That as long as what I say or do works towards my personal motives it is rational? So if I want to convince the world that God is real, it is rational to make up whatever lies I see fit to delegitimize other belief systems?

So religious zealots are rational because they have a goal that their lies and craziness is helping them achieve? That is what you are arguing.

If someone told you that the moon was made of cheese without providing any evidence of the fact (if they had no reason to believe that, they just believed it), you, being a rational person, would think they were being irrational. And you know it. You just want to pick a fight.

Replies from: TheOtherDave, thomblake, Random832
comment by TheOtherDave · 2012-04-12T22:44:39.582Z · LW(p) · GW(p)

Or in other words, that rationality is completely dependent on the motives of the actor.

In the sense I think you mean it, yes. Two equally rational actors with different motives will perform different acts.

That as long as what I say or do works towards my personal motives it is rational?

Yes.

So if I want to convince the world that God is real, it is rational to make up whatever lies I see fit to delegitimize other belief systems?

If that's the most effective way to convince the world that God is real, and you value the world being convinced that God is real, yes.

So religious zealots are rational because they have a goal that their lies and craziness is helping them achieve?

Not necessarily, in that religious zealots don't necessarily have such goals. But yes, if a religious zealot who in fact values things that are in fact best achieved through lies and craziness chooses to engage in those lies and craziness, that's a rational act in the sense we mean it here.

If someone told you that the moon was made of cheese without providing any evidence of the fact (if they had no reason to believe that, they just believed it), you, being a rational person, would think they were being irrational.

Sure, that's most likely true.

You just want to pick a fight.

You may be right about thomblake's motives, though I find it unlikely. That said, deciding how likely I consider it is my responsibility. You are not obligated to provide evidence for it.

Replies from: thomblake
comment by thomblake · 2012-04-12T22:49:53.403Z · LW(p) · GW(p)

Thanks - much more concise than my reply. Though I disagree about this bit:

Sure, that's most likely true.

for reasons I've stated in a sibling.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T00:04:06.048Z · LW(p) · GW(p)

(nods) I was taking the "if they had no reason to believe that, they just believed it" part of the problem specification literally. (e.g., it's not a joke, etc.)

Replies from: thomblake
comment by thomblake · 2012-04-13T00:35:15.279Z · LW(p) · GW(p)

Aha - I glossed over that bit as irrelevant since the scenario is someone saying some words, which is clearly a case for instrumental rather than epistemic rationality. I should probably have read the "someone told you" as the irrelevant bit and answered as though we were talking about epistemic rationality.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T00:41:47.502Z · LW(p) · GW(p)

(nods) Of course in the real world you're entirely correct. That said, I find a lot of thought experiments depend on positing a situation I can't imagine any way of getting into and asking what follows from there.

comment by thomblake · 2012-04-12T22:40:12.616Z · LW(p) · GW(p)

I understand what you are saying; you are saying that for the speaker of the statement it is not irrational, because the false statement might meet their motives. Or in other words, that rationality is completely dependent on the motives of the actor.

Yes.

Is this the rationality that your group idealizes?

See the twelfth virtue:

Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.

You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.


If someone told you that the moon was made of cheese without providing any evidence of the fact (if they had no reason to believe that, they just believed it), you, being a rational person, would think they were being irrational. And you know it.

No, I would generally not think someone was "being irrational" without specific reference to their motivations. If I must concern myself with the fulfillment of someone else's utility function, it would usually take the form of "You should not X in order to Z because Y will more efficiently Z." ETA: I would more likely think that their statement was a joke, and failing that think that it's false and try to correct it. In case anyone's curious, "the moon is made of green cheese" was a paradigm of a ridiculous, unprovable statement before humans went to the moon; and "green cheese" in this context means "new cheese", not the color green.

You just want to pick a fight.

No, I'd rather be working on my dissertation, but I have a moral obligation to correct mistakes and falsehoods posted on this site.

I understand what you are saying; you are saying that for the speaker of the statement it is not irrational, because the false statement might meet their motives. Or in other words, that rationality is completely dependent on the motives of the actor.

Correct. As noted on another branch of this comment tree, this interpretation characterizes "instrumental rationality", though a similar case could be made for "epistemic rationality".

So religious zealots are rational because they have a goal that their lies and craziness is helping them achieve? That is what you are arguing.

That is not what I was arguing. If I understand you correctly, however, you mean to say that what I'm arguing applies equally well to that case.

The important part of that statement is "X is rational", where X is a human. Inasmuch as that predicate indicates that the subject behaves rationally most of the time, I would deny that it should be applied to any human. Humans are exceptionally bad at rationality.

That said, if a person X decided that course of action Y was the most efficient way to fulfill their utility function, then Y is rational by definition. (Of course, this applies equally well to non-persons with utility functions). Even if Y = "lies and craziness" or "religious belief" or "pin an aubergine to your lapel".

So if I want to convince the world that God is real, it is rational to make up whatever lies I see fit to delegitimize other belief systems?

That's a difficult empirical question, and outside my domain of expertise. You might want to consult an expert on lying, though I'd first question whether the subgoal of convincing the world that God is real, really advances your overall goals.

comment by Random832 · 2012-04-12T22:30:02.861Z · LW(p) · GW(p)

I think the idea that you are grasping for (and which I don't necessarily agree with) is that calling someone disingenuous is a dark side tool.

comment by Random832 · 2012-04-12T20:45:05.289Z · LW(p) · GW(p)

Let me rephrase: it is irrational to make a declarative statement about the inner workings of another person's mind, seeing as there is no way for one person to fully understand the mental state of another.

Is it irrational to call a statement a lie? As I had understood the word, "disingenuous" is a fancy way to say "lying".

Replies from: HungryTurtle, thomblake
comment by HungryTurtle · 2012-04-12T20:55:54.541Z · LW(p) · GW(p)

Yes, it is irrational to say something is a lie if you have no way of knowing whether it is a lie or not. Is this incorrect?

Replies from: Random832
comment by Random832 · 2012-04-12T20:59:55.683Z · LW(p) · GW(p)

(I don't suppose you'd be enlightened if I said "Yes, that's incorrect")

Do you consider it irrational to say the sky is blue when you are in a room with no window?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T21:11:38.771Z · LW(p) · GW(p)

(I don't suppose you'd be enlightened if I said "Yes, that's incorrect")

Tell me honestly, do you really think that it is rational to make a declarative statement about something you know nothing about?

Do you consider it irrational to say the sky is blue when you are in a room with no window?

No, because there is reason and evidence to support the statement that the sky is blue. The most obvious of which is that it has been blue your entire life.

No offense, but your example is a gross misrepresentation of the situation. I am not saying that no statement can be made without evidence (this is Wedrifid's semantic twist). My statement is that it is impossible for a person to directly know the subjective experience of another person's consciousness, and that it is irrational to say that you know whether someone is being insincere or not.

Replies from: Random832, wedrifid
comment by Random832 · 2012-04-12T21:55:28.754Z · LW(p) · GW(p)

I think at this point I have to ask when you consider it to be rational to make a declarative statement, and what is "nothing" vs "enough". And in particular, why you must have direct knowledge of the subjective experience to say they are being insincere.

If someone is here, on a site filled with reasonably intelligent people who understand logic, and demonstrates elsewhere that they are reasonably intelligent and understand logic, and in one particular argument makes obviously logically inconsistent statements, I don't need access to their state of mind to say they're being disingenuous. I don't know how well that maps to this situation or what has been claimed about it.

comment by wedrifid · 2012-04-12T21:19:56.154Z · LW(p) · GW(p)

Tell me honestly, do you really think that it is rational to make a declarative statement about something you know nothing about?

This does not apply to this situation.

comment by thomblake · 2012-04-12T20:49:21.014Z · LW(p) · GW(p)

Is it irrational to call a statement a lie?

It's fairly meaningless to call something "irrational" without reference to a particular goal.

As I had understood the word, "disingenuous" is a fancy way to say "lying".

No, it means lacking in sincerity, frankness, or candor - it usually refers more to attitude or style than content.

For example, strictly speaking a smile can't be a lie, but it can be disingenuous.

Pretending to be truth-seeking when one is actually trolling is also disingenuous, but might not involve any explicit lies.

Replies from: Random832, HungryTurtle
comment by Random832 · 2012-04-12T20:57:36.970Z · LW(p) · GW(p)

Well, maybe it's a bit too fancy to be stretched into "lying". But the point is one of dishonesty, of a difference between actual intent and visible signs, and, what's relevant here, it does imply a model of someone's actual intentions (for "disingenuous"), or actual beliefs (for "lie").

I don't agree with HT that it's irrational (his basis for this seems to imply that any declarative statement about anything ever is irrational), which is why I drew a comparison between calling someone disingenuous and something I assumed he would be unlikely to consider irrational in the sense he meant.

comment by HungryTurtle · 2012-04-12T20:57:36.296Z · LW(p) · GW(p)

So Mr. Thomblake,

If someone were to make a statement about what another person was sincere about, without even knowing that person, without ever having met that person, or without having spent more than a week interacting with that person, would you say their statement was irrational?

Replies from: thomblake, thomblake
comment by thomblake · 2012-04-12T21:06:12.685Z · LW(p) · GW(p)

So Mr. Thomblake,

By the way, you do not need to indicate who a comment is to in a reply - it is clearly listed at the top of any comment you post as a reply, and is automatically sent to the user's inbox.

Replies from: TheOtherDave, HungryTurtle
comment by TheOtherDave · 2012-04-12T21:09:20.002Z · LW(p) · GW(p)

It does have the effect of signaling that third parties are not welcome to respond, though, which might be desirable.

Replies from: thomblake, wedrifid, HungryTurtle
comment by thomblake · 2012-04-12T21:11:38.805Z · LW(p) · GW(p)

It does have the effect of signaling that third parties are not welcome to respond, though, which might be desirable.

There is a mechanism for that too - private messages.

Replies from: Random832, TheOtherDave
comment by Random832 · 2012-04-12T22:03:00.585Z · LW(p) · GW(p)

There is? Oh, there it is. It could stand to be a little more visible.

Replies from: thomblake
comment by thomblake · 2012-04-12T22:08:02.843Z · LW(p) · GW(p)

Agreed. Really just a prominent indication that they exist might be enough, since they're pretty much in the obvious place to check once you know they're there.

comment by TheOtherDave · 2012-04-12T21:27:02.042Z · LW(p) · GW(p)

True.

Though PMs don't let me argue with you in public, which eliminates most of the status-management function of argument. (I'm not claiming here that this is a bad thing, mind you.)

Replies from: thomblake
comment by thomblake · 2012-04-12T21:30:21.113Z · LW(p) · GW(p)

Indeed

comment by wedrifid · 2012-04-12T21:33:16.835Z · LW(p) · GW(p)

It does have the effect of signaling that third parties are not welcome to respond, though, which might be desirable.

This approximately describes my reason for downvoting the comment in question. I deny the right of anyone to choose who may reply to their public comments. That is, I deny the right of anyone to claim a soapbox from which they can speak without reply from those that their rhetoric may impact.

As thomblake mentioned, there is a private messaging feature for direct personal communication. For public communication everyone may reply.

Replies from: Random832
comment by Random832 · 2012-04-12T22:06:55.599Z · LW(p) · GW(p)

Does it really signal that, any more than using the word "you"? Will wedrifid downvote comments that do that, as well? (Who other than wedrifid is qualified to answer that last question, I wonder? I won't tell anyone they can't, but I do reserve the right to assign a low probability to them being correct if wedrifid makes a statement on the issue that contradicts someone else's, and such an answer will have lower value to me and I suspect anyone else reading it.)

Replies from: TheOtherDave, wedrifid
comment by TheOtherDave · 2012-04-12T22:15:14.559Z · LW(p) · GW(p)

Does it really signal that, any more than using the word "you"?

Assuming you're asking the question genuinely, rather than rhetorically: it certainly signals that to me more than using the word "you" does, yes. Of course, my reaction might be idiosyncratic.

Replies from: Random832, thomblake
comment by Random832 · 2012-04-12T22:27:37.654Z · LW(p) · GW(p)

In my own opinion, it's more grandstanding than anything... of the same sort as "are you actually claiming"* - it's an attempt to put the other person on the spot and make them feel as if they need to defend their opinions.

*No offense... I don't like the karma system either, but the comparison to soviet era repression was a bit much. But, then, maybe I don't feel a loss of karma as acutely since I don't write articles.

Replies from: TheOtherDave, thomblake
comment by TheOtherDave · 2012-04-12T22:33:40.812Z · LW(p) · GW(p)

I suspect you're right. Still, I try to treat questions as though they were sincere even if I'm pretty sure they're rhetorical; it seems the more charitable route.

comment by thomblake · 2012-04-12T22:47:21.070Z · LW(p) · GW(p)

But, then, maybe I don't feel a loss of karma as acutely since I don't write articles.

That is a real concern, though it was mostly solved via the discussion section. Article-writing has karma thresholds for posting, so someone who posts a bad article to main could lose enough karma to not be able to post anymore; furthermore, one would expect a first post to be bad and so it seems this might happen to every new user who tries to post an article. But the discussion section has lower penalties for article downvotes, and lower quality standards, so it's not so bad.

comment by thomblake · 2012-04-12T22:16:37.929Z · LW(p) · GW(p)

Of course, my reaction might be idiosyncratic.

I'd express my agreement as though it was evidence against the idiosyncraticity of the reaction, but it's very weak evidence since we've agreed on a few of these now.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T22:31:50.591Z · LW(p) · GW(p)

I now want a linguistic convention for expressing agreement while explicitly not proposing that agreement as independent evidence. Kind of like "I agree, but what do I know?" except I want it to be a single word and not read as passive-aggressive.

Replies from: Normal_Anomaly, thomblake
comment by Normal_Anomaly · 2012-04-12T22:47:15.003Z · LW(p) · GW(p)

Perhaps "seconded"?

comment by thomblake · 2012-04-12T22:43:36.419Z · LW(p) · GW(p)

I think it's too uncommon a case to warrant a short linguistic representation.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T00:05:12.786Z · LW(p) · GW(p)

Probably.

comment by wedrifid · 2012-04-12T22:13:55.608Z · LW(p) · GW(p)

Does it really signal that, any more than using the word "you"?

Somewhat more, yes.

Will wedrifid downvote comments that do that, as well?

Given that wedrifid seems to use 'you' rather frequently in comments, I infer that he doesn't have any particular problem with it. It should be noted that the surrounding context and content of the comment make a big difference to how wedrifid interprets such comments, more so than the specific nature of the address.

Who other than wedrifid is qualified to answer that last question, I wonder?

Vladimir_Nesov could probably give a decent guess. He's been around to see the wedrifid lesswrong persona for as long as lesswrong has been around, and of those who have expressed their voting style he probably has the most similar voting habits to wedrifid.

I won't tell anyone they can't, but I do reserve the right to assign a low probability to them being correct if wedrifid makes a statement on the issue that contradicts someone else's, and such an answer will have lower value to me and I suspect anyone else reading it.

For some people it would be tempting to say that a third party is more likely to correctly predict the other's voting pattern, simply because people have plausible incentive to lie. I thank you for your implied expression of confidence in my honesty! :)

Replies from: Random832
comment by Random832 · 2012-04-12T22:16:10.880Z · LW(p) · GW(p)

I concede the win to you, even though I still think the objection is silly, and that you should have simply asked him not to do it instead of the high-handed passive-aggressive "I will downvote any future comments that do that".

For some people it would be tempting to say that a third party is more likely to correctly predict the other's voting pattern, simply because people have plausible incentive to lie. I thank you for your implied expression of confidence in my honesty! :)

Well, no-one else can see your votes, anyway, can they? ;)

Replies from: wedrifid, thomblake
comment by wedrifid · 2012-04-12T22:37:09.804Z · LW(p) · GW(p)

I concede the win to you

Win? I'm confused. I didn't think I was competing with you about anything! Were we arguing about something? I thought I was expressing my preferences in the form of a tangent from your description.

passive-aggressive

Aggressive, perhaps, and I can understand why you may say "high-handed", but passive? There is no way that is remotely passive. It's a clear and direct expression of active policy! Approximately the opposite of passive. Some would even take it as a threat (even though it technically doesn't qualify as such, since it is just what would be done anyway even without any desire to control the other).

and that you should have simply asked him not to do it

I acknowledge your preference that I ask people not to do a behavior rather than declare that I downvote a behavior. In this case I will not comply but can at least explain. I don't like asking things of those with whom I have no rapport and no reason to expect them to wish to assist me. To me that just feels unnatural. In fact, the only reason I would ask in such a case is because to do so influences the audience and thereby manipulates the target. Instead I like to acknowledge where the real boundaries of influence are. I influence my votes. He influences his comments. Others influence their votes.

In the end it could be that you upvote most instances of a thing while I downvote most instances of a thing. We cancel out, with the only change being that between the two of us we lose one person's worth of voting nuance in such cases.

Replies from: Random832
comment by Random832 · 2012-04-12T22:45:02.132Z · LW(p) · GW(p)

The "passive-aggressive" bit, in my opinion, was where he solicited people's opinion on whether it offends them, and you skipped past actually saying it offends you and went to threats.

comment by thomblake · 2012-04-12T22:24:43.242Z · LW(p) · GW(p)

the high-handed passive-aggressive "I will downvote any future comments that do that".

I assume you're referring to this:

For what it is worth, I will (continue to) downvote comments that take the form and role that the great-grandparent takes. Take that into consideration to whatever extent karma considerations bother you.

FWIW, it did not read that way to me - it seemed like an efficient statement of consequences. Asking someone not to do X does not imply X will be downvoted in the future. And folks like myself sometimes make comments with negative expected karma.

Replies from: Random832
comment by Random832 · 2012-04-12T22:32:58.077Z · LW(p) · GW(p)

If someone has stated they won't do it if asked not to, and his goal was for there to be fewer such comments in the future (which is the general goal of downvoting things), then it would have been more efficient, in terms of achieving that goal, to lead by simply asking politely, and maybe add "and I'll downvote anyone who does" as a postscript.

Leading with the downvote threat seemed needlessly belligerent.

comment by HungryTurtle · 2012-04-12T21:17:32.795Z · LW(p) · GW(p)

Or it is just polite

comment by HungryTurtle · 2012-04-12T21:18:00.805Z · LW(p) · GW(p)

I think it is polite, and if it does not offend you or anyone else I will keep doing it.

Replies from: thomblake, wedrifid
comment by thomblake · 2012-04-12T21:24:03.430Z · LW(p) · GW(p)

I find it impolite - it increases the length of your comment and number of characters on the screen and does not provide any information. That said, I am not terribly bothered by it, so 'whatever floats your gerbil'.

comment by wedrifid · 2012-04-12T21:36:12.250Z · LW(p) · GW(p)

I think it is polite, and if it does not offend you or anyone else I will keep doing it.

For what it is worth, I will (continue to) downvote comments that take the form and role that the great-grandparent takes. Take that into consideration to whatever extent karma considerations bother you.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T21:59:35.775Z · LW(p) · GW(p)

So if it bothers you, why not just say so? I said if it offends you or anyone else, please tell me. At this point, wedrifid, all you are doing is hazing me due to personal insecurities.

Replies from: wedrifid, thomblake
comment by wedrifid · 2012-04-12T22:07:40.631Z · LW(p) · GW(p)

So if it bothers you, why not just say so?

Didn't I just do that? I phrased it in terms of what I can control (my own votes) and what influence that has on you (karma). That gives no presumption or expectation that you must heed my wishes.

At this point, wedrifid, all you are doing is hazing me due to personal insecurities.

That is a big leap! I don't think I'm doing that. Mind you, given the power that hazing has in making significant and lasting change in people, I would make use of it as a tool of influence if I could work out how!

Replies from: TheOtherDave, HungryTurtle
comment by TheOtherDave · 2012-04-12T22:26:33.678Z · LW(p) · GW(p)

Hm.
I was going to say that it's certainly possible.
But then, thinking about it some more, I realized that my working definition of "hazing" is almost entirely congruent with "harassment." It's certainly possible to harass people on LW, but the social costs to the harasser can be significant (depending on how well it's finessed, of course).
I guess that to get the social-influence-with-impunity effects of hazing I'd need to first establish a social norm sufficiently ubiquitous that harassing someone in the name of enforcing that social norm is reliably categorized as "enforcing the norm" (and thus praised) rather than "harassing people" (and thus condemned).

Hm.
It seems to follow that the first step would be to recruit local celebrities (Eliezer, Luke, Alicorn, Yvain, etc.) to your cause. Which means, really, that the first step would be to make a compelling case for the significant changes you want to use hazing to make.

It also seems likely that you should do that in private, rather than be seen to do so.
In fact, perhaps your first step ought to be to convince me to delete this comment.

Replies from: thomblake, wedrifid
comment by thomblake · 2012-04-12T23:02:27.203Z · LW(p) · GW(p)

In fact, perhaps your first step ought to be to convince me to delete this comment.

This reminds me of a time a friend suggested, out loud in public, burning down someone's house in retribution, and I was like "Shut up, we can't use that plan now!" It's annoying when possible paths get pruned for no good reason.

comment by wedrifid · 2012-04-13T05:23:30.544Z · LW(p) · GW(p)

Sounds like more trouble than it is worth... unless.... Do I get to dress up in a cloak and spank people with paddles? Ooh, and beer pong and sending chicks on 'walks of shame'. I really missed out - we don't have frats here in Aus.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T05:26:59.985Z · LW(p) · GW(p)

Hey, you're the one who said you'd use it if you knew how. I was just responding to the implied need.

That said, I hereby grant you dispensation to dress up in a cloak. I'm probably OK with you playing beer pong, though I'm not really sure what that is. The spanking and the sending people on walks you'll have to negotiate with the spankees and the walkers.

Replies from: wedrifid
comment by wedrifid · 2012-04-13T05:35:07.864Z · LW(p) · GW(p)

That said, I hereby grant you dispensation to dress up in a cloak. I'm probably OK with you playing beer pong, though I'm not really sure what that is.

I'm not either, to be honest.

The spanking and the sending people on walks you'll have to negotiate with the spankees and the walkers.

Come to think of it I think I might dismiss the 'spankees' class entirely, abandon the 'walk' notion and proceed to negotiate mutual spanking options with those formerly in the 'walker' class. I don't think I have the proper frat-boy spirit.

Replies from: enoonsti
comment by enoonsti · 2012-04-13T05:42:32.084Z · LW(p) · GW(p)

I think the people who play beer pong don't even know what it is.

Replies from: wedrifid
comment by wedrifid · 2012-04-13T05:45:22.605Z · LW(p) · GW(p)

Oh no. It's like Calvinball for jocks.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T13:39:51.188Z · LW(p) · GW(p)

That is my new favorite thing.

comment by HungryTurtle · 2012-04-12T22:30:53.439Z · LW(p) · GW(p)

So you didn't just go through and downvote a ton of my posts all at once?

Replies from: wedrifid
comment by wedrifid · 2012-04-12T22:40:46.221Z · LW(p) · GW(p)

So you didn't just go through and downvote a ton of my posts all at once?

No, I couldn't have done that. I had already downvoted the overwhelming majority of your comments at the time when I encountered them. We've already had a conversation about whether or not the downvotes you had received were justified - if you recall, I said 'yes'. I'm not allowed to vote down twice, so no karma-assassination for me!

comment by thomblake · 2012-04-12T22:04:38.739Z · LW(p) · GW(p)

At this point, wedrifid, all you are doing is hazing me due to personal insecurities.

I'm confused. Aren't personal insecurities the sort of thing you claimed was 'irrational' to comment on? Have you reversed your position, or do you not care about being rational, or is this a special case?

Replies from: thomblake
comment by thomblake · 2012-04-12T22:14:21.672Z · LW(p) · GW(p)

Before anyone complains...

It appears that someone has been downvoting every comment in this tree. Which is arguably appropriate, since responding to trolls is nearly as bad as trolling, and by Lucius's standard (roughly, 'Cui bono?') this thread has undeniably been trolling.

comment by thomblake · 2012-04-12T20:58:38.577Z · LW(p) · GW(p)

would you say their statement was irrational?

No, I don't characterize actions as flatly "irrational", and statements are not a special case.

comment by David_Gerard · 2012-04-12T20:52:48.699Z · LW(p) · GW(p)

This may make things clearer.

Replies from: wedrifid
comment by wedrifid · 2012-04-12T21:38:51.106Z · LW(p) · GW(p)

This may make things clearer.

It does not seem to be at all relevant. Which surprised me. I followed your link because I saw it was to the "not insane unsane" post, which has a certain similarity to the rational/not-rational subject. Instead I can only assume you are trying to make some sort of petty personal insinuation - try as I might, I can't see any other point you could be making that isn't a plain non-sequitur.

(I am left with the frustrating temptation to delete the referent despite considering it a useful or at least interesting contribution. The potential for any given comment to be used out of context can be limiting!)

comment by wedrifid · 2012-04-12T15:40:26.919Z · LW(p) · GW(p)

It is especially disturbing to think that your hypothesis is right, and I am being downvoted because the ideas I am trying to express have been labeled as 'pollution.' Doesn't that strike you as very akin to Soviet-Russia-style censorship?

I must admit that history was never my strong suit. In Soviet Russia was 'censorship' a euphemism for "Sometimes when your peers think what you are saying is silly they give indications of disapproval"? I thought it was something more along the lines of:

  • Done by the authorities, not by the democratic action of your peers.
  • Involves stopping you from being able to say stuff and the confiscation or destruction of materials containing ideas they don't want spread.
  • Some sort of harm is done to you. Maybe beatings. Perhaps a little killing of you and/or your family. Gulags may be involved somehow.

It would seem that if the voting pattern you have experienced represents a failure mode of some kind, it is a decidedly democratic failure mode, not a communist one.

comment by TheOtherDave · 2012-04-12T14:01:10.375Z · LW(p) · GW(p)

The rule of thumb here about votes is "downvote what you want less of, upvote what you want more of." Everybody gets one vote. (Modulo cheaters who create sock-puppets, of course.)

Is that akin to your model of "Soviet-Russia-style censorship"? I'm no expert on politics, but it sounds much more like American-style democracy to me.

Incidentally, from what I've seen you do much more advocating for your position than "attempting to explain" it.
If you want to advocate, go ahead, but I would prefer you label it honestly.

Replies from: Random832, Swimmer963, HungryTurtle, wedrifid
comment by Random832 · 2012-04-12T14:35:09.087Z · LW(p) · GW(p)

"Everybody gets one vote. (Modulo cheaters who create sock-puppets, of course.)" Also modulo people who take a proportionally large karma loss from serial downvoters.

Why is it that a minimum karma is required to downvote, when downvoting does not entail an expenditure of karma?

Replies from: TheOtherDave, thomblake
comment by TheOtherDave · 2012-04-12T15:02:33.106Z · LW(p) · GW(p)

Yes, you're right: also modulo people who have exceeded their downvote limit, which is tied to their karma score.

comment by thomblake · 2012-04-12T14:56:05.932Z · LW(p) · GW(p)

Why is it that a minimum karma is required to downvote

Downvoting several times hides posts from most readers, and so is potentially destructive. Destructive powers are not handed out to new members of the community for various reasons. For example, if there was no minimum karma requirement, then new folks could seriously wreck the site by hiding what members of the community think is "good content".

For a possibly-related post, see well-kept gardens die by pacifism, which is in favor of more downvoting behavior but was posted just before the downvote cap was introduced.

Replies from: Random832, HungryTurtle
comment by Random832 · 2012-04-12T15:29:55.428Z · LW(p) · GW(p)

I actually meant my post as an argument in favor of a karma cost to downvoting, not an opposition to a minimum karma requirement.

(re the other thing, I'm pretty sure HungryTurtle's use of "democratic" was a rebuttal to TheOtherDave's "Everybody gets one vote" and "sounds like democracy" applause lights)

Replies from: thomblake
comment by thomblake · 2012-04-12T15:35:35.211Z · LW(p) · GW(p)

Ah. I'm generally against a karma cost to downvoting, for basically the reasons outlined in well kept gardens die by pacifism - people read a loss of karma as "punishment" and should not be punished for helping to curate.

That said, I'm not very strongly against it, as I've seen it used effectively on Q&A sites like Stack Overflow, and I think being emotionally tied to high karma scores is silly.

ETA:

I'm pretty sure HungryTurtle's use of "democratic" was a rebuttal to TheOtherDave's "Everybody gets one vote" and "sounds like democracy" applause lights

Yeah, I figured that out after, but it just seemed like a non-sequitur as a reply to my comment.

Replies from: wedrifid, Random832
comment by wedrifid · 2012-04-12T15:44:01.510Z · LW(p) · GW(p)

Ah. I'm generally against a karma cost to downvoting, for basically the reasons outlined in well kept gardens die by pacifism - people read a loss of karma as "punishment" and should not be punished for helping to curate.

I too see downvoting as an altruistic service.

That said, I'm not very strongly against it, as I've seen it used effectively on Q&A sites like Stack Overflow, and I think being emotionally tied to high karma scores is silly.

I'm not too against it myself either. If I start running out of my 19k I'll post some Rationalists Quotes or something.

Replies from: thomblake
comment by thomblake · 2012-04-12T16:10:15.538Z · LW(p) · GW(p)

If I start running out of my 19k I'll post some Rationalists Quotes or something.

Yeah, I was going to make a comment about how the karma system is easy enough to game, but then I realized that by "game" I meant "write high-quality posts about rationality". Rewriting a Wikipedia article about a cognitive bias we haven't covered yet is probably worth about 500 karma. 1000 if it contains actionable material.

comment by Random832 · 2012-04-12T15:40:58.538Z · LW(p) · GW(p)

Loss of karma is a punishment. It only seems like it's not when yours is high enough to isolate you from the actual effects and any realistic chance of having it wiped out over a single disagreement. Having it cost karma to downvote would make people think twice before downvoting a post that is already out of view, or downvoting all of someone's posts in a subthread below a post that is already out of view.

The current system encourages piling on.

(EDIT: replaced "everyone's posts" with "all of someone's posts", original wording was a mistake)

Replies from: thomblake
comment by thomblake · 2012-04-12T16:00:00.908Z · LW(p) · GW(p)

Loss of karma is a punishment. It only seems like it's not when yours is high enough to isolate you from the actual effects and any realistic chance of having it wiped out over a single disagreement. Having it cost karma to downvote would make people think twice before downvoting a post that is already out of view, or downvoting all of someone's posts in a subthread below a post that is already out of view.

It doesn't seem to me like I would regard it as punishment even if someone could wipe out all my karma at once, and I would not downvote less if it cost me karma to downvote (assuming that was done instead of and equivalently to the downvote cap).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T16:32:48.092Z · LW(p) · GW(p)

After about 5 minutes of thought...

I am .85+ confident that replacing the downvote cap with a policy of spending karma to downvote would result in the total number of users issuing at least one downvote in a given month dropping by at least 50%, and .6+ confident of it dropping by at least 75%.

I am less confident about what effect it would have on the downvoting patterns of users who continue to issue at least one downvote. Call it (.2) no measurable effect, (.3) increased downvote rate, (.5) decreased rate, just to put some lines in the sand.

Replies from: thomblake, Random832
comment by thomblake · 2012-04-12T16:38:52.924Z · LW(p) · GW(p)

None of that sounds blatantly unreasonable to me.

comment by Random832 · 2012-04-12T17:14:15.797Z · LW(p) · GW(p)

Do you think that between the detrimental effects of giving people angry about legitimate downvotes a target, and the beneficial effects of making people accountable for actual misuse of downvoting, making vote information publicly available would be a net benefit or net harm? (if downvoting itself is a good thing, wouldn't people be rewarded in their standing in the community if people saw them making good downvotes?)

What about the effect of being able to downvote someone multiple times in a single subthread (with real effects on their karma) discouraging people from responding to requests for clarification? I know I'm not going to make that mistake again after getting burned.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T17:43:59.065Z · LW(p) · GW(p)

A few related but distinct things here.

  • I expect making vote information public would change (>95% of) users' processes for deciding whether to vote, introducing significantly more consideration for the signaling effects of being seen to upvote or downvote a comment/post, and therefore proportionally less consideration for the desire to have more or fewer comments/posts like that one. I expect that, in turn, to reduce the overall emphasis on post/comment quality, which would likely make this site less valuable to me.

  • Bulk upvoting/downvoting like you describe is a trickier business. It often seems that people do so without really evaluating the comments they are voting on, as a way of punishing individuals. The term "karmassassination" is sometimes used around here to refer to that practice, and it's frowned upon. On the other hand, voting on multiple comments in a thread, either because one wishes to see more/fewer threads of that sort, or because one genuinely considers each one to be individually entitled to the vote, is considered perfectly acceptable. It is, of course, difficult to automate a system that allows one but not the other.

  • Thinking about it now, enforcing a delay period between downvotes... say, preventing me from issuing more than one downvote in a 30-second period... might be a good modification (see the sketch after this list).

  • A common problem with positive punishment as a training mechanism is that subjects overgeneralize on the target... e.g., learn some global lesson like "don't ever respond to requests for clarification" even if the punisher intended a more narrow lesson like "don't make comments like this one while responding to requests for clarification". A (positive reinforcement plus negative punishment) training program, where undesirable behavior is ignored and desired behavior is rewarded, tends to work better, but requires significant self-discipline on the part of the trainer. When the training responsibility is distributed, this is difficult to manage. On public forums like this one, I've never seen it implemented successfully; someone always ends up rewarding the undesirable behavior.
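(For concreteness, a minimal sketch of the delay-period idea in Python; the function and variable names, and the 30-second constant, are illustrative assumptions rather than anything the site actually implements:)

```python
# Hypothetical per-user downvote rate limit (not actual LW code):
# reject a downvote if this user downvoted anything within the
# last 30 seconds.
import time

DOWNVOTE_DELAY_SECONDS = 30.0
_last_downvote: dict[str, float] = {}  # user id -> time of last downvote

def try_downvote(user_id: str) -> bool:
    """Return True if the downvote may proceed, False if rate-limited."""
    now = time.time()
    last = _last_downvote.get(user_id)
    if last is not None and now - last < DOWNVOTE_DELAY_SECONDS:
        return False
    _last_downvote[user_id] = now
    return True
```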

Replies from: Random832, Random832
comment by Random832 · 2012-04-12T19:19:23.928Z · LW(p) · GW(p)

What about randomly (1 time in 10, say) requiring downvotes to be accompanied by an explanation (which will be posted as a comment, exposed to downvotes by the rest of the community if it's a bad reason, and upvotes if it's a good one)?
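(A minimal sketch of how that random selection might work, in Python; the names and the exact rate are illustrative assumptions, not a real site feature:)

```python
# Hypothetical sketch of "1 downvote in 10 must carry a reason";
# nothing here comes from the actual LessWrong codebase.
import random

EXPLANATION_RATE = 0.1  # roughly one downvote in ten

def downvote_needs_reason() -> bool:
    """Randomly flag a downvote as requiring a posted explanation."""
    return random.random() < EXPLANATION_RATE

# At vote time: if downvote_needs_reason() is True, the UI would hold
# the vote until a reason is supplied, then post the reason as a
# comment that is itself subject to up- and downvotes.
```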

What about allowing a post to be marked as "response to clarification request" and not subject to voting by anyone but the person it is in reply to?

Replies from: thomblake, TheOtherDave
comment by thomblake · 2012-04-12T19:35:20.114Z · LW(p) · GW(p)

What about randomly (1 time in 10, say) requiring downvotes to be accompanied with an explanation (which will be posted as a comment, exposed to downvotes by the rest of the community if it's a bad reason, and upvotes if it is a good one)?

In the face of such a mechanism, I would surely protest it by posting a minimal comment along with the downvote, and also deleting it if that's an option. Curation already feels somewhat like work; it doesn't need to get harder.

What about allowing a post to be marked as "response to clarification request" and not subject to voting by anyone but the person it is in reply to?

Some folks actually won't vote on anything in a thread they've commented on for neutrality reasons, and the last bit there seems inharmonious with that.

Is "get thicker-skinned about downvoting" an option?

How long have you been lurking here? You seem to have a lot of opinions about how good the existing mechanisms are for someone who hasn't been commenting for very long.

Replies from: Random832, Random832
comment by Random832 · 2012-04-12T19:51:27.378Z · LW(p) · GW(p)

Curation already feels somewhat like work; it doesn't need to get harder.

I haven't seen a credible argument how downvoting a post that's already at -11, or one that's under several layers of collapsed posts, is "curation".

Replies from: TheOtherDave, thomblake
comment by TheOtherDave · 2012-04-12T20:07:50.882Z · LW(p) · GW(p)

I observe that the line you're quoting was in response to your suggestion about the proposed required-explanation-for-downvote feature, which was not being proposed in the context of a post that was already at -11.

I infer that either you lost track of the context and genuinely believed thomblake was responding in that context, or you intentionally substituted one context for another for some purpose, presumably to make thomblake seem wrong and you seem right by contrast.

The former is a more charitable assumption, so with some misgivings, I am making it.

Replies from: Random832
comment by Random832 · 2012-04-12T20:24:41.248Z · LW(p) · GW(p)

I was responding to the general idea that downvoting is "curation". I don't see why the specific context is necessary for that. Are you suggesting he wouldn't have said the same thing in the other context? That posts already at or below -2 and posts in collapsed subthreads get downvoted shows that people downvote with non-curation purposes. Maybe the site would benefit from an explanation of what purposes they do have.

Replies from: thomblake, TheOtherDave
comment by thomblake · 2012-04-12T21:15:16.893Z · LW(p) · GW(p)

That posts already at or below -2 and posts in collapsed subthreads get downvoted shows that people downvote with non-curation purposes.

No. I do not generally check whether a comment is in a collapsed subthread before downvoting it. I downvote low-quality comments. It is more efficient.

comment by TheOtherDave · 2012-04-12T20:53:01.862Z · LW(p) · GW(p)

If someone says that food is tasty and I reply "I don't see how you can consider durian fruit tasty" I have gone from the general context (food) to a specific context (durian fruit).

In much the same way, if someone says downvoting is curation and I reply "well, nobody's explained how downvoting a post that's already at -11 is 'curation'" I have gone from the general context (downvoting) to a specific (downvoting highly downvoted comments).

I would consider it reasonable, if I did either of those things, for an observer to conclude that I'd changed the context intentionally, in order to make it seem as though the speaker had said something I could more compellingly disagree with.

comment by thomblake · 2012-04-12T20:45:03.988Z · LW(p) · GW(p)

I haven't seen a credible argument how downvoting a post that's already at -11, or one that's under several layers of collapsed posts, is "curation".

I don't think I've seen one either.

Though it's worth noting that downvotes tend to be front-loaded in time, so something that's at -5 a little while after posting could easily rise to +6 in only about a week. So your downvotes don't 'stop counting' once the comment is already at -5.

Replies from: Random832
comment by Random832 · 2012-04-12T20:49:46.824Z · LW(p) · GW(p)

I wonder if an algorithm could be invented to reduce the front-loading in time of negative karma from downvoting that is meant to offset later potential upvotes. Such a thing might have headed off the whole incident in the other thread (he's stated that he was "ready to fight" out of anger from seeing half his karma gone).

Replies from: TheOtherDave, thomblake, thomblake
comment by TheOtherDave · 2012-04-12T21:22:48.794Z · LW(p) · GW(p)

If I'm understanding you correctly, sure. Just truncate all reported net karma scores for comments and posts at zero (while still recording the actual score), and calculate user total karma from reported karma rather than actual.
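(A minimal sketch of that truncation rule, with hypothetical names; this is an illustration of the idea, not the site's actual code:)

```python
# Record the true score, but never report a comment or post as below
# zero, and compute a user's total karma from the reported scores.
# All names here are hypothetical.

def reported_score(actual_score: int) -> int:
    """Displayed score: truncated at zero."""
    return max(actual_score, 0)

def reported_total_karma(actual_scores: list[int]) -> int:
    """User karma computed from reported, not actual, scores."""
    return sum(reported_score(s) for s in actual_scores)

# Example: actual scores [7, -11, 2] display as [7, 0, 2], total 9.
```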

The suggestion gets made from time to time. Some people think it's a good idea, others don't.

More generally, no mechanism that allows a community to communicate what they do and don't value will serve to prevent people whose contributions the community judges as valueless (or less valuable than they consider appropriate) from being upset by that judgment being communicated.

The question becomes to what degree a given community, acknowledging this, chooses to communicate their value judgments at the potential cost of upsetting people.

comment by thomblake · 2012-04-12T21:04:42.939Z · LW(p) · GW(p)

That might be an interesting experiment. I'm not confident I can predict what the results would be, given the effect you mention and the large amounts of "corrective voting" I've seen.

I imagine the mechanism would apply incoming downvotes immediately until the score reached -2, and then apply the remaining pending downvotes either whenever the score rose above -2 or at some specified rate over time.
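(A rough sketch of that mechanism, implementing the release-on-upvote variant; all names are hypothetical, and this is a sketch of the idea rather than anything implemented:)

```python
# Downvotes apply immediately until the visible score reaches -2;
# the rest are held pending and released whenever an upvote lifts
# the score back above the floor. Hypothetical names throughout.

FLOOR = -2

class CommentScore:
    def __init__(self) -> None:
        self.visible = 0   # score readers see
        self.pending = 0   # downvotes held back

    def downvote(self) -> None:
        if self.visible > FLOOR:
            self.visible -= 1
        else:
            self.pending += 1

    def upvote(self) -> None:
        self.visible += 1
        # Release held downvotes while the score sits above the floor.
        while self.pending and self.visible > FLOOR:
            self.visible -= 1
            self.pending -= 1
```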

But the developer in me is saying that's a too-complicated system with questionable benefit.

comment by thomblake · 2012-04-12T21:29:36.756Z · LW(p) · GW(p)

Such a thing might have headed off the whole incident in the other thread (he's stated that he was "ready to fight" out of anger from seeing half his karma gone)

For reference, the discussed thread is here and User:pleeppleep is the user in question.

comment by Random832 · 2012-04-12T19:46:28.489Z · LW(p) · GW(p)

"Is "get thicker-skinned about downvoting" an option?"

Not at zero karma, it's not.

How long have you been lurking here? You seem to have a lot of opinions about how good the existing mechanisms are for someone who hasn't been commenting for very long.

A couple weeks. Am I really less qualified to examine the current system's actual effect on new and low-karma users, though?

Replies from: thomblake
comment by thomblake · 2012-04-12T19:52:52.540Z · LW(p) · GW(p)

A couple weeks. Am I really less qualified to examine the current system's actual effect on new and low-karma users, though?

Yes. As far as I can tell, you don't have hard data about the impact these things have on usage. Given that, I'm comparing my general impressions gathered over the past 4 years to your general impressions gathered over the past couple weeks.

comment by TheOtherDave · 2012-04-12T19:29:31.997Z · LW(p) · GW(p)

What do you expect the results of those changes to be?

Replies from: Random832
comment by Random832 · 2012-04-12T19:34:48.919Z · LW(p) · GW(p)

Providing [even sporadic] explanations for downvotes will allow people who are downvoted for good reasons a clearer way to adjust their behavior.

Exposing downvote reasons to community moderation will allow bad downvoting to be punished and good downvoting to be rewarded (this last one has the additional effect of raising the user's downvote cap so they can continue making good downvotes.)

What I expect the second one to provide is obvious: remove the problematic incentives discouraging people from providing clarifications if the original post has been downvoted.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T19:39:28.217Z · LW(p) · GW(p)

OK. Thanks for clarifying that.

comment by Random832 · 2012-04-12T17:49:13.670Z · LW(p) · GW(p)

I was not describing bulk downvoting that could reasonably be called "karmassassination" or anything like that. This is limited to one subthread. The point is that you're downvoting someone twice for the same thing, for no better reason than that it's spread across two posts. It discourages people from answering replies to their posts (and rewards editing answers into the original post [which doesn't notify the person you're responding to], or simply not engaging in discussion), which stifles discussion. The downvoter (who is not engaging in discussion to explain why they do not like the comments) then has an extra opportunity to "legitimately" strike again, even though the downvotes are individually legitimate under the "want to have fewer posts like this one" theory of why people vote.

tl;dr:

On the other hand, voting on multiple comments in a thread, either because one wishes to see more/fewer threads of that sort, or because one genuinely considers each one to be individually entitled to the vote, is considered perfectly acceptable.

What I am suggesting is that it is considered "perfectly acceptable" in part because people have not fully considered this effect.

Maybe if the votes were allowed but the karma effect reduced?

P.S.

"learn some global lesson like "don't ever respond to requests for clarification" even if the punisher intended a more narrow lesson like "don't make comments like this one while responding to requests for clarification"."

The point is that the punishment is for failing to change your mind. If you continue the discussion with anything but a full retraction, it's likely that whatever the silent downvoter disliked is not fixed. So, no, I won't be replying to requests for clarification - people can accept the inconvenience of watching the original post for additions as a cost of the current system.

And it is a global lesson: fewer posts on a topic unpopular enough to draw downvotes always means fewer downvotes, because if you stick to one post the downvoters can't hit you twice while remaining within the "downvoting rules".

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T17:59:10.211Z · LW(p) · GW(p)

If you continue the discussion with anything but a full retraction, it's likely that whatever the silent downvoter disliked is not fixed.

That's inconsistent with my experience here.

So, no, I won't be replying to requests for clarification

You are, of course, free to do that.

Replies from: thomblake
comment by thomblake · 2012-04-12T18:40:33.669Z · LW(p) · GW(p)

That's inconsistent with my experience here.

Ditto.

Requests for clarification, in particular, are often upvoted on net after a few days.

Replies from: wedrifid, Random832
comment by wedrifid · 2012-04-12T18:45:24.125Z · LW(p) · GW(p)

Requests for clarification, in particular, are often upvoted on net after a few days.

Except, in some cases, when they are blatantly passive aggressive requests.

comment by Random832 · 2012-04-12T19:17:37.913Z · LW(p) · GW(p)

I don't see how that's relevant; I'm talking about responses to requests for clarification. Controlling for original posts that had a negative score: any downvotes that were due to disagreement with someone's position are obviously unlikely to change with clarification, and the response will get another downvote.

Replies from: TheOtherDave, thomblake
comment by TheOtherDave · 2012-04-12T19:28:04.161Z · LW(p) · GW(p)

any downvotes that were due to disagreement with someone's position are obviously unlikely to change with clarification, and the response will get another downvote

You continue to imply that voting behavior is entirely a function of whether voters agree with the commenter's position. This continues to not match my experience.

It's certainly possible that you're correct and that I'm drawing the wrong lesson from my experience, of course.

It isn't obvious, though.

Replies from: Random832
comment by Random832 · 2012-04-12T19:40:11.618Z · LW(p) · GW(p)

My assumption is that disagreement is one of several reasons that people downvote, and that people are more likely to volunteer explanations (especially to new users) for the other reasons than for disagreement. Therefore, I assumed that the downvotes I got with no explanation were for disagreement. The one person who provided an alternate theory of why I was getting downvoted denied being one of the downvoters, and when I took his advice and clarified something from an earlier post, the new comment was also downvoted.

When I said I had observed a spoiler being stated "numerous" times in the thread, as evidence that the spoiler policy wasn't preventing this effectively, someone replied asking for a list of links to specific comments; I replied with nine, and that post was downvoted three times.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T19:55:00.594Z · LW(p) · GW(p)

My assumption is that disagreement is one of several reasons that people downvote

I agree with this

people are more likely to volunteer explanations (especially to new users) for the other reasons than for disagreement

I suppose this is possible, but I doubt the size of the effect is significant.

comment by thomblake · 2012-04-12T19:28:22.504Z · LW(p) · GW(p)

I'm talking about responses to requests for clarification

Aha - I'd missed that bit.

any downvotes that were due to disagreement with someone's position are obviously unlikely to change with clarification

Sure, though I usually avoid downvoting for disagreement, and I've gotten the impression that's still a norm around here.

ETA: And actually countered somewhat by the tendency of several frequent users to upvote for disagreement.

comment by HungryTurtle · 2012-04-12T15:06:54.541Z · LW(p) · GW(p)

I agree there is a reason for it, but you agree that this is not democratic, right?

Replies from: thomblake
comment by thomblake · 2012-04-12T15:10:57.239Z · LW(p) · GW(p)

I'm pretty sure "democratic" is ill-defined - that's charitably assuming its use above was not just an applause light.

Taboo "democratic".

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T17:25:09.286Z · LW(p) · GW(p)

When you say "use above" I assume you are referring to TheOtherDave, because my questioning of the democratic principles of Lesswrong Karma were because it was described in response to my comment as democratic.

Replies from: thomblake
comment by thomblake · 2012-04-12T18:06:57.914Z · LW(p) · GW(p)

When you say "use above" I assume you are referring to TheOtherDave

No, I was referring to your use of the word in this comment, whose parent (my comment) did not use the word "democratic" at all.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T19:44:10.941Z · LW(p) · GW(p)

Ok, but your parent comment exists within a context. It was responding to Random832, who was responding to TheOtherDave's comment about democracy. I was not solely responding to you, but to your comment within the context of TheOtherDave's.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-12T14:21:05.646Z · LW(p) · GW(p)

Incidentally, from what I've seen you do much more advocating for your position than "attempting to explain" it.

I disagree–in most of my discussion with HungryTurtle, neither of us seemed to understand the other's position, and we were both getting 'ugh' reactions to various surface factors. So the meat of his comments was trying to explain and clarify...with the final result that our diverging philosophies 'add up to normality' and we actually don't disagree on all that much. Would you call that 'advocating' or 'attempting to explain'?

My hypothesis was that he was getting downvotes for the same reason that I was getting an ugh reaction–lots of buzzwords that seem to go against everything LessWrong represents. The difference is that when I have an ugh reaction, I like to poke and prod and investigate it, not downvote it and move on.

Replies from: HungryTurtle, TheOtherDave
comment by HungryTurtle · 2012-04-12T15:08:09.859Z · LW(p) · GW(p)

We have the same hypothesis

comment by TheOtherDave · 2012-04-12T15:00:13.673Z · LW(p) · GW(p)

I would call an attempt to explain and clarify one's position culminating in agreement "attempting to explain".

comment by HungryTurtle · 2012-04-12T15:05:05.057Z · LW(p) · GW(p)

Is that akin to your model of "Soviet-Russia-style censorship"? I'm no expert on politics, but it sounds much more like American-style democracy to me.

I wasn't aware that in a democracy you had to first have majority approval to share your ideas on the main stage, or that the ideas of the minority could be repressed. I also wasn't aware that in a democracy the minority is not allowed to critique the majority. Or maybe you weren't aware that downvoting requires a certain amount of positive karma. How are any of the above-mentioned things democratic? EDIT: There are democratic elements to the karma system, but undemocratic elements exist within it as well.

Replies from: Swimmer963, TheOtherDave, Desrtopa, thomblake, thomblake
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-12T18:31:57.883Z · LW(p) · GW(p)

You can still upvote comments you like and ignore comments you don't like, no matter how much karma you have. You can still make comments disagreeing with other comments, which to me seems like a much better way of voicing your ideas than a silent downvote.

I believe that the karma cap on making posts (20 karma needed for a top level post) is partly to make sure members understand the vocabulary and concepts used on LessWrong before they start making posts, and partly to keep out spambots.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T20:06:04.959Z · LW(p) · GW(p)

You can still make comments disagreeing with other comments, which to me seems like a much better way of voicing your ideas than a silent downvote.

I think so too.

I believe that the karma cap on making posts (20 karma needed for a top level post) is partly to make sure members understand the vocabulary and concepts used on LessWrong before they start making posts,

I understand the purpose of it. I just think there are some problems with it.

comment by TheOtherDave · 2012-04-12T15:22:02.356Z · LW(p) · GW(p)

I'd forgotten that there was a karma-cap on downvotes, yes. (Also noted here.)
Thanks for the reminder.

Just for my edification: are you actually claiming that your ideas are being repressed, or was that implication meant as hyperbole?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T17:40:19.270Z · LW(p) · GW(p)

I feel that your use of "actually claiming" and "repression" here falls under the category of applause lights mentioned by thomblake.

My essay becomes significantly harder to find because somewhere between 11 and 27 people disliked it (there were some positive votes); what would you call that?

Replies from: TheOtherDave, thomblake
comment by TheOtherDave · 2012-04-12T17:48:06.087Z · LW(p) · GW(p)

My use of "repression" was quoting your use of it, which I consider appropriate, since I was referencing your claim.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T18:12:00.238Z · LW(p) · GW(p)

And your use of "actually claiming"?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T18:15:53.897Z · LW(p) · GW(p)

Was not quoting anyone's use of it.

Incidentally, I'm taking your subsequent rhetoric as confirmation that you did in fact intend the claim that your ideas are being repressed, since you don't seem likely to explicitly answer that question anytime soon.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T20:03:15.176Z · LW(p) · GW(p)

I do think the way negative karma works is a type of repression. Honestly, I don't see how you could think otherwise.

And your use of "actually claiming"?

Perhaps I was not clear enough. What I meant was that you saying "are you actually claiming" is an applause light. Do you disagree?

Replies from: TheOtherDave, TheOtherDave
comment by TheOtherDave · 2012-04-12T20:21:26.834Z · LW(p) · GW(p)

I do think the way negative karma works is a type of repression.

OK, thanks for clarifying that.

I infer further, from what you've said elsewhere, that it's a type of repression that works by making some users less able to make comments/posts than others, and some comments less visible to readers than others, and some posts less visible to readers than others. Is that correct?

Assuming it is, I infer you consider it a bad thing for that reason. Is that correct?

Assuming it is, I infer you would consider it a better thing if all comments/posts were equally visible to readers, no matter how many readers considered those comments/posts valueless or valuable. Is that correct?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T20:52:10.555Z · LW(p) · GW(p)

I am taking your subsequent rhetoric as confirmation that you do in fact agree that "are you actually claiming" is a type of applause-lights terminology.

I infer further, from what you've said elsewhere, that it's a type of repression that works by making some users less able to make comments/posts than others, and some comments less visible to readers than others, and some posts less visible to readers than others. Is that correct?

Yes.

Assuming it is, I infer you consider it a bad thing for that reason. Is that correct?

No, not exactly. As I told Swimmer963, in theory the karma system is a good idea. I do not think it would be better if all posts were equally visible; I think it would be better if there were a fairer system for downvoting ideas. In theory, the idea of monitoring for trolling is good, but in my opinion the LW karma system fails in practice.

First of all, do you believe that upvoting and downvoting serve the purpose of filtering for well-written, interesting ideas? I feel a large portion of voting is based on rhetoric.

If a person uses any terminology that exists outside of the LW community, or uses LW terminology in a different context, they are downvoted. Is this a valid reason to downvote someone? From what you and other LW members have said, I infer that the reason for downvoting in these cases is to create a stable foundation of terminology to limit misunderstanding by limiting the number of accepted definitions of a term. Is that correct?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T21:06:09.596Z · LW(p) · GW(p)

do you believe that upvoting and downvoting serve the purpose of filtering for well-written, interesting ideas?

No, not especially. I think it serves the purpose of letting readers filter for posts and comments that other LessWrong users consider valuable. Sometimes they consider stuff valuable because it's well-written and interesting, yes. Sometimes because it's funny. Sometimes because they agree with it. Sometimes because it's engagingly contrarian. Sometimes for other reasons.

I feel a large portion of voting is based on rhetoric.

I would certainly agree with this. I'm not sure what you intend to capture by the contrast between "well-written" and "rhetoric," though.

If a person uses any terminology that exists outside of the LW community, [..] they are downvoted.

That's not just false, it's downright bizarre.
I would agree, though, that sometimes terminology is introduced to discussions in ways that people find valueless, and they vote accordingly.

If a person [..] uses LW terminology in a different context, they are downvoted.

This is sometimes true, and sometimes false, depending (again) on whether the use is considered valuable or valueless.

Is this a valid reason to downvote someone?

Downvoting a comment/post because it does those things in a valueless way (and has no compensating value) is perfectly valid. Downvoting a comment/post because it does those things in a valuable way is not valid.

From what you and other LW members have said, I infer that the reason for downvoting in these cases is to create a stable foundation of terminology to limit misunderstanding by limiting the number of accepted definitions of a term. Is that correct?

No, not especially. I would agree that that's a fine thing, but I'd be really astonished if that were the reason for downvoting in any significant number of cases.

comment by TheOtherDave · 2012-04-12T20:13:05.818Z · LW(p) · GW(p)

I don't agree that it was an applause light specifically, but the distinction is relatively subtle and I'm uninterested in defending it, so we can agree it was an applause light for the sake of argument if that helps you make some broader point. More generally, I agree that it was a rhetorical tactic in a similar class as applause lights.

comment by thomblake · 2012-04-12T23:07:54.872Z · LW(p) · GW(p)

applause light

I don't think you've grokked that expression.

comment by Desrtopa · 2012-04-12T18:51:06.887Z · LW(p) · GW(p)

It's entirely possible to get karma by being critical of majority opinions here, if your points are well made. XiXiDu, for example, has 5777 karma at the time of this posting, and most of that has come from comments criticizing majority opinions here. Conversely, you can make a large number of comments that agree with majority positions here and not get any karma at all, if other members don't feel you're making any meaningful contribution.

Generally speaking, I find that simple assertions which run directly counter to mainstream positions here tend to be downvoted; comments that run counter to mainstream positions with explanation, but which are poorly argued and/or written, tend to be downvoted; and comments which run counter to mainstream positions and are moderately to well argued tend to be upvoted. Many people, myself included, will upvote comments whose conclusions they do not necessarily agree with, if they think it encourages useful discourse.

It's probably hard not to be offended if a comment you've put thought into starts getting downvoted, but rather than assuming that the community is trying to stomp down dissenting views, I suggest adding a comment, or editing your original, to ask people to explain their reasons for downvoting. At least some people will probably answer.

Replies from: Swimmer963, Random832
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-12T18:53:51.114Z · LW(p) · GW(p)

Many people, myself included, will upvote comments whose conclusions they do not necessarily agree with, if they think it encourages useful discourse.

I'm more likely to upvote comments I disagree with, partly because I think I have more to learn from those ideas and I want to encourage the poster to keep posting.

Replies from: Desrtopa
comment by Desrtopa · 2012-04-12T18:57:27.262Z · LW(p) · GW(p)

I have an additional impulse to upvote well argued comments I disagree with, but I think it's largely because I'm subconsciously trying to reinforce my own self perception as a fair and impartial person.

comment by Random832 · 2012-04-12T19:25:37.508Z · LW(p) · GW(p)

Maybe my tone was off, but the one time I directly asked for an explanation for a downvote, all I got was another downvote.

Replies from: DSimon
comment by DSimon · 2012-04-13T02:16:50.464Z · LW(p) · GW(p)

I would like to say for the record that that sucks, and that whoever did that second downvote should feel like a jerk.

comment by thomblake · 2012-04-12T15:27:28.386Z · LW(p) · GW(p)

the karma system is by no means democratic.

Is this a false overstatement, or is it merely hyperbole?

It uses 'voting', it correlates to some extent with the collective will of the community, and more than one person gets a say. It sounds much more democratic than a banhammer-wielding moderator and no karma, which is the default for web forums. If it seems "by no means" democratic to you, we definitely need to taboo "democratic".

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T17:43:08.297Z · LW(p) · GW(p)

It is a false overstatement. I agree with your point.

comment by wedrifid · 2012-04-12T15:59:29.627Z · LW(p) · GW(p)

I'm no expert on politics, but it sounds much more like American-style democracy to me.

Really? There isn't an efficient system for spending money to reliably buy votes in place. Doesn't sound all that typically "American". Or was there some other kind of message that "American-style democracy" is intended to convey as a diff over "democracy".

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-12T16:19:46.598Z · LW(p) · GW(p)

Mostly, the phrase "American-style democracy" was intended to preserve structural parallelism with "Soviet-Russia-style censorship".

But you're right about the lack of an efficient system for buying karma. Given how much attention we collectively pay to karma scores, I wonder if this is a potentially valuable fundraising mechanism for SI?

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-11T21:43:01.381Z · LW(p) · GW(p)

Depending on what scale you choose, you can say humans have had a pretty big impact (on the earth itself, comparable to a big extinction event from the past), or a very small impact (the earth is a very, very tiny fraction of everything that's out there). We've had a big impact on ourselves and our own future options, since right now we live on Earth and that's where we're doing all our messing-stuff-up, but I guess I think of reality as being a bit bigger than that.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T22:41:11.513Z · LW(p) · GW(p)

I think reality is bigger than the earth, and our impact on reality is questionable. BUT when I say our reality I am not talking about all of reality but the portion of it that defines our existence. Personally, I find realities on the other side of the universe to be worth only a small allotment of my resources, or humanity's for that matter. I do not think it is a flat-out waste of time, but I think that at this stage in human development the amount of consciousness, resources, and manpower that should be spent theorizing about or basing decisions on reality that exists beyond our own is minute.

EDIT: So I guess my response is that I agree, but I think that it would be pretty stupid to pick a scale bigger than the Milky Way galaxy, and that is being generous. Do you disagree?

comment by wedrifid · 2012-04-11T00:06:21.199Z · LW(p) · GW(p)

Do you think the territory exists without the map (the human)? I think A territory would exist without the map (the human), but it would be a different territory. The territory humans exist in is one that is defined by having a map. The map shapes the territory in a way that to remove it would remove humanity.

Specifically, it would remove a significant proportion of the frontal cortex and hippocampus from all the humans leaving whatever is left of the humans rather useless.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T00:10:31.450Z · LW(p) · GW(p)

If you were going to physically lobotomize it out of people, it would probably include most of the cerebral cortex, not just the frontal lobe. The visual cortex is probably the origin of language and symbolic function, but the auditory cortices play a huge role too.

comment by thomblake · 2012-04-10T20:25:41.846Z · LW(p) · GW(p)

When you see an oak tree and when you think about an oak tree it triggers the same series of neural impulses in your brain.

Correct.

For humans, there is never any "physical oak leaf" there is only ever constructs.

Incorrect.

To understand the distinction, note this passage from The Simple Truth:

Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.

'belief' corresponds to 'map'; 'reality' corresponds to 'territory'.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T20:30:09.688Z · LW(p) · GW(p)

But occasionally I believe strongly that something is going to happen, and then something else happens instead.

On what basis are we assuming that beliefs cannot surprise you or determine experimental results? Have you never been thinking about something and suddenly been overcome by some other thought or feeling? Or thought that some idea or line of thinking would take you one place, only to end up somewhere radically different, which in turn demands a new hypothesis?

Replies from: thomblake
comment by thomblake · 2012-04-10T20:34:40.308Z · LW(p) · GW(p)

That's just moving the distinction up one meta-level, not collapsing it. You had beliefs about your beliefs, and they turned out to be wrong as compared to the reality of your beliefs. Your map is also in the territory, and you have a representation of your map on your map. Recurse as necessary.

EDIT: There's a good illustration in A Sketch of an Anti-Realist Metaethics most of the way down the article. We really should have that on the wiki or something.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T20:46:22.793Z · LW(p) · GW(p)

When reality surprises you, it is not always the case that it has defied a hypothesis; often it reveals some new sliver of experience so unexpected that it demands the creation of a new hypothesis. I thought that the point of Swimmer's comment was to suggest that EDIT: in reality we undergo this type of surprise, while in our beliefs we do not. I continue to suggest that beliefs can also create the above-mentioned surprises, so what is the distinction between the two?

Replies from: thomblake, Swimmer963
comment by thomblake · 2012-04-10T21:03:37.319Z · LW(p) · GW(p)

I thought that the point of Swimmer's comment was to suggest that reality undergoes this type of surprise, while beliefs do not.

That would be stupid. Beliefs are in reality.

Still, I burn down oak trees by changing oak trees, not by changing my beliefs about oak trees.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T21:12:49.213Z · LW(p) · GW(p)

But changing your beliefs about oak trees can lead to you either burning them down or preserving them, right?

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T21:19:43.740Z · LW(p) · GW(p)

Is your point that every human action happens in belief form before it happens in "reality"? Of course. But when it happens in belief form (I decide to burn down an oak tree), it hasn't necessarily happened yet in reality. I might still get hit by a car on the way to the forest and never end up carrying out my plan, and the oak tree wouldn't burn.

Replies from: TheOtherDave, HungryTurtle
comment by TheOtherDave · 2012-04-10T23:11:12.637Z · LW(p) · GW(p)

...and, conversely, I might burn down an oak tree without ever deciding to. Indeed, I might even watch the tree burning in consternation, never discovering that I was responsible.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T00:18:02.611Z · LW(p) · GW(p)

Ok, this is going to be exceedingly difficult to explain...

You say

I might burn down an oak tree without ever deciding to. Indeed, I might even watch the tree burning in consternation, never discovering that I was responsible.

In some sense, you burning down the tree still is the byproduct of your beliefs. Your beliefs create actions and limit actions. Any voluntary action stems from either a belief in action or a belief in inaction. Because of some beliefs you have, you were careless in your handling of fire or some other flammable material.

Perhaps a better example would be the Christian whose strong beliefs lead him or her never to discover that he or she was responsible for extreme denial and avoidance of the truth. He or she avoided the truth indirectly because of his or her strong beliefs, not out of conscious volition.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T00:31:04.026Z · LW(p) · GW(p)

I certainly agree that my burning down a tree (intentionally or otherwise) is the byproduct of my mental states, which create and limit my actions. Which mental states it makes sense to call "beliefs", with all of the connotations of that, is a trickier question, and not one I think it's very useful for us to explore without a lot of groundwork being laid first.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T12:36:32.020Z · LW(p) · GW(p)

Well said. Would you want to try?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T12:54:06.284Z · LW(p) · GW(p)

Not really... I've hit my quota for conversations that span metaphysical chasms for the moment.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T14:31:55.652Z · LW(p) · GW(p)

Fair enough

comment by HungryTurtle · 2012-04-11T00:13:21.592Z · LW(p) · GW(p)

Not that EVERY human action happens in belief form first, but that the transition between belief and action is a two way road. Beliefs lead to actions, actions lead to beliefs.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T21:09:47.279Z · LW(p) · GW(p)

I thought that the point of Swimmer's comment was to suggest that reality undergoes this type of surprise, while beliefs do not.

Reality doesn't undergo the surprise. It "knew all along." (Quantum mechanics was a pretty big surprise, but it was true even back in Newton's day...even back in Paleolithic days.) Beliefs undergo changes in response to a 'surprise.' But that surprise doesn't happen spontaneously...it happens because new information entered the belief system. Because someone had their eyes open. Reality causes the surprise.

If, in some weird hypothetical world, all physics research had been banned in 1900, no one would've ever kept investigating the surprising results of the black-body radiation problem or the photoelectric effect. No human would've ever written the equations down. Newton would be the final word on everything, and no one would be surprised. But quantum mechanics would still be true. A hundred years later, if the anti-physics laws were reversed, someone might be surprised then, and have to create a new hypothesis.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T21:11:14.518Z · LW(p) · GW(p)

Sorry that was a typo, I meant to say that in reality we undergo this type of surprise, but in our beliefs we do not.

comment by TimS · 2012-04-10T17:56:52.410Z · LW(p) · GW(p)

Your position seems to be that "hard" science is impossible. I'm a big fan of Kuhn and Feyerabend, but it is possible to make accurate predictions about the future state of physical objects. If the map (my beliefs about physical objects) and the territory (physical objects) were indistinguishable, then there's no reason to ever expect accurate predictions. Given the accuracy of predictions, the overwhelmingly likely conclusion is that there is something outside and independent of our beliefs about the world.

I think a gap in our communication is the type of map we visualize in our use of this analogy. When we say map, what type of map are you envisioning? This is just a guess, but to me it seems like you are imagining a piece of paper with topography, landmarks, and other various symbols marked out on it. It is from this conception of a map that you make the claim "the territory is still there." I imagine you see the individual of our analogy with their nose pressed into this type of parchment, moving solely based on its markings and symbols. For you this is a bad way of navigating, because the individual is ignoring the reality that is divorced from the parchment.

This may help articulate the point of the map/territory metaphor. In short, the only completely accurate depiction of California is . . . the physical object California. But people tend to mistake maps of California for the thing itself. When they find an error in the map, they think something is wrong with California, not their image of California.

Replies from: HungryTurtle, Bugmaster
comment by HungryTurtle · 2012-04-10T19:26:55.218Z · LW(p) · GW(p)

Your position seems to be that "hard" science is impossible.

As an oversimplification, yes.

it is possible to make accurate predictions about the future state of physical objects.

I agree that it is possible to make accurate predictions about the future state of physical objects.

If the map (my beliefs about physical objects) and the territory (physical objects) were indistinguishable, then there's no reason to ever expect accurate predictions.

I don't follow your reasoning here. I see my beliefs as both derived from and directly impacting the "territory" they exist within. I don't see how this denies the possibility of accurate predictions.

This may help articulate the point of the map/territory metaphor. In short, the only completely accurate depiction of California is . . . the physical object California. But people tend to mistake maps of California for the thing itself. When they find an error in the map, they think something is wrong with California, not their image of California.

I am familiar with this work, but the map/territory metaphor depicted here is inadequate for the purpose of what I am trying to convey. I disagree with the core ontological assumption being made here, namely a divide between the map and the territory.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T19:49:33.298Z · LW(p) · GW(p)

I disagree with the core ontological assumption being made here, namely a divide between the map and the territory.

I'm not sure if that metaphor is designed to be a deep philosophical truth so much as a way to remind us that we (humans) are not perfect and make mistakes and are ignorant about stuff, and that this is bad, and that the only way to fix is it to investigate the world (territory) to improve our understanding (map).

Do you disagree that studying the world is necessary to improve the state of human knowledge? Or do you disagree that we should improve the state of knowledge?

Your position seems to be that "hard" science is impossible.

As an oversimplification, yes.

But what about all the lovely benefits of hard science? The fact that now we have computers (transistors are only possible due to the discovery of quantum mechanics as a model of reality), and airplanes (man that took centuries to happen), and intravenous antibiotics? What are all these things due to, if not hard science?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-10T20:25:37.384Z · LW(p) · GW(p)

Do you disagree that studying the world is necessary to improve the state of human knowledge? Or do you disagree that we should improve the state of knowledge?

I definitely think learning and studying are important, but I guess I would disagree as to what type of knowledge it is we are trying to improve on. In the last couple centuries there has been a segregation of aesthetic and technical knowledge, and I think this is a mistake. In my opinion, the endless pursuit of technical knowledge and efficiency is not beneficial.

But what about all the lovely benefits of hard science? The fact that now we have computers (transistors are only possible due to the discovery of quantum mechanics as a model of reality), and airplanes (man that took centuries to happen), and intravenous antibiotics? What are all these things due to, if not hard science?

I recommend Thomas Kuhn's The Structure of Scientific Revolutions. He suggests revolutions in scientific knowledge are by no means the product of scientific reasoning. I definitely think we are capable of transforming reality and learning more about it; I just don't think this process of transformation is in itself beneficial.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-10T21:03:22.310Z · LW(p) · GW(p)

In my opinion, the endless pursuit of technical knowledge and efficiency is not beneficial.

I definitely think we are capable of transforming reality and learning more about it, I just don't think this process of transformation is in itself beneficial.

I'm starting to get a feeling that our disagreement is more ideological than factual in nature.

I'm reading between the lines a lot here, but I'm getting the feeling that you think that although: a) you can look at the world from a reductionist, the-territory-is-out-there-to-study way, and b) you can make scientific progress that way, BUT c) scientific progress isn't always desirable, THUS d) if you use your own world-view (the oak leaf in your head is the only reality), then e) we can focus more on developing aesthetic knowledge, which is desirable.

What would you say is an example of aesthetic knowledge? How would you describe a world that has too much tech knowledge compared to aesthetic knowledge? How would you describe a world that has a healthy balance of both?

Side note:

I'm a nursing student. A lot of what we learn about is, I think, what you would call 'aesthetic knowledge'. I'm not supposed to care very much about why a patient's cancer does or doesn't respond to treatment. That's up to the medical specialists who actually know something about cancer cells and how they grow and metabolize. I'm supposed to use caring and my therapeutic presence to provide culturally sensitive support, provide for my patient's self-care needs, use therapeutic communication, etc. (You may detect a slight note of sarcasm. I don't like classes that use words like 'therapeutic communication' or 'culturally sensitive' and then don't give us any examples or teach us how.)

And yeah, a lot of medical doctors are kind of tactless and not very caring, even though they're right about the diagnosis, and that's not very nice for patients. But it's not the technological advance that causes their callousness; it's the fact that some human beings don't know how to be nice to others. Society needs to work on that. But that doesn't mean society shouldn't work on a better cure for cancer because it will make doctors arrogant.

There are lots of consequentialist reasons to think twice about rapid progress. Like: we don't always understand what we're doing until we've done it and 50 years later there's a huge hole in the ozone layer. Is that enough for me to unilaterally oppose progress? No. I was a breech baby–I'd have died at birth if I was born 100 years ago–and I kinda like being alive.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T14:29:17.412Z · LW(p) · GW(p)

Society needs to work on that.... Is that enough for me to unilaterally oppose progress? No.

I feel like you read my words and somewhere in the process they get distorted to the extreme. Never do I say "unilaterally halt progress". In fact, I am very careful to express that I am not advocating the halt of technical progress, but rather a moderation of it. It is important to continue to develop new technology, but it is also important to develop capacities for kindness, limitation, and compassion. Like you say, there are big consequences for unrelenting innovation.

I was brought to this community by a friend. This friend and I have discussions similar to this one. In one particular discussion, after he finally understood my position, his response was very similar to yours: "so what, do we just stop trying to be better?" Why is it that any talk of limitations on science or technology is misinterpreted as "unilateral opposition" or "an end to progress"? Why can't moderation be applied to societal development in the same way it can be to eating, fucking, fighting, and all other paradigms of action?

It puzzles me. The impression it gives, is that there is a teleological faith (and I use this word, because it appears to me as religious) in the unconditional benefit of further domination and manipulation of our environment. I offer the following comparison:

In traditional society there can be no flaw in ritual; if a desired outcome is not reached, it is not because the ritual is flawed, but because it was performed incorrectly.

In current society there can be no flaw in technology; if a desired outcome is not reached or produces unexpected results, it is not because technology is flawed, but because it must be improved on.

What do you think?

I was a breech baby–I'd have died at birth if I was born 100 years ago–and I kinda like being alive.

So the idea of progress is something you are personally attached to. Not to sound cold, but the fact that you have benefited from a single aspect of technological development does not make the current rate of development any less dangerous for society as a whole. There are people who benefited from the housing bubble of the past decade, but that does not change the fact that an enormous number of people did not. This is a bad analogy, because the benefits of technological development are much more widespread than the benefits of shady banking practices; still, there is some relation, in that a large portion of tech development benefits the elite, not the masses. And I would argue that the existential risk is at this point greater than the benefits.

comment by Bugmaster · 2012-04-10T18:51:45.737Z · LW(p) · GW(p)

Given the accuracy of predictions, the overwhelmingly likely conclusion is that there is something outside and independent of our beliefs about the world.

I was under the impression that, according to you, this "something" is completely inaccessible to us, as evidenced by the incommensurability of our models. But maybe I'm wrong.

Replies from: TimS
comment by TimS · 2012-04-11T01:01:15.442Z · LW(p) · GW(p)

Maybe with some very technical definition of "inaccessible." We know enough about what's out there to be able to make predictions, after all.

I do think that many scientists assert that certain facts are in the territory when they are actually in the map. Over and above the common errors that non-scientists make about the map/territory distinction.

Replies from: Bugmaster
comment by Bugmaster · 2012-04-11T03:03:07.314Z · LW(p) · GW(p)

We know enough about what's out there to be able to make predictions, after all.

As far as I understand (and I could be wrong), you believe that it's possible to construct two different models of "what's out there", both of which will yield good predictions, but which will be incommensurate. If this is true, how can you then say that we "know enough" about what's out there? Sure, we may have a model, but chances are that there's another model out there which yields predictions that are just as accurate, and yet has nothing whatsoever to do with the first model; thus, we're no closer to understanding what's actually real than we were before. That's not "knowledge", as I understand it, but perhaps you meant something else?

comment by [deleted] · 2012-03-11T06:33:57.703Z · LW(p) · GW(p)

I'd say it's not really a paradox, though.

To ABrooks: In a sense, rationality is a matter of degree; it's not black-or-white.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T17:28:10.475Z · LW(p) · GW(p)

Yobi, saying rationality is a matter of degrees is basically what a paradox does. Paradoxes deny linear winning. A paradox is the assertion that there can be multiple equally valid truths to a situation.

Replies from: Arran_Stirton
comment by Arran_Stirton · 2012-04-07T06:08:19.199Z · LW(p) · GW(p)

no, no, No, NO, NO!

That is not what a paradox does. More importantly, saying rationality is a matter of degrees is nothing like saying that there are multiple equally valid truths to a situation.

It's called the Fallacy of Grey, read about it.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-07T15:12:37.105Z · LW(p) · GW(p)

To Mr. Arrran and all others,

I want to talk to you, but I think there is a significant hurdle that we must overcome for this to be productive. Perhaps you do not want to talk to me, and if that is the case fine, but I would like to talk to you. The problem we are having is in the debating of definitions. There are multiple paradigms of what a paradox is. Instead of debating which is right, it is more fruitful to: 1.) accept that I am talking about something that is different from what you are talking about, 2.) try to understand it. And I will try to do the same.

I understand the Fallacy of Grey, and it does not apply at all to what I was trying to convey. Your understanding of the terms I use leads you to that conclusion, but it is false. You can fight for the definitions you have been indoctrinated in, and in doing so fight to label me as wrong, or we can have a real dialogue.

My definition of paradox stems largely from the work of Max Weber, who, if I understand the history, develops the ideas of Kant, which are in turn reconstructed from ideas of Aristotle. I am not 100% sure what paradigm of paradox you ascribe to, but if it is the same as what has been argued elsewhere in this series of posts, it seems to be dialetheism. Honestly, I really do not know where this train of thought stems from, or the cognitive implications of it. My understanding of Weber's work places the word "paradox" within the paradigm of antinomy. The primary purpose of this paradigm of paradoxes is to highlight the fallacy of linguistic construction in its fragmentation of reality through categorization.

I assumed that when you said "rationality is a matter of degree" the degrees you were talking about were linear rationality and non-linear rationality, ideas tied to the philosophical tradition of Richard Rorty, Dewey, and James. I see now that is not what you meant.

Something I hate about this blog is that people will down vote something without really understanding it, instead of assuming ignorance and trying to understand it. I come from a very different academic background than you, and I am guessing the majority of members of this community. I do not know much mathematics, I cannot program, and I am not versed in the word of Eliezer.

Ironically, my notion of rationality as "winning" was an attempt to meet your community lexicon and adapt it to the ideas of my own intellectual communities. This attempt failed. I am not sure if the failure is inherent in the method, or in this community's methodology of communication.

So, I guess if you want to hear about what paradox means within my field of study, instead of just labeling me as ignorant then I would love to talk about it.

Replies from: MarkusRamikin, None, Arran_Stirton, Swimmer963
comment by MarkusRamikin · 2012-04-09T13:48:29.205Z · LW(p) · GW(p)

I wonder what this post would look like without so many nouns, author names and so on.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-09T19:09:00.752Z · LW(p) · GW(p)

Probably as lacking in any real contribution as your own.

comment by [deleted] · 2012-04-09T11:40:33.964Z · LW(p) · GW(p)

You are doing a Humpty Dumpty ("When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”)

You need to recognize that you are using 'paradox' in a most unconventional manner — dare I say in a 'wrong' manner. If you are going to be using language in unconventional ways, the burden is on you to make yourself clear.

You are also arguing about definitions — an unproductive pastime. Why don't we taboo 'paradox' for the time being.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-09T12:31:32.965Z · LW(p) · GW(p)

You don't think half the terminology you use on this site is used in a "most unconventional manner" relative to the majority of people? The fact that you had to link me to an essay to explain your use of the word "taboo" is a gaping contradiction of what you are arguing.

The production of ideas beyond a certain level is pretty much the redefining of words. My use of the word paradox is unconventional, but no more than your own unconventional use of words. It represents a different sphere of ideas, but it is not wrong. If you re-read what I wrote above you will see I said that debating definitions is a waste of time, and that it is more fruitful to accept the plurality of words and try to acquire a new understanding of paradox, instead of fighting to hold onto one static interpretation of it.

Replies from: wedrifid
comment by wedrifid · 2012-04-09T12:53:56.510Z · LW(p) · GW(p)

The production of ideas beyond a certain level is pretty much the redefining of words. My use of the word paradox is unconventional, but no more than your own unconventional use of words.

In this instance it doesn't matter whether you call it 'paradox'. It's either wrong or meaningless to present the following as a deep insight:

humans are also rational by nature.

Humans are irrational by nature

comment by Arran_Stirton · 2012-04-07T19:11:53.571Z · LW(p) · GW(p)

Of course I want to talk to you; debates are always interesting.

If you assert this:

saying rationality is a matter of degrees is basically what a paradox does.

and then this:

A paradox is the assertion that there can be multiple equally valid truth's to a situation.

That sounds a lot like the Fallacy of Grey, even if you meant to say something different. Using the word paradox implies that the "multiple equally valid truths" are contradictory in nature; if so, you'd end up with the Fallacy of Grey through the Principle of Explosion.

But regardless, you can't just say "It's a paradox." and leave it at that. Feeling that it's a paradox, no matter what paradigm you're using, shows that you don't have the actual answer. Take antinomy, for example, specifically Kant's second antinomy concerning atomism. It's not actually a paradox; it was just that at the time we had an incomplete theory of how the universe works. Now we know that the universe isn't constructed of individual particles.

You might find this and this useful further reading.

I'm interested in what you see to be the distinction between linear and non-linear rationality, I'm unfamiliar with applying the concept of linearity to rationality.

Something to keep in mind is that the "rationality" you see here is very different from traditional rationality, although we use the same name. In fact, a lot of what you'll come across here won't be found anywhere else, which is why reading a good deal of the sequences is so important. Reading HPMoR is fairly equivalent, though.

I haven't down-voted you, simply because I can see where you're coming from; you might be wrong or miscommunicating in certain respects, but you're not being stupid.

Part of the problem is that there's a huge inferential gap between you and most of the people here; as you say, you don't know much mathematics and you're not versed in the word of Eliezer. Similarly, the folks here have not (necessarily) studied the social sciences intently, nor are they (necessarily) versed in the words of Weber, Rorty, Dewey, or Kant.

Winning, in the way we use it, means taking the best possible course of action. It's distinctly different from the notion of winning a game. If losing a game is the best thing you can do, then that is winning. The reason the attempt failed is that you didn't understand what we meant by winning, and proceeded to say something that was untrue under the definition used on LW.

So yes, I'd like to hear about what a paradox means in your field of study. However you must realise that if you haven't read the sequences, and you don't know the math, there is a lot this community knows that you are ignorant of. By the same token, there is a lot that you know that the community is ignorant of. Neither thing is a bad thing, but that doesn't mean you shouldn't try to remedy it.

Importantly, don't try and mix your knowledge and LW-rationality until you fully understand LW-rationality. No offence meant.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-09T13:59:12.557Z · LW(p) · GW(p)

Hey, if it is OK I am going to respond to your comment in pieces. I will start with this one. I say

A paradox is the assertion that there can be multiple equally valid truths to a situation.

To which you respond

That sounds a lot like the Fallacy of Grey, even if you meant to say something different. Using the word paradox implies that the "multiple equally valid truths" are contradictory in nature, if so you'd end up with the Fallacy of Grey through the Principle of Explosion.

The reason you see the principle of explosion in my statement is that you are assuming the paradox is dialetheistic, meaning that the multiple equally valid truths I am talking about exist within a single binary. I am not saying Π and -Π are both true. Rather, I am suggesting that to break the matrix [Π, Ρ, Σ, Τ, Υ] into binaries (Π, -Π), (Ρ, -Ρ), (Σ, -Σ), (Τ, -Τ), and (Υ, -Υ) leads to an incompatibility of measurement, and thus multiple equally valid truths.

You say that antinomy is an outdated Kantian concept. You are correct. It is precisely the fact that "we now know the universe isn't constructed of individual particles" that created antinomy as a type of paradox. Antinomy is a linguistic rather than mathematical paradox. The function of language is to break reality down into schemas of categorization; this process irrevocably takes [Π, Ρ, Σ, Τ, Υ] and transforms it into (Π, -Π), (Ρ, -Ρ), (Σ, -Σ), (Τ, -Τ), and (Υ, -Υ). As you have said, reality is not constructed of individual particles, but human interaction with reality cannot avoid superimposing individual particles upon it. Because of this, there are instances in the use of language where discourse creates a distinction between elements that does not exist in reality. If we do not acknowledge the potential for such linguistic fallacies, contradiction and competition between these elements cannot be avoided. This is the paradox of antinomy. A talented individual could rephrase this into the dialetheism "language is both true and not true," but such a statement falls into the very fallacy of language that antinomy as paradox is attempting to warn against, ultimately defeating the purpose of even making the statement.

Not everything can be broken into a tidy maxim or brief summary. Being primarily Bayesians, I am sure you can appreciate that the implementation and digestion of some ideas have no shortcuts. They do not exist as an individual idea, but rather as a monstrous matrix in themselves. Contradicting my own assertion, I will attempt to create my own short maxim to aid in the process of digestion: language is an inadequate tool for creating reality, but it is the primary tool for creating humans.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-09T12:38:05.786Z · LW(p) · GW(p)

I am not 100% sure what paradigm of paradox you ascribe to, but if it is the same as what has been argued elsewhere in this series of posts, it seems to be dialetheism, Honestly, I really do not know where this train of thought stems from, or the cognitive implications of it. My understanding of Weber's work places the word "paradox" within the paradigm of antinomy. The primary purpose of this paradigm of paradoxes is to highlight the fallacy of linguistic construction in its fragmentation of reality through categorization.

Wow. I don't think I understood any of that. Out of curiosity, what is your field of study? I think there is a part of me that has an innate allergy to any text laden with words like "dialetheism" and "antinomy", especially if they're not defined within the text. Then again, that attitude is exactly what you're criticizing in this comment, so maybe I'll Wikipedia those terms and see if I can hammer out a bit of an understanding.

I do have to admit that I'm in a field that uses long, complex terms of its own. These terms convey very specific information to someone else with the same education as me, and hardly any useful information to someone without that education. But I wouldn't get into a conversation with a stranger and use terms like 'hyperbilirubinemia' or 'myelomeningocele' without explaining them first.

Replies from: HungryTurtle, HungryTurtle
comment by HungryTurtle · 2012-04-10T13:28:23.217Z · LW(p) · GW(p)

I am part of an interdisciplinary field that combines sociology, cultural anthropology, linguistic anthropology, various bits of psychology, ecology, and philosophy.

But I wouldn't get into a conversation with a stranger and use terms like 'hyperbilirubinemia' or 'myelomeningocele' without explaining them first.

I wouldn't either, but when someone tells you that you don't know what the word you are using means, you expect them to be pretty well informed on the subject.

comment by HungryTurtle · 2012-04-09T19:08:01.900Z · LW(p) · GW(p)

Coming soon!

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-03-10T23:13:05.999Z · LW(p) · GW(p)

Symbols are irrational. If symbols are irrational, and humans are unable to escape symbols, then humans are fundamentally irrational.

In what sense do you mean that symbols are irrational? Is it because they only imperfectly represent the world that is "really out there?" Is there a better option for humans/hypothetical other-minds to use instead of symbols?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T17:26:11.802Z · LW(p) · GW(p)

Symbols by definition are analogies to reality. Analogies are not rationally based; they are rhetorically based. Rhetoric is by no means rational in the sense that this community uses the word. Therefore language is by definition irrational.

Is there a better option for humans/hypothetical other-minds to use instead of symbols?

No, that is my point. Humans have no other way to relate to reality. The idea of a better option is a fiction of essentialist philosophy.

comment by Dustin · 2012-03-07T23:55:22.956Z · LW(p) · GW(p)

I don't know if this is what you were thinking of, but here is what lukeprog wrote about Spock.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-03-09T12:38:46.440Z · LW(p) · GW(p)

I believe this is what he's thinking of.

Replies from: TimS
comment by TimS · 2012-03-09T13:29:47.226Z · LW(p) · GW(p)

What kind of tragic fool gives four significant digits for a figure that is off by two orders of magnitude?

That's it. Thanks.

comment by [deleted] · 2012-03-08T17:32:50.481Z · LW(p) · GW(p)

You've confused goal-winning (LW sense) with social-winning.

Rationality is the optimal tool for goal-winning, which is always what is desirable. This relation is established by definition, so don't bother criticizing it.

You can show that our current understanding of rationality or winning does not live up to the definition, but that is not a criticism of the definition. Usually when people debate the above definition, they are taking it to be an empirical claim about Spock or some specific goal, which is not how we mean it.

EDIT: Also, "air on the side". It's "err" as in "error". Read Orwell's "Politics and the English Language".

Replies from: faul_sname, HungryTurtle
comment by faul_sname · 2012-03-08T23:15:36.278Z · LW(p) · GW(p)

This relation is established by definition, so don't bother criticizing it.

This phrase worries me.

Replies from: wedrifid, HungryTurtle
comment by wedrifid · 2012-03-09T03:46:53.239Z · LW(p) · GW(p)

I hope it means "If you want to criticize this relationship you must focus your criticism on the definition that establishes it".

Replies from: faul_sname
comment by faul_sname · 2012-03-09T07:14:26.930Z · LW(p) · GW(p)

Yes, but consider that social winning is quite often entangled closely with goal winning, and that the goal sometimes is social winning. To paraphrase a fairly important post, you only argue a point by definition when it's not true any other way.

Replies from: Nectanebo
comment by Nectanebo · 2012-03-09T14:20:14.237Z · LW(p) · GW(p)

I agree with you that that particular sentence could have been phrased better.

But nyan_sandwich pointed out the key point, that turtle was arguing based upon a specific definition of rationality that did not mean the same thing that LW refers to when they talk about rationality. Therefore when she said the words "by definition" in this case, she was trying to make clear that arguing about it would therefore be arguing about the definition of the word, and not anything genuinely substantial.

Therefore it seems very unlikely that sandwich was falling into the common problem the article you linked to is referring to: saying that (a thing) is (another thing) by definition when actually the definition of the thing does not call for such a statement to be the case at all.

Yes, the wording made it seem like it may have been the case that she was falling into that trap; however, I perceived that what she was actually doing was trying to inform HungryTurtle that he was talking about a fairly different concept to what LW talks about, even though we use the same word (a phenomenon that is explained well in that sequence).

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T15:00:00.739Z · LW(p) · GW(p)

Nectanebo,

Perhaps you can explain to me how the LW definition differs from the one I provide, because I pulled my definition from this site's terminology specifically to avoid this issue. I am willing to accept that there is a problem in my wording of this definition, but I respectfully hold the position that we are talking about the same rationality.

In my opinion, the problem is not with my concept of rationality, but that I am attacking, even if it is a mild attack, an idea that is held in the highest regard among this community. It is the dissonance of my idea that leads nyan_sandwich to see fault with it, not the idea itself. I hope we can talk this out and see what happens.

Replies from: Nectanebo
comment by Nectanebo · 2012-04-07T04:13:02.864Z · LW(p) · GW(p)

I can think of two situations where increased accuracy is detrimental: 1.) In maintaining moderation; 2.) In maintaining respectful social relations.

increased accuracy is not rationality

Think about it this way: if you want increased accuracy, then rationality is the best way to increase accuracy. If you want to maintain social relations, then the rational choice is the choice that optimally maintains social relations.

I think LessWrong considers rationality as the art of finding the best way of achieving your goals, whatever they may be. Therefore if you think that being rational is not necessarily the best option in some cases, we are not talking about the same concept any longer, because when you attack rationality in this way, you are not attacking the same rationality that people on LessWrong refer to.

For example, it is silly for people to attempt to increase accuracy to the detriment of their social relationships. This is irrational if you want to maintain your social relationships, based on how LessWrong tends to use the word.

The points I make have been covered fairly well by many others who have replied in this thread. If you want to know more about what we may have been trying to say, that sequence about words also covers it in detail; I personally found that particular sequence to be one of the best and most useful, and it is especially relevant to the discussion at hand.

Replies from: adamisom, HungryTurtle
comment by adamisom · 2012-04-21T01:32:07.051Z · LW(p) · GW(p)

Anything can be included in rationality after you realize it needs to be.

Or: You can always define your utility function to include everything relevant, but in real-life estimations of utility, some things just don't occur to us (at least until later). So sure, increased accuracy [to social detriment] is not rationality, once you realize it. But you need to realize it. I think HungryTurtle is helping us realize it.

So I think the real question is: is your current model of rationality, the way you think about it right now and actually (hopefully) use it, suboptimal?

comment by HungryTurtle · 2012-04-11T14:59:20.084Z · LW(p) · GW(p)

I think LessWrong considers rationality as the art of finding the best way of achieving your goals, whatever they may be.

Do you ever think having goals is detrimental?

Replies from: Nectanebo, army1987
comment by Nectanebo · 2012-04-12T12:15:10.542Z · LW(p) · GW(p)

Sure, some goals may be detrimental to various things.

But surely people have the goal of not wanting detrimental goals, if the detriment is to things they care about.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T12:36:02.692Z · LW(p) · GW(p)

Yes! So this idea is the core of my essay.

I suggest that the individual who has the goal of not wanting detrimental goals acknowledges the following:

1.) Goal-orientations (meaning the desired state of being that drives one's goals at a particular time) are dynamic.

2.) The implementation of genuine rational methodology to a goal-orientation consumes a huge amount of the individual/group's resources.

If the individual has the goal of not having detrimental goals, and if they accept that goal-orientations are dynamic and that a genuinely rational methodology consumes a huge amount of resources, then such an individual would rationally desire a system for regulating when to implement rational methodology and when to abandon it, given the potential triviality of immediate goals.

Because the individual is choosing to abandon rationality in the short-term, I label this as being rationally irrational.
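
One way to make this regulation rule concrete is a toy sketch in code; the function, numbers, and threshold below are hypothetical illustrations of the idea, not anything specified in the thread:

```python
# A toy gate on deliberation: apply full rational methodology only when a
# goal's priority justifies the cost of deliberating. All names and numbers
# here are illustrative assumptions.

def pursue(goal_priority: float, deliberation_cost: float,
           threshold: float = 1.0) -> str:
    """Decide whether a goal is worth full 'rational methodology'."""
    if goal_priority / deliberation_cost >= threshold:
        return "deliberate carefully"  # high-priority goal: worth the resources
    return "act on habit"              # trivial goal: the 'rationally irrational' shortcut

# A trivial party-conversation goal vs. a major career decision:
print(pursue(goal_priority=0.2, deliberation_cost=1.0))  # act on habit
print(pursue(goal_priority=9.0, deliberation_cost=1.0))  # deliberate carefully
```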

Replies from: TimS, Nectanebo
comment by TimS · 2012-04-12T14:36:15.630Z · LW(p) · GW(p)

Let's play this out with an example.

Imagine I have a goal of running a marathon. To do that, I run every day to increase my endurance. One day, I trip and fall, twisting my ankle. My doctor tells me that if I run on the ankle, I will cause myself permanent injury. Using my powers of rationality, I decide to stop running until my ankle has healed, to avoid the permanent injury that would prevent me from achieving my goal of running a marathon.

Is my decision to stop training for the marathon, which inevitably moves my goal of running in a marathon further away, "rationally irrational"? Or is there something wrong with my example?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T16:26:56.299Z · LW(p) · GW(p)

No, your example is fine, but I would say it is the most elementary use of this idea. When faced with a serious threat to health it is relatively easy and obvious to realign goal-orientation. It is harder to make such realignments prior to facing serious damage or threats. In your example, a more sophisticated application of this idea would theoretically remove the possibility of twisting an ankle during training, excluding any extreme circumstances.

I imagine this might raise a lot of questions so let me explain a little more.

Training is not serious. The purpose of training is to prepare for a race, but that purpose is subsumed under the larger purpose of personal health, happiness, and survival. Therefore, any training one does should always be treated as trivial in light of these overarching goals. With this mindset, I do not see how a runner could sprain their ankle, barring extreme circumstances.

A real runner, taking these overarching values into account, would:

  • Prior to running, build knowledge about safe running style and practices
  • During running, be primarily concerned with safety and developing positive running habits rather than meeting some short-term goal

To me, someone who has integrated my idea would never prioritize a race to the point that they risk spraining their ankle in training. Of course there are bizarre situations that are hard or impossible to plan for, but tripping and twisting your ankle does not seem to be one of them.

comment by Nectanebo · 2012-04-12T14:23:19.118Z · LW(p) · GW(p)

That kind of falls apart, because it's not being irrational if it's rational not to consume too much of your resources on "rational methodology". I guess "rationally irrational" is just a bad label, because you're not abandoning rationality; you're doing the rational thing by choosing not to use too much of your resources when it's better not to. So at no point are you doing anything that could be considered irrational.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T15:29:33.824Z · LW(p) · GW(p)

Let's say I am playing soccer. I have decided that no goal-orientation within my soccer game is ultimately worth the expenditure of resources beyond X amount. Because of this I have tuned out my rational calculating of how best to achieve a social, personal, or game-related victory. To anyone who has not appraised soccer-related goal-orientations in this way, my actions would appear irrational within the game. Do you see how this could be considered irrational?

I definitely understand how this idea can also be understood as still rational; it is because of that that I called it 'rationally irrational,' implying the actor never truly abandons rationality. The reason I chose to word it this way instead of finding some other way to label it as meta-rationality is rhetorical. This community targets a relatively small demographic of thinkers: individuals who have both the capacity and the work history to achieve upper levels of rationality. Perhaps this demographic is the majority within this blog, but I thought it was highly possible that there existed Less Wrong members who were not quite at that level, and that the idea would be more symbolically appealing if it suggested an element of necessary irrationality within the rationalist's paradigm. Maybe this was a poor choice, but it was what I chose to do.

Replies from: thomblake, Nectanebo, TimS
comment by thomblake · 2012-04-12T15:48:42.774Z · LW(p) · GW(p)

That is a good assessment. Saying something false constitutes exceptionally bad rhetoric here.

Replies from: HungryTurtle, wedrifid
comment by HungryTurtle · 2012-04-12T16:05:16.099Z · LW(p) · GW(p)

I still don't think what I said is false; it is a rhetorical choice. Saying it is rational irrationality still makes sense; it just hits some buzzwords for this group and is less appealing than choosing some other form of label.

Replies from: thomblake
comment by thomblake · 2012-04-12T16:13:27.167Z · LW(p) · GW(p)

Saying it is rational irrationality still makes sense

No, it doesn't. It's a blatant contradiction, which is by definition false.

Also:

Do you see how this could be considered irrational?

Yes, someone could consider it irrational, and that person would be wrong.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T19:57:25.354Z · LW(p) · GW(p)

No, it doesn't. It's a blatant contradiction, which is by definition false.

Rational irrationality is talking about rationality at two different levels of analysis. As a result of being rational at the level of goal prioritization, the individual abandons rational methodology at the level of goal achievement.

L1 - Goal Prioritization
L2 - Goal Achievement

If I am at a party, I have desired outcomes for my interactions and experiences, and these produce goals. In prioritizing my goals I am not abandoning them, but placing them in the context of having desires that exist outside of that immediate situation. I am still trying to achieve my goals, but by correctly assessing their relevance to overarching goals, I either prioritize or de-prioritize them. If I de-prioritize my party goals, I limit the effort I put into their achievement. So even if I could think of more potent and effective strategies for achieving my party goals, I abandon these strategies.

L1 rationality limits L2 rationality within a low-priority goal context, rationally condoning the use of irrational methods in minor goal achievement.

comment by wedrifid · 2012-04-12T15:52:30.233Z · LW(p) · GW(p)

Saying something false constitutes exceptionally bad rhetoric here.

That seems false. Perhaps saying something false for the purpose of supporting something else is bad rhetoric. There are possibly also particular ways of saying something false, or particular contexts, where saying the false thing is bad rhetoric. But for the most part, saying false things is legitimate rhetoric for a bad conclusion.

comment by Nectanebo · 2012-04-13T05:08:06.408Z · LW(p) · GW(p)

Maybe this was a poor choice, but it was what I chose to do.

Good, now that you've realised that, perhaps you might want to abandon that name.

The idea of using your time and various other resources carefully and efficiently is a good virtue of rationality. Framing it as being irrational is inaccurate and kinda incendiary.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-13T12:57:42.911Z · LW(p) · GW(p)

The idea of using your time and various other resources carefully and efficiently is a good virtue of rationality. Framing it as being irrational is inaccurate and kinda incendiary.

Here is my reasoning for choosing this title. If you don't mind could you read it and tell me where you think I am mistaken.

I realize that saying 'rationally irrational' appears to be a contradiction. However, the idea is talking about the use of rational methodology at two different levels of analysis. Rationality at the level of goal prioritization potentially results in the adoption of an irrational methodology at the level of goal achievement.

L1 - Goal Prioritization
L2 - Goal Achievement

L1 rationality can result in a limitation of L2 rationality within a low-priority goal context. Let's say that someone was watching me play a game of soccer (since I have been using the soccer analogy). As they watched, they might critique the fact that my strategy was poorly chosen, and that the overall effort exerted by me and my teammates was lackluster. To this observer, who considers themselves a soccer expert, it would be clear that my and my team's performance was subpar. The observer took notes of all our flaws and inefficient habits, then after the game wrote them all up to present to us. Upon telling me all these insightful critiques, the observer is shocked to hear that I am grateful for his effort, but am not going to change how I or my team plays soccer. He tries to convince me that I am playing wrong, that we will never win the way I am playing. And he is correct. To any knowledgeable observer I was playing the game of soccer poorly, even irrationally. Without knowledge of L1 (which is not observable), the execution of L2 (which is observable) cannot be deemed rational or irrational, and in my opinion, will appear irrational in many situations.

Would you say that to you it appears irrational that I have chosen to label this idea 'rationally irrational'? If that is correct, I would suggest that I have some L1 that you are unaware of, and that while my labeling is irrational in regard to L2 (receiving high karma points / recognition for publishing my essay on your blog), I have de-prioritized this L2 for the sake of my L1. What do you think?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T13:57:27.060Z · LW(p) · GW(p)

I think you're welcome to have whatever goals you like, and so are the soccer players. But don't be surprised if the soccer players, acknowledging that your goal does not in fact seem to be at all relevant to anything they care about, subsequently allocate their resources to things they care about more and treat you as a distraction rather than as a contributor to their soccer-playing community.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-19T12:29:39.746Z · LW(p) · GW(p)

What would you say if I said caring about my goals in addition to their own goals would make them a better soccer player?

Replies from: TheOtherDave, TimS
comment by TheOtherDave · 2012-04-19T15:09:05.206Z · LW(p) · GW(p)

I would say "Interesting, if true. Do you have any evidence that would tend to indicate that it's true?"

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-21T00:48:53.796Z · LW(p) · GW(p)

I'm trying to find a LW essay, I can't remember what it is called, but it is about maximizing your effort in areas of highest return. For example, if you are a baseball player, you might be around 80% in terms of pitching and 20% in terms of base running. Going from 80% upward in pitching becomes exponentially harder, whereas learning the basic skill set to jump from dismal to average base running does not.

Basically, rather than continuing to grasp at perfection in one skill set, it is more efficient to raise a variety of skill sets related to the target field up to basic levels. Do you know the essay I am talking about? The allocation logic is sketched in code below.
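
Here is a minimal sketch of that allocation logic in Python; the diminishing-returns curve and all numbers are illustrative assumptions, not anything from the essay in question:

```python
# Toy model of "invest effort where the marginal return is highest":
# each skill improves along a diminishing-returns curve, so a unit of
# effort buys more for a weak skill than for a near-mastered one.

def marginal_gain(level: float) -> float:
    """Improvement per unit of effort; shrinks as level approaches 1.0."""
    return (1.0 - level) * 0.1  # illustrative linear-in-gap curve

def best_investment(skills: dict) -> str:
    """Return the skill whose next unit of effort buys the most improvement."""
    return max(skills, key=lambda name: marginal_gain(skills[name]))

skills = {"pitching": 0.80, "base_running": 0.20}
print(best_investment(skills))  # -> base_running
```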

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-21T01:12:39.056Z · LW(p) · GW(p)

Doesn't sound familiar.

Regardless, I agree that if I value an N% improvement in skill A and skill B equivalently (either in and of themselves, or because they both contribute equally to some third thing I value), and an N% improvement in A takes much more effort than an N% improvement in B, then I do better to devote my resources to improving B.

Of course, it doesn't follow from that that for any skill B, I do better to devote my resources to improving B.
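To make the allocation rule concrete, here is a minimal sketch (the function and numbers are my own invention, not from any LW essay): spend the next unit of effort wherever it buys the most improvement.

```python
# Toy marginal-return allocation (illustrative; numbers invented).
# Given the effort cost of the next 1% of improvement in each skill,
# invest the next unit of effort where improvement is cheapest.

def next_unit_allocation(effort_per_percent: dict) -> str:
    """Return the skill whose next 1% improvement is cheapest."""
    return min(effort_per_percent, key=effort_per_percent.get)

# Pitching is already at 80%, so its next 1% is expensive;
# base running is dismal, so its next 1% is cheap.
costs = {"pitching": 40.0, "base_running": 2.0}  # hours per 1% gained
print(next_unit_allocation(costs))  # -> "base_running"
```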

Replies from: HungryTurtle, adamisom
comment by HungryTurtle · 2012-04-21T02:01:21.719Z · LW(p) · GW(p)

Ok, then the next question: would you agree that, for a human, skills related to emotional and social connection maximize that person's productivity and health?

Replies from: TheOtherDave, TimS
comment by TheOtherDave · 2012-04-21T03:27:32.895Z · LW(p) · GW(p)

No.
Though I would agree that for a human, skills related to emotional and social connection contribute significantly to their productivity and health, and can sometimes be the optimal place to invest effort in order to maximize productivity and health.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-21T11:21:38.550Z · LW(p) · GW(p)

Ok, so these skill sets contribute significantly to the productivity and health of a person. Then would you disagree with the following:

  1. Social and emotional skills significantly contribute to health and productivity.
  2. Any job, skill, hobby, or task that is human-driven can benefit from an increase in the acting agent's health and productivity.
  3. Therefore social and emotional skills are relevant (to some degree) to all other human-driven skill sets.
Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-21T15:18:35.396Z · LW(p) · GW(p)

Sure, agreed.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-23T21:00:46.164Z · LW(p) · GW(p)

Ok, so then I would say that the soccer player, in being empathetic to my objectives, would be strengthening his or her emotional/social capacity, which would benefit his or her health/productivity, and thus benefit his or her soccer playing.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-23T21:31:48.237Z · LW(p) · GW(p)

I'm not sure what you mean by "being empathetic to [your] objectives," but if it involves spending time doing things, then one question becomes whether spending a given time doing those things produces more or less improvement in their soccer playing.

I would certainly agree that if spending their available time doing the thing you suggest (which, incidentally, I have completely lost track of what it is, if indeed you ever specified) produces more of an improvement in the skills they value than doing anything else they can think of, then they ought to do the thing you suggest.

comment by TimS · 2012-04-21T02:24:33.385Z · LW(p) · GW(p)

I wouldn't agree to that statement without a lot more context about a particular person's situation.

comment by adamisom · 2012-04-21T01:17:45.265Z · LW(p) · GW(p)

TheOtherDave is being clear. There are obviously two considerations, right?

  1. The comparative benefit of improving two skillsets (take into account comparative advantage!)
  2. The comparative cost of improving two skillsets

Conceptually easy.

comment by TimS · 2012-04-19T12:36:12.905Z · LW(p) · GW(p)

Who are you talking about? Your example was a team filled with low-effort soccer players. Specifically, whose goals are you considering besides your own?

comment by TimS · 2012-04-12T15:33:32.376Z · LW(p) · GW(p)

Can you be more concrete with your soccer example? I don't understand what you mean.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T18:24:48.364Z · LW(p) · GW(p)

In a game of soccer, you could want to improve teamwork, you could want to win the game, you could want to improve your skills, you could want to make a good impression. All of these are potential goals of a game of soccer. There is a group of objectives that would most accurately achieve each of these possible goals. I am suggesting that, for each goal, achieving the goal to the utmost level requires an objective with relatively high resource demands.

Is that better?

Replies from: TimS
comment by TimS · 2012-04-12T18:53:23.128Z · LW(p) · GW(p)

An observer who thinks you are being stupid for not committing all possible effort to achieving your goal in the game (for example, impressing others) needs a justification for why achieving this goal is that important. In the absence of background like "this is the only chance for the scout from the professional team to see you play, sign you, and cause you to escape the otherwise un-escapable poverty and starvation," the observer seems like an idiot.

I hope you don't think pointing out the apparent idiocy of the observer is an insightful lesson. In short, show some examples of people here (or anywhere) making the mistake (or mistakes) you identify, or stop acting like you are so much wiser than us.

comment by A1987dM (army1987) · 2012-04-11T15:29:15.084Z · LW(p) · GW(p)

Do you ever think it is detrimental having goals?

What would that even mean? By ‘detrimental’, do you mean something different from ‘making it harder to achieve your goals’?

Replies from: HungryTurtle, TheOtherDave
comment by HungryTurtle · 2012-04-11T17:34:04.420Z · LW(p) · GW(p)

Detrimental means damaging, but you could definitely read it as damaging to goals.

So do you think it is ever damaging or ever harmful to have goals?

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-11T17:45:05.832Z · LW(p) · GW(p)

Goals can be damaging or harmful to each other, but not to themselves. And if you have no goal at all, there's nothing to be damaged or harmed.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T19:51:42.169Z · LW(p) · GW(p)

I think goals can be damaging to themselves. For example, I think anyone who has the explicit goal of becoming the strongest they can be, effectively limits their strength by the very nature of this type of statement.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-11T20:18:35.771Z · LW(p) · GW(p)

Can you clarify this? What do you think is the goal other than 'be the strongest I can be' that would result in me ending up stronger? (Also, not sure what sort of strength you are talking about here: physical? psychological?)

Replies from: HungryTurtle, TheOtherDave
comment by HungryTurtle · 2012-04-11T21:27:34.615Z · LW(p) · GW(p)

To me true strength is a physical and psychological balance. I feel that anyone who has the goal of being "the strongest" (whether they mean physically, mentally, in a game, etc.) is seeking strength out of a personal insecurity about their strength. Being insecure is a type of weakness. Therefore having the goal of being the strongest will never allow them to be truly strong. Does that make sense? It is a very Daoist idea.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-11T21:40:50.223Z · LW(p) · GW(p)

Do you mean someone who wants to be 'the strongest' compared to others? I don't think that's ever a good goal, because whether or not it is achievable doesn't depend on you. But 'be the strongest' is also an incredibly non-specific goal, and problematic for that reason. If you break it down, you could say "right now, my weaknesses are that a) I'm out of shape and can't jog more than 1 mile, and b) I'm insecure about it"; then you could set sub-goals in both these areas, prioritize them, make plans for how to accomplish them, and evaluate afterwards whether they had been accomplished... and then make a new list of weaknesses, a new list of goals, and a new list of plans. You're doing a lot more than just trying to be as strong as you can, but you're not specifically holding back or trying not to be as strong as you can either, which is what your comment came across as recommending.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T22:35:28.253Z · LW(p) · GW(p)

No, not compared to others; just someone whose goal is to be the strongest. It is the fact that it is an "est"-based goal that makes it damaging to itself. I suppose if I were to take all the poetry out of the above statement, I would say that any goal that involves "ests" (fastest, strongest, smartest, wealthiest, etc.) involves a degree of abstraction that signifies a lack of true understanding of what the targeted quality or state of being actually encompasses, and that until said person better understands that quality or state they will never be able to achieve said goal.

Note that all your examples take my goal and rewrite it to have incredibly practical parameters. You define reachable objectives as targets in your examples, but the point of my example was that it was a goal that lacked such empirically bounded markers.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-12T01:02:53.958Z · LW(p) · GW(p)

OK. Makes sense. As I said in this comment, apparently my brain automatically converts abstract goals into sub-goals...so automatically that I hadn't even imagined someone could have a goal as abstract as 'be as strong as I can' without breaking it down and making it measurable and practicable, etc. I think I understand your point; it's the format of the goal that is damaging, not the content in itself.

Replies from: HungryTurtle, HungryTurtle
comment by HungryTurtle · 2012-04-12T01:26:08.124Z · LW(p) · GW(p)

Ahhh, I am a moron, I did not even read that. I read Dave's post prior to it and assumed it was irrelevant to the idea I was trying to convey. X_X

comment by HungryTurtle · 2012-04-12T01:24:03.533Z · LW(p) · GW(p)

Yes, exactly. And if you do convert abstract goals into sub-goals, you are abnormally brilliant. I don't know if you were taught to do that or deduced such a technique on your own, but the majority of people, the vast majority, are unable to do that. It is a huge problem, one many self-help programs address, and also one that the main paradigms of American education are working to counteract.

It really is no small feat.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-12T01:44:05.794Z · LW(p) · GW(p)

I think it comes from having done athletics as a kid... I was a competitive swimmer, and very quickly it became an obvious fact to me that in order to achieve the big abstract goal (being the fastest and winning the race) you had to train a whole lot. And since it's not very easy for someone who's 11 or 12 years old to wake up every morning at 5 and make it to practice, I turned those into little mini subgoals (example subgoals: get out of bed and make it to all the practices; try to keep up with the fast teenage boys in my lane; do butterfly even though it hurts).

So it just feels incredibly obvious to me that the bigger a goal is, the harder you have to train, and so my first thought is 'how do I train for this?'

comment by TheOtherDave · 2012-04-11T20:31:08.835Z · LW(p) · GW(p)

Well, there's "be stronger than I am right now."

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-11T20:57:08.030Z · LW(p) · GW(p)

OK, clarify: If I follow the goal 'be the strongest I can be' I will reach a level of strength X. What other goal would allow me to surpass the level of strength X (not just my initial level)?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T21:14:15.672Z · LW(p) · GW(p)

Again: "be stronger than I am right now."

Of course, I need to keep that goal over time, as phrased, rather than unpack it to mean "be stronger than I was back then".

Some context: when I was recovering from my stroke a few years back, one of the things I discovered was that having the goal of doing a little bit better every day than the day before was a lot more productive for me (in terms of getting me further along in a given time period) than setting some target far from my current state and moving towards it. If I could lift three pounds with my right arm this week, I would try for 3.5 next week. If I could do ten sit-ups this week, I would try for 12 next week. And so forth.

Sure, I could have instead had a goal of "do as many situps as I can", but for me, that goal resulted in my being able to do fewer situps.

I suspect people vary in this regard.
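A minimal sketch of that "relative to right now" rule, using the increments from the comment (the function itself is hypothetical, added for illustration):

```python
# Illustrative only: the "a little better than last week" goal as a rule.
# Each target ratchets up from the current measured state rather than
# aiming at a distant fixed endpoint.

def next_week_target(current: float, increment: float) -> float:
    """Next goal is always relative to where I am right now."""
    return current + increment

print(next_week_target(3.0, 0.5))  # pounds lifted: 3.0 -> 3.5
print(next_week_target(10, 2))     # sit-ups: 10 -> 12
```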

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-11T21:36:09.036Z · LW(p) · GW(p)

Sure, I could have instead had a goal of "do as many situps as I can", but for me, that goal resulted in my being able to do fewer situps.

I guess to me it seems automatic to 'unpack' a general goal like that into short-term specific goals. 'Be as fit as I can' became 'Improve my fitness' became 'improve my flexibility and balance' became 'start a martial art and keep doing it until I get my black belt' became a whole bunch of subgoals like 'keep practicing my back kick until I can use it in a sparring match'. It's automatic for me to think about the most specific level of subgoals while I'm actually practising, and only think about the higher-level goals when I'm revising whether to add new subgoals.

I guess, because this is the way my goal structure has always worked, I assume that my highest-level goal is by definition to become as good at X as I can. ('Be the strongest I can' has problems for other reasons, namely its non-specificity, so I'll replace it with something specific: let's say X = swimming speed.)

I don't know what the fastest speed is that my body is capable of, but I certainly want to attain that speed, not 0.5 km/h slower. But when I'm actually in the water training, or in bed at home trying to decide whether to get up and go train, I'm thinking about wanting to take 5 seconds off my 100 freestyle time. Once I've taken those 5 seconds off, I'll want to take another 5 seconds off. Etc.

I think the way I originally interpreted HungryTurtle's comment was that he thought you should moderate your goals to be less ambitious than 'be as good at X as you can' because having a goal that ambitious will cause you to lose. But you can also interpret it to mean that non-specific goals without measurable criteria, and not broken down into subgoals, aren't the most efficient way to improve. Which is very likely true, and I guess it's kind of silly of me to assume that everyone's brain creates an automatic subgoal breakdown like mine does.
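One way to picture this automatic breakdown is as a goal tree (a hypothetical rendering: the goal names come from the comments above, the structure and code are mine):

```python
# A goal tree: abstract goal at the root, actionable subgoals at the leaves.

goal_tree = {
    "be as fit as I can": {
        "improve flexibility and balance": {
            "start a martial art, continue to black belt": [
                "practice back kick until usable in sparring",
            ],
        },
    },
}

def leaves(tree):
    """Yield the most specific subgoals -- the ones acted on day to day."""
    for value in tree.values():
        if isinstance(value, dict):
            yield from leaves(value)
        else:
            yield from value

print(list(leaves(goal_tree)))  # the level you think about while training
```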

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T22:19:15.013Z · LW(p) · GW(p)

I guess to me it seems automatic to 'unpack' a general goal like that into short-term specific goals.

Sure, I can see that. Were it similarly automatic for me, I'd probably share your intuitions here.

comment by TheOtherDave · 2012-04-11T15:44:15.780Z · LW(p) · GW(p)

Hm.
If I have a goal G1, and then I later develop an additional goal G2, it seems likely that having G2 makes it harder for me to achieve G1 (due to having to allocate limited resources across two goals). So having G2 would be detrimental by that definition, wouldn't it?

Replies from: army1987, HungryTurtle
comment by A1987dM (army1987) · 2012-04-11T16:43:24.565Z · LW(p) · GW(p)

Hm... Yeah. So, having goals other than your current goals is detrimental (to your current goals). (At least for ideal agents: akrasia etc. mean that it's not necessarily true for humans.) But I took HungryTurtle to mean ‘having any goals at all’. (Probably I was primed by this.)

comment by HungryTurtle · 2012-04-11T17:54:47.002Z · LW(p) · GW(p)

Yes.

This is very interesting, but I was actually thinking about it in a different manner. I like your idea too, but this is more along the lines of what I meant:

Ultimately, I have goals for the purpose of arriving at some desired state of being. Over time, goals should change rationally to better reach desired states. However, what is viewed as a desired state of being also changes over time.

When I was 12 I wanted to be the strongest person in the world; when I was 18 I wanted to be a world-famous comedian. Both of these desired states undoubtedly have goals whose achievement would more readily and potently produce them. If I had adopted the most efficient methods of pursuing these dreams, I would have been making extreme commitments for the sake of something that would later turn out to be a false desired state. Until one knows one's end desired state, any goal that consumes more than a certain amount of resources is damaging to the long-term achievement of a desired state. Furthermore, I think people rarely know when to cut their losses. It could be that after investing X amount into desired state Y, the individual is unwilling to abandon this belief, even if in reality it is no longer their desired state. People get into relationships and are too afraid of having wasted all that time and those resources to get out. I don't know if I am being clear, but the train of my logic is roughly:

  1. Throughout the progression of time, what a person finds to be a desired state changes. (Perhaps the change is more drastic in some than in others, but I believe this change is normal. Just as through trial and error you refine your methods of goal achievement, through the trials and errors of life you reshape your beliefs and desires.)

  2. If desired states of being are dynamic, then it is not wise to commit to overly extreme goals or methods for the sake of my current desired state of being. (There needs to be some anticipation of the likelihood that my current desired state might not be in agreement with my final/actual desired state of being.)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T18:14:56.107Z · LW(p) · GW(p)

(nods)

I certainly agree that the goals people can articulate (e.g., "become a world-famous comedian" or "make a trillion dollars" or whatever) are rarely stable over time, and are rarely satisfying once achieved, such that making non-reversible choices (including, as you say, the consumption of resources) to achieve those goals may be something we regret later.

That said, it's not clear that we have alternatives we're guaranteed not to regret.

Incidentally, it's conventional on LW to talk about this dichotomy in terms of "instrumental" and "terminal" goals, with the understanding that terminal goals are stable and worth optimizing for but mostly we just don't know what they are. That said, I'm not a fan of that convention myself, except in the most metaphorical of senses, as I see no reason for believing terminal goals exist at all.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T19:43:25.987Z · LW(p) · GW(p)

But do you believe that most people pretty predictably experience shifts in goal orientation over a lifetime?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T19:47:30.458Z · LW(p) · GW(p)

I'd have to know more clearly what you mean by "goal orientation" to answer that.

I certainly believe that most (actually, all) people, if asked to articulate their goals at various times during their lives, would articulate different goals at different times. And I'm pretty confident that most (and quite likely all, excepting perhaps those who die very young) people express different implicit goals through their choices at different times during their lives.

Are either of those equivalent to "shifts in goal orientation"?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T19:50:23.201Z · LW(p) · GW(p)

Yes

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T20:28:42.310Z · LW(p) · GW(p)

Then yes, I believe that most people pretty predictably experience shifts in goal orientation over a lifetime.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T21:29:36.127Z · LW(p) · GW(p)

Ok, me too.

Then if you believe that, does it seem logical to set up some system of regulation, or some type of limitation, on the degree of accuracy you are willing to strive for in any current goal orientation?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T22:16:23.452Z · LW(p) · GW(p)

Again, I'm not exactly sure I know what you mean.

But it certainly seems reasonable for me to, for example, not consume all available resources in pursuit of my currently articulable goals without some reasonable expectation of more resources being made available as a consequence of achieving those goals.

Is that an example of a system of regulation or a type of limitation on the degree of accuracy I am willing to strive for in my current goal orientation?

Preventing other people from consuming all available resources in pursuit of their currently articulable goals might also be a good idea, though it depends a lot on the costs of prevention and the likelihood that they would choose to do so and be able to do so in the absence of my preventing them.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T22:50:13.847Z · LW(p) · GW(p)

But it certainly seems reasonable for me to, for example, not consume all available resources in pursuit of my currently articulable goals without some reasonable expectation of more resources being made available as a consequence of achieving those goals.

Is that an example of a system of regulation or a type of limitation on the degree of accuracy I am willing to strive for in my current goal orientation?

Yes, in a sense. What I was getting at is that the implementation of rationality, when one's capacity for rationality is high (i.e., when someone is really rational), is a HUGE consumption of resources. That is:

1.) Goal-orientations are dynamic.
2.) The implementation of genuine rational methodology to a goal-orientation consumes a huge amount of the individual's or group's resources.
3.) Therefore both individuals and groups would benefit from having a system for regulating when, and to what degree, to implement rational methodology in the pursuit of a specific goal.

This is what my essay is about. This is what I call rational irrationality, or rationally irrational: I see that a truly rational person, for the sake of resource preservation and long-term (terminal) goal achievement, would not want to achieve all their immediate goals in the fullest sense. This to me is different from having the goal of losing. You still want to achieve your goals, you still have immediate goals; you just do not place the efficient achievement of these goals as your top priority.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T23:29:21.877Z · LW(p) · GW(p)

I certainly agree that sometimes we do best to put off achieving an immediate goal because we're optimizing for longer-term or larger-scale goals. I'm not sure why you choose to call that "irrational," but the labels don't matter to me much.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T23:58:22.800Z · LW(p) · GW(p)

I call it irrational because, in pursuit of our immediate goals, we are ignoring or avoiding the most effective methodology, and thus doing what is potentially ineffective.

But hell, maybe on a subconscious level I did it to be controversial and attack accepted group norms O_O

comment by HungryTurtle · 2012-04-06T14:54:16.607Z · LW(p) · GW(p)

agreed!

comment by HungryTurtle · 2012-04-06T14:53:23.834Z · LW(p) · GW(p)

Rationality is the optimal tool for goal-winning, which is always what is desirable.

You can show that our current understanding of rationality or winning does not live up to the definition, but that is not a criticism of the definition.

With all due respect, you are missing the point I am trying to make with the "erring on the side of caution" segment. I would agree that in theory goal-winning is always desirable, but as you yourself point out, the individual's understanding of rationality or winning (goal-orientation) is flawed. You imply that as time progresses the individual will slowly but surely recognize what "true winning" is. In response to this notion, I would ask:

1.) How do you rationalize omitting the possibility that the individual will never understand what "true rationality" or "true winning" are? What evidence do you have that such knowledge is even obtainable? If there is none, then would it not be more rational to adjust one's confidence in one's goal-orientation to include the very real possibility that any immediate goal-orientation might later be revealed as damaging?

2.) Even if we make the assumption that eventually the individual will obtain a perfect understanding of rationality and winning, how does this omit the need for caution in early-stage goal-orientation? If, given enough time, I will understand true rationality, then rationally shouldn't all my goals up until that point is reached be approached with caution?

My point is that while one’s methodology in achieving goals can become more and more precise, there is no way to guarantee that the bearings at which we place our goals will lead us down a nourishing (and therefore rational) path; and therefore, the speed at which we achieve goals (accelerated by rationality) is potentially dangerous to achieving the desired results of those goals. Does that make sense?

Replies from: Arran_Stirton
comment by Arran_Stirton · 2012-04-07T05:30:52.809Z · LW(p) · GW(p)

1.) You should read up on what it really means to have "true rationality". Here's the thing: we don't omit the possibility that the individual will never understand what "true rationality" is; in fact, Bayes' Theorem shows that it's impossible to assign a probability of 1.0 to any theory of anything (never mind rationality). You can't argue with math.

2.) Yes, all of your goals should be approached with caution, just like all of your plans. We're not perfectly rational beings, that's why we try to become stronger. However, we approach things with due caution. If something is our best course of action given the amount of information we have, we should take it.

Also remember, you're allowed to plan for more than one eventuality; that's why we use probabilities and Bayes' theorem in order to work out which eventualities we should plan for.
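For what it's worth, the never-reaching-1.0 point is easy to see numerically. A minimal sketch (my own toy two-hypothesis setup, not from the comment):

```python
# Toy Bayes update: as long as the prior on the alternative is nonzero and
# the evidence has nonzero likelihood under it, the posterior on H stays
# strictly below 1.0 no matter how much confirming evidence arrives.

def bayes_update(prior_h, likelihood_h, likelihood_alt):
    """Posterior P(H|E) over the two-hypothesis space {H, not-H}."""
    evidence = likelihood_h * prior_h + likelihood_alt * (1.0 - prior_h)
    return likelihood_h * prior_h / evidence

p = 0.5
for _ in range(10):  # ten pieces of evidence favoring H at 9:1 odds
    p = bayes_update(p, 0.9, 0.1)
print(p)             # ~0.9999999997 -- approaching, but never equal to, 1.0
```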

comment by Furslid · 2012-03-07T21:02:58.373Z · LW(p) · GW(p)

So, sometimes actions that are generally considered rational lead to bad results in certain situations. I agree with this.

However, how are we to identify and anticipate these situations? If you have a tool other than rationality, present it. If you have a means of showing its validity other than the rationalist methods we use here, present that as well.

To say that rationality itself is a problem leaves us completely unable to act.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T16:50:29.741Z · LW(p) · GW(p)

So, sometimes actions that are generally considered rational lead to bad results in certain situations. I agree with this.

Well said! I was not trying to attack the use of rationality as a method, but rather to attack the immoderate use of this method. Rationality is a good and powerful tool for acting intentionally, but should there not be some regulation in its use? You state

To say that rationality itself is a problem leaves us completely unable to act.

I would counter: To say that there is no problem with rationality leaves us completely without reason to suspend action.

As you have suggested, rationality is a tool for action. Are there not times when it is harmful to not act? Are there no reasons to suspend action?

Replies from: TimS
comment by TimS · 2012-04-06T17:10:42.777Z · LW(p) · GW(p)

Rationality is a tool for making choices. Sometimes the rational choice is not to play.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T17:34:23.642Z · LW(p) · GW(p)

Which is why I call it rational irrationality, or rationally irrational if you would prefer. I do think it is possible to semantically stretch the conception of rationality to cover this, but I still think a fundamental distinction needs to be acknowledged between rationality that leads to taking control in a situation, and rationality that leads to intentional inaction.

Replies from: TimS
comment by TimS · 2012-04-06T18:47:48.117Z · LW(p) · GW(p)

I feel like you are conflating terminal values (goals) and instrumental values (means/effectiveness) a little bit here. There's really no good reason to adopt an instrumental value that doesn't help you achieve your goals. But if you aren't sure of what your goals are, then no amount of improvement of your instrumental values will help.

I'm trying to distinguish between the circumstance where you aren't sure if inactivity will help achieve what you want (if you want your spouse to complete a chore, should you remind them or not?) or aren't sure if inactivity is what you want (do I really like meditation or not?).

In particular, your worry about accuracy of maps and whether you should act on them or check on them seems to fundamentally be a problem about goal uncertainty. Some miscommunication is occurring because the analogy is focused on instrumental values. To push a little further on the metaphor, a bad map will cause you to end up in Venice instead of Rome, but improving the map won't help you decide if you want to be in Rome.

comment by Dustin · 2012-03-07T23:57:13.916Z · LW(p) · GW(p)

If the action you are engaging in is not helping you achieve your goals, then it is not rational.

You are describing a failure of rationality rather than rationality itself.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T16:41:18.488Z · LW(p) · GW(p)

What I am describing is the need for a safeguard against overly confident goal orientation.

comment by aliciaparr · 2012-03-08T12:27:09.470Z · LW(p) · GW(p)

I find it interesting, even telling, that nobody has yet challenged the assumptions behind the proposition "Rationality is a tool for accuracy," which would be that "rationality is the best tool for accuracy" and/or that "rationality is the sole tool that can be used to achieve accuracy."

Replies from: Richard_Kennaway, army1987, HungryTurtle
comment by Richard_Kennaway · 2012-03-08T13:27:21.899Z · LW(p) · GW(p)

Why would someone challenge a proposition that they agree with? While I don't see that the proposition "Rationality is a tool for accuracy" presumes "Rationality is the tool for accuracy", I'd agree with the latter anyway. Rationality is the only effective tool there is, and more than merely by definition. Praying to the gods for revelation doesn't work. Making stuff up doesn't work. Meditating in a cave won't tell you what the stars are made of. Such things as observing the world, updating beliefs from experience, making sure that whatever you believe implies something about what you will observe, and so on: these are some of the things in the rationality toolbox, these are the things that work.

If you disagree with this, please go ahead and challenge it yourself.

Replies from: AspiringKnitter, HungryTurtle
comment by AspiringKnitter · 2012-03-11T21:22:22.862Z · LW(p) · GW(p)

Praying to the gods for revelation doesn't work.

Supposing that you lived in a universe where you could pray for and would then always receive infallible instruction, it would be rational to pray.

If it leads to winning more than other possibilities, it's rational to do it. If your utility function values pretending to be stupid so you'll be well-liked by idiots, that is winning.

Replies from: None, Richard_Kennaway
comment by [deleted] · 2012-03-12T00:05:29.168Z · LW(p) · GW(p)

pretending

Key phrase. The accurate map leads to more winning. Acknowledging that X obviously doesn't work, but pretending that it does in order to win is very different from thinking X works.

ETA: It is all fine and dandy that I am getting upvotes for this, and by all means don't stop, but really I am just a novice applying Rationality 101 wherever I see fit in order to earn my black belt.

Replies from: HungryTurtle, AspiringKnitter
comment by HungryTurtle · 2012-04-06T15:17:52.909Z · LW(p) · GW(p)

The accurate map leads to more winning.

What evidence is there that the map is static? We make maps and the world transforms. Rivers become canyons; mountains become molehills (pardon the rhetorical ring, I could not resist). Given that all maps are approximations, isn't it rational to moderate one's navigation with the occasional off-course exploration to verify that no drastic changes have occurred in the geography?

And because I feel the analogy is pretty far removed at this point, what I mean by that is: if we have charted a goal-orientation based on our map that puts us on a specific trajectory, would it not be beneficial to occasionally abandon our goal-orientation to explore other trajectories for potentially new and more lucrative paths?

Replies from: None, Vaniver
comment by [deleted] · 2012-04-06T18:57:48.681Z · LW(p) · GW(p)

The evidence that the territory is static is called Physics. The laws do not change, and the elegant counterargument against anti-inductionism is that if induction didn't work our brains would stop working, because our brains depend on static laws.

There is no evidence whatsoever that the map is static. It should never be: you should always be prepared to update. There isn't a universal prior that lets you reason inductively about any universe.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T20:18:32.963Z · LW(p) · GW(p)

The evidence that the territory is static is called Physics

The territory is not static. Have you ever heard of quantum physics?

Replies from: army1987, None
comment by A1987dM (army1987) · 2012-04-06T22:39:33.404Z · LW(p) · GW(p)

Quantum physics is invariant under temporal translation too.

Replies from: Dmytry
comment by Dmytry · 2012-04-06T22:48:12.584Z · LW(p) · GW(p)

The laws don't change by definition. If something changes, we try to figure out some invariant description of how it changes, and call that a law. We presume a law even when we don't know the invariant description (as is the case with QM and gravity combined). If there were magic in the real world, we'd do the same thing and have the same kind of invariant laws of magic, even though the number of symmetries might be lower.

comment by [deleted] · 2012-04-06T20:47:06.675Z · LW(p) · GW(p)

The territory is governed by unchanging, perfectly global, basic, mathematically simple universal laws.

The Schrödinger equation does not change. Ever.

And furthermore, you can plot the time dimension as a spatial dimension and then navigate a model of an unchanging structure of world lines. That is an accepted model in General Relativity called the Block Universe. The Block Universe is 'static', that is, without time.

There is reason to believe the same can be done in quantum mechanics.

comment by Vaniver · 2012-04-06T15:37:18.746Z · LW(p) · GW(p)

would it not be beneficial to occasionally abandon our goal-orientation to explore other trajectories for potentially new and more lucrative paths?

Why would that not be part of the trajectory traced out by your goal-orientation, or a natural interaction between the fuzziness of your map and your goals?

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-06T18:26:58.360Z · LW(p) · GW(p)

Well, you would try to have that as part of your trajectory, but what I am suggesting is that there will always be things beyond your planning and your reasoning, so in light of this perhaps we should strategically deviate from those plans every now and then to double-check what else is out there.

Replies from: Vaniver
comment by Vaniver · 2012-04-06T18:43:52.155Z · LW(p) · GW(p)

I'm still confused by what you're considering inside my reasoning and outside my planning / reasoning. If I say "spend 90% of your time in the area with the highest known EV and 10% of your time measuring areas which have at least a 1% chance of having higher reward than the current highest EV, if they exist," then isn't my ignorance about the world part of my plan / reasoning, such that I don't need to deviate from those plans to double check?
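Vaniver's 90/10 plan reads like an explore/exploit policy. A minimal sketch under invented numbers (the area names and EVs are hypothetical):

```python
# Epsilon-greedy-style version of the 90/10 plan: exploit the highest
# known-EV area most of the time, spend the rest measuring alternatives.
# The point: ignorance about the world is built into the plan itself.

import random

def choose_area(known_ev, explore_prob=0.10):
    """Exploit the best-known area; explore with probability explore_prob."""
    if random.random() < explore_prob:
        return random.choice(list(known_ev))   # go measure something else
    return max(known_ev, key=known_ev.get)     # highest known EV

areas = {"area_A": 12.0, "area_B": 9.5, "area_C": 10.0}
print(choose_area(areas))  # usually "area_A", occasionally a random probe
```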

comment by AspiringKnitter · 2012-03-12T02:00:17.385Z · LW(p) · GW(p)

It is all fine and dandy that I am getting upvotes for this, and by all means don't stop, but really I am just a novice applying Rationality 101 wherever I see fit in order to earn my black belt.

Personally, I think that behavior should be rewarded.

Replies from: None
comment by [deleted] · 2012-03-12T02:13:33.521Z · LW(p) · GW(p)

Personally, I think that behavior should be rewarded.

Thank you, and I share that view. Why don't we see everyone doing it? Why, I would be overjoyed if everyone was so firmly trained in Rat101 that comments like these were not special.

But now I am deviating into a should-world + diff.

Replies from: Ben_Welchner
comment by Ben_Welchner · 2012-03-12T02:36:02.874Z · LW(p) · GW(p)

I'm pretty sure we do see everyone doing it. Randomly selecting a few posts: in The Fox and the Low-Hanging Grapes the vast majority of comments received at least one upvote, the Using degrees of freedom to change the past for fun and profit thread has slightly more than 50% upvoted comments, and the Rationally Irrational comments also have more upvoted than not.

It seems to me that most reasonably-novel insights are worth at least an upvote or two at the current value.

EDIT: Just in case this comes off as disparaging LW's upvote generosity or average comment quality, it's not.

Replies from: AspiringKnitter
comment by AspiringKnitter · 2012-03-12T02:42:17.742Z · LW(p) · GW(p)

Though among LW members, people probably don't need to be encouraged to use basic rationality. If we could just upvote and downvote people's arguments in real life...

I'm also considering the possibility that MHD was asking why we don't see everyone using Rationality 101.

comment by Richard_Kennaway · 2012-03-12T07:47:49.821Z · LW(p) · GW(p)

Praying to the gods for revelation doesn't work.

Supposing that you lived in a universe where you could pray for and would then always receive infallible instruction, it would be rational to pray.

I'm talking about the real world, not an imaginary one. You can make up imaginary worlds to come up with a counterexample to any generalisation you hear, but it amounts to saying "Suppose that were false? Then it would be false!"

comment by HungryTurtle · 2012-04-06T15:09:22.802Z · LW(p) · GW(p)

Richard,

Would you agree that the speed at which you try to do something is directly correlated with the accuracy you can produce?

I imagine the faster you try to do something, the poorer your results will be. Do you disagree?

If it is true that at times accuracy demands some degree of suspension or inaction, then I would suggest to you that tools such as praying, meditating, and "making stuff up" serve to slow the individual down, allowing for better accuracy in the long term; whereas increasing intentionality will, beyond some threshold, decrease overall results.

Does that make sense?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-04-06T20:30:05.775Z · LW(p) · GW(p)

Slowing down will only give better results if it's the right sort of slowing down. For example, slowing down to better attend to the job, or slowing down to avoid exhausting oneself. But I wasn't talking about praying, meditating, and making stuff up as ways of avoiding the task, but as ways of performing it. As such, they don't work.

It may be very useful to sit for a while every day doing nothing but contemplating one's own mind, but the use of that lies in more clearly observing the thing that one studies in meditation, i.e. one's own mind.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T15:31:43.318Z · LW(p) · GW(p)

But I wasn't talking about praying, meditating, and making stuff up as ways of avoiding the task, but as ways of performing it. As such, they don't work.

I am suggesting the task they perform has two levels. The first is a surface structure, defined by whatever religious or creative purpose the performer thinks they serve. In my opinion, the medium of this level is completely arbitrary. It does not matter what you pray to, or if you meditate or pray, or play baseball for that matter. The importance of such actions comes from their deep structure, which develops beneficial cognitive, emotional, or physical habits.

Prayer is in many cultures a means of cultivating patience and concentration. The idea, which has been verified by the field of psychology, is that patience, concentration, reverence, toleration, empathy, sympathy, anxiety, serenity, these and many other cognitive dispositions are not the result of a personality type, but rather the result of intentional development.

Within the last several decades there has been a revolution within the field of psychology as to what action is. Previously, cognitive actions were not thought of as actions, and therefore not believed to be things that you develop. It was believed that some people were just born kinder, more stressed, more sympathetic, etc., that there were cognitive types. We know now that this is not true.

While everyone is probably born with a different degree of competency in these various cognitive actions (just as some people are probably born slightly better at running, jumping, or other more physical actions), more important than innate talent is the amount of work someone puts into a capacity. Someone born with a below-average disposition for running can work hard and become relatively fast. In the same way, while there are some biological grounds and limitations, for the majority of people the total level of capacity they are able to achieve in some action is determined by the amount of work they devote to improving that action. If you work out your tolerance muscles, you will become able to exhibit greater degrees of tolerance. If you work out your concentration muscle, you will be able to concentrate to greater degrees. How do you work out tolerance or concentration muscles? By engaging in tasks that require concentration or tolerance.

So, does praying 5 times a day to some God have an impact on reality? Well, if you mean in the sense that a "God" listens to and acts on your prayers, no. But if you mean in the sense of the commitment to keeping a schedule and concentrating on one thing 5 times a day, then yes, it does. It impacts the reality of your cognition and consciousness.

So, returning to what I was saying about suspending action. You interpreted it as "avoiding a task," but I would suggest that suspending action here has a deeper meaning. It is not avoiding a task, but developing competencies in caution, accepting a locus of control, limitations, and acceptance. There are more uses in meditation than just active reflection on thought. In fact, most meditation discourages thought. The purpose is to clear your mind, suggesting that there is a benefit in reducing intentionality to some degree. Now, let me be clear that what I am advocating here is very much a value-based position. I am saying there is a benefit in exercising the acceptance of limitations to some degree, a benefit in caution to some degree, etc. I would be interested to know: do you disagree?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-04-12T11:17:22.604Z · LW(p) · GW(p)

That is a lot of words, but it seems to me that all you are saying is that meditation (misspelled as "mediation" throughout) can serve certain useful purposes. So will a spade.

BTW, slowing a drum rhythm down for a beginner to hear how it goes is more difficult than playing it at full speed.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T12:14:12.020Z · LW(p) · GW(p)

it seems to me that all you are saying is that meditation (misspelled as "mediation" throughout) can serve certain useful purposes.

Along with religion, praying, and making stuff up. Meditating (thanks for the correction) was just an example.

BTW, slowing a drum rhythm down for a beginner to hear how it goes is more difficult than playing it to speed.

Oh, and I don't get the spade comment either. I mean, I agree a spade has useful purposes, but what is the point of saying so here?

Not exactly sure what you are trying to express here. Do you mind explaining further?

comment by A1987dM (army1987) · 2012-04-06T19:06:29.045Z · LW(p) · GW(p)

Cox's theorem does show that Bayesian probability theory (around here a.k.a. epistemic rationality) is the only way to assign numbers to beliefs which satisfies certain desiderata.
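For reference, the updating rule those desiderata pin down is ordinary conditionalization, i.e. Bayes' theorem (standard statement, added here for context):

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},
\qquad
P(E) = P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)
```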

comment by HungryTurtle · 2012-04-06T15:03:47.078Z · LW(p) · GW(p)

Aliciaparr,

This is in a sense the point of my essay! I define rationality as a tool for accuracy, because I believed that was a commonly held position on this blog (perhaps I was wrong). But if you look at the overall point of my essay, it is to suggest that there are times when what is desired is achieved without rationality, therefore suggesting alternative tools for accuracy. As to the idea of a "best tool", as I outline in my opening, I do not think such a thing exists. A best tool implies a universal tool for some task. I think that there are many tools for accuracy, just as there are many tools for cooking. In my opinion it all depends on what ingredients you are faced with and what you want to make out of them.

Replies from: DSimon, Arran_Stirton
comment by DSimon · 2012-04-06T20:20:32.519Z · LW(p) · GW(p)

Maybe think about it this way: what we mean by "rationality" isn't a single tool, it's a way of choosing tools.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T15:04:55.152Z · LW(p) · GW(p)

That is just pushing it back one level of meta-analysis. The way of choosing tools is still a tool. It is a tool for choosing tools.

Replies from: DSimon
comment by DSimon · 2012-04-12T04:42:08.798Z · LW(p) · GW(p)

I agree, and the thing about taking your selection process meta is that you have to stop at some point. If you have more than one tool for choosing tools, how do you choose which one to pick for a given situation? You'd need a tool that chooses among tools that choose tools! Sooner or later you have to have a single top-level tool or algorithm that actually kicks things into motion.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T12:51:12.792Z · LW(p) · GW(p)

This is where we disagree. To have rationality be the only tool for choosing tools is to assume all meaningful action is derived from intentional transformation. I disagree with this idea, and I think modern psychology disagrees as well. It is not only possible, it is at times essential, to have meaningful action that is not intentionally driven. If you accept this statement as fact, then it implies the need for a secondary system of tool choosing; more specifically, a type of emergency-brake system. You have rationality as the choosing system, and then a secondary system that shuts the first down when it is necessary to halt further production of intentionality.

Replies from: DSimon
comment by DSimon · 2012-04-12T20:05:06.657Z · LW(p) · GW(p)

[I]t is at times essential to have meaningful action that is not intentionally driven.

If by "not intentionally driven" you mean things like instincts and intuitions, I agree strongly. For one thing, the cerebral approach is way too slow for circumstances that require immediate reactions. There is also an aesthetic component to consider; I kind of enjoy being surprised and shocked from time to time.

Looking at a situation from the outside, how do you determine whether intentional or automatic action is best? From another angle, if you could tweak your brain to make certain sorts of situations trigger certain automatic reactions that otherwise wouldn't, or vice versa, what (if anything) would you pick?

These evaluations themselves are part of yet another tool.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-12T21:04:03.997Z · LW(p) · GW(p)

If by "not intentionally driven" you mean things like instincts and intuitions, I agree strongly.

Yes, exactly.

if you could tweak your brain to make certain sorts of situations trigger certain automatic reactions that otherwise wouldn't, or vice versa, what (if anything) would you pick?

I think both intentional and unintentional action are required at different times. I have tried to devise a method of regulation, but as of now the best I have come up with is moderating against extremes on either end. So if it seems like I have been overly intentional in recent days, weeks, etc., I try to rely more on instinct and intuition. It is rarely the case that I am relying too heavily on the latter ^_^

Replies from: DSimon
comment by DSimon · 2012-04-13T01:56:35.041Z · LW(p) · GW(p)

So if it seems like I have been overly intentional in recent days, weeks, etc, I try to rely more on instinct and intuition.

Right, this is a good idea! You might want to consider an approach that decides which situations best require intuition and which require intentional thought, rather than aiming only to keep their balance even (though the latter does approximate the former to the degree that these situations pop up with equal frequency).

Overall, what I've been getting at is this: Value systems in general have this property that you have to look at a bunch of different possible outcomes and decide which ones are the best, which ones you want to aim for. For technical reasons, it is always possible (and also usually helpful) to describe this as a single function or algorithm, typically around here called one's "utility function" or "terminal values". This is true even though the human brain actually physically implements a person's values as multiple modules operating at the same time rather than a single central dispatch.

In your article, you seemed to be saying that you specifically think that one shouldn't have a single "final decision" function at the top of the meta stack. That's not going to be an easily accepted argument around here, for the reasons I stated above.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-13T12:26:39.205Z · LW(p) · GW(p)

In your article, you seemed to be saying that you specifically think that one shouldn't have a single "final decision" function at the top of the meta stack. That's not going to be an easily accepted argument around here, for the reasons I stated above.

Yeah, this is exactly what I am arguing.

For technical reasons, it is always possible (and also usually helpful) to describe this as a single function or algorithm, typically around here called one's "utility function" or "terminal values".

Could you explain the technical reasons more, or point me to some essays where I could read about this? I am still not convinced why it is more beneficial to have a single operating system.

Replies from: TheOtherDave, DSimon
comment by TheOtherDave · 2012-04-13T14:05:49.417Z · LW(p) · GW(p)

I'm no technical expert, but: if I want X, and I also want Y, and I also want Z, and I also want W, and I also want A1 through A22, it seems pretty clear to me that I can express those wants as "I want X and Y and Z and W and A1 through A22." Talking about whether I have one goal or 26 goals therefore seems like a distraction.

comment by DSimon · 2012-04-16T03:57:53.582Z · LW(p) · GW(p)

In regard to why it's possible, I'll just echo what TheOtherDave said.

The reason it's helpful to try for a single top-level utility function is because otherwise, whenever there's a conflict among the many many things we value, we'd have no good way to consistently resolve it. If one aspect of your mind wants excitement, and another wants security, what should you do when you have to choose between the two?

Is quitting your job a good idea or not? Is going rock climbing this weekend, instead of staying at home reading, a good idea or not? Different parts of your mind will have different opinions on these subjects. Without a final arbiter to weigh their suggestions and consider how important excitement and security are relative to each other, how do you decide in a non-arbitrary way?

So I guess it comes down to: how important is it to you that your values are self-consistent?

More discussion (and a lot of controversy on whether the whole notion actually is a good idea) here.
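A minimal sketch of that single final arbiter (the weights, options, and scores are invented for illustration, not anyone's actual values):

```python
# Toy top-level utility function: sub-values each score an outcome, and
# one weighted combination resolves conflicts between them consistently.

def utility(outcome, weights):
    """The single final arbiter: a weighted sum over sub-value scores."""
    return sum(weights[v] * outcome[v] for v in weights)

weights = {"excitement": 0.4, "security": 0.6}
options = {
    "quit_job":    {"excitement": 0.9, "security": 0.2},
    "stay_at_job": {"excitement": 0.3, "security": 0.8},
}
best = max(options, key=lambda name: utility(options[name], weights))
print(best)  # -> "stay_at_job" under these particular weights
```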

Replies from: TheOtherDave, HungryTurtle
comment by TheOtherDave · 2012-04-16T13:01:21.053Z · LW(p) · GW(p)

Without a final arbiter to weigh their suggestions and consider how important comfort and security are relative to each other, how do you do decide in a non-arbitrary way?

Well, there's always the approach of letting all of me influence my actions and seeing what I do.

comment by HungryTurtle · 2012-04-18T12:44:35.436Z · LW(p) · GW(p)

Thanks for the link. I'll respond back when I get a chance to read it.

comment by Arran_Stirton · 2012-04-07T05:44:46.303Z · LW(p) · GW(p)

If you're going to use the word rationality, use its definition as given here. Defining rationality as accuracy just leads to confusion and ultimately bad karma.

As for a universal tool for some task (i.e., updating on your beliefs)? Well, you really should take a look at Bayes' theorem before you claim that there is no such thing.

Replies from: HungryTurtle
comment by HungryTurtle · 2012-04-11T15:04:11.477Z · LW(p) · GW(p)

I am willing to look at your definition of rationality, but don't you see how it is problematic to attempt to prescribe one static definition to a word?

As for a universal tool for some task? (i.e. updating on your belief) Well you really should take a look at Bayes' theorem before you claim that there is no such thing.

Ok, so you do believe that Bayes' theorem is a universal tool?