Posts

The Rosenhan Experiment 2012-09-14T22:31:05.307Z

Comments

Comment by chaosmosis on The Winding Path · 2015-11-27T08:08:40.682Z · LW · GW

I like the vibes.

But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking. If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end. That is the path of the crank.

I don't like this part. First, thinking that you're closER to the truth is not really a problem; it's thinking you've arrived at the truth that arguably is. Second, I think sometimes human beings can indeed find the truth. Underconfidence is just as much a sin as overconfidence, but referring to hubris in the way that you did seems like it would encourage false humility. I think you should say something more like "for every hundred ideas professed to be indisputable truths, ninety-nine are false", and maybe add something about how there's almost never good justification for refusing to even listen to other people's points of view.

The path of rationality is a path without destination.

I don't agree with this either, or most of the paragraph before it: there are strong trends.

Comment by chaosmosis on LessWrong podcasts · 2012-12-21T08:24:52.795Z · LW · GW

http://lesswrong.com/lw/fr2/lesswrong_podcasts/83ky

Comment by chaosmosis on LessWrong podcasts · 2012-12-21T08:24:33.933Z · LW · GW

I think that even very small increases in x-risk are significant. I also think that lone LWers have the most impact when they're dealing with things like community attitudes.

Comment by chaosmosis on LessWrong podcasts · 2012-12-21T01:17:36.835Z · LW · GW

You have repeatedly falsely portrayed my arguments as defending belligerent tone, you've used that as an excuse to curse at me, you've defended an absurd model of my intentions, and you've shifted the topic of the discussion over and over again while repeatedly ignoring points that you find inconvenient. You have never produced a valid response to these objections; you continue to ignore them over and over and instead redirect the topic onto personal attacks on me.

This is all evidence for my belief that you're portraying your motives here dishonestly. The repeated aggression that you've shown, given the additional fact that there's a complete lack of warrants to support it, is strong evidence for my belief that you enjoy being a jerk. You claim that you would have gone into a different line of work if that were the case, but I think that's only incredibly weak evidence.

It's not as though I should have a low prior on a human being an asshole, even if they're not in finance. It's also not as though I should privilege your assertions as to what a counterfactual world would look like over your actual observable behavior within these comments.

Comment by chaosmosis on LessWrong podcasts · 2012-12-21T00:08:01.942Z · LW · GW

I'll concede all of the above, with the qualifier that you're not nearly as sophisticated as you'd like to pretend.

What I'd like to focus on, instead, are the things you've chosen not to talk about. You, once again, have selectively quoted my comments and ignored any points that made you look bad but that you didn't know how to answer. You've conceded that your understanding of my intentions is obviously irrational, that you've utilized strawmen often throughout this discussion, and that you're using guerrilla-style argumentative tactics against me.

Overall, I don't believe that this conflict is about the merits and risks of a reliance on tone at all, but rather it's about you wanting to make a status grab at the expense of actually furthering rational communication on this site. It is also about your desire to ruin my status, which I believe exists not for the reasons given in these comment trees but rather for reasons that I don't quite understand yet.

If I had to guess, you're just a bully who enjoys bullying whenever you find a context where you can get away with it. You feel powerful when you portray yourself as engaging in the tactics of Machiavelli, or when you remind yourself that you have friends on this website and I don't. You seem seriously messed up. Because of all this, I think you are a danger to LessWrong, and that your ego will increase existential risk by a significant amount.

I hope someone situated in a position to better analyze and respond to your behavior sees my comments, so that they can watch you with this perspective in mind. Take it with a grain of salt, please, but don't dismiss it out of hand either. Hopefully, your influence will be curbed before you do something dangerous with it.

Comment by chaosmosis on LessWrong podcasts · 2012-12-20T23:54:01.741Z · LW · GW

I think neither of those things. This isn't about stupidity or intelligence. This is about how people will behave within a conversation. More intelligence granted to a debater set on winning an argument and securing status does not make them better at accepting and learning from information in the context. It makes them better at defending themselves from needing to. It makes them better at creating straw men and clever but irrelevant counter-arguments.

I agree that tone can provide useful information. The difference between our positions is perhaps more one of emphasis than anything else, despite the stupid and superficial squabbling above. I'm focused on the dangers of relying on tone, whereas you're focused on the benefits.

I'm focused on the dangers of tone since I think that our intuitions about such an inherently slippery concept are untrustworthy, and I also think that it's human nature to perceive neutral differences in things like tone as hostile differences. As previously mentioned, I also think that LessWrongers allow tonal differences to cloud their judgement, and they feel justified in doing so because they are offended by other tones. Tone should be secondary to substance by a very long margin.

I am unsure to what extent you really disagree with any of this. You don't seem to attempt to refute my arguments about how a reliance on tone can be dangerous. Instead, you take pot shots at my credibility, and you say that tone also has legitimate uses. I don't want to deny or preclude legitimate uses of tone, so your position here doesn't clash much with mine.

We also both seem to perceive norms on LessWrong surrounding tone differently. I see a lot of the dangerous type of attitude towards tone on this site; the exchange above, where someone apparently strawmanned my comment three times, is a good example. Judging from your overall position, you seem to perceive this as less common. I don't know what could be done to resolve this aspect of our disagreement.

I'm not comfortable identifying with any group 'us' unless I know how that group is identified. I'd be surprised if I even willingly put myself in the same group as you (making a quoted-from-you 'us' unlikely). For better or worse I do not believe I relate to words, argument or communication in general the same way that you do. (And yes, I do believe that my 'us' would refer to the 'smart ones'---or at least ones that are laudable in some way that I consider significant.)

I was using that language tongue-in-cheek, to display the sort of perspective that I perceive as dangerous and that I think you might be trying to justify, not as something that I actually believe. I also thought it was ironic and amusing to place myself in the same category as you; I did so believing that you would reject that association, which was exactly what made it funny to me.

Comment by chaosmosis on Average utilitarianism must be correct? · 2012-12-20T23:26:32.758Z · LW · GW

Giving one future self u=10 and another u=0 is equally as good as giving one u=5 and another u=5.

So, to give a concrete example, you have $10. You can choose between gaining 5 utilons today and 5 tomorrow by spending half of the money today and half of it tomorrow, or spending all of it today and gaining 10 utilons today and 0 tomorrow. These outcomes both give you equal numbers of utilons, so they're equal.

Phil says that the moral reason they're both equal is because they both have the same amount of average utility distributed across instances of you. He then uses that as a reason that average utilitarianism is correct across different people, since there's nothing special about you.

However, an equally plausible interpretation is that the reason they are morally equal in the first instance is that the aggregate utilities are the same. Although average utilitarianism and aggregate utilitarianism overlap when N = 1, in many other cases they disagree. Average utilitarianism would rather have one extremely happy person than twenty moderately happy people, for example. This disagreement means that average and aggregate utilitarianism are not the same (they also have different metaethical justifications supporting them), which means he's not justified in either his initial privileging of average utilitarianism or his extrapolation of it to large groups of people.
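To make the divergence concrete, here is a minimal Python sketch (my own hypothetical utility numbers, chosen only for illustration): with a fixed population the two rules rank outcomes identically, but once population size varies they come apart.

```python
def total_utility(utilities):
    return sum(utilities)

def average_utility(utilities):
    return sum(utilities) / len(utilities)

# Fixed population: one future self at 10 and another at 0,
# versus two future selves at 5 each.
a, b = [10, 0], [5, 5]
assert total_utility(a) == total_utility(b)      # 10 == 10
assert average_utility(a) == average_utility(b)  # 5.0 == 5.0

# Varying population: one extremely happy person versus
# twenty moderately happy people.
one, twenty = [100], [60] * 20
assert average_utility(one) > average_utility(twenty)  # 100 > 60
assert total_utility(one) < total_utility(twenty)      # 100 < 1200
```

Because the fixed-population case cannot distinguish the two rules, it cannot be evidence for preferring one of them in the cases where they disagree.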

Comment by chaosmosis on LessWrong podcasts · 2012-12-20T09:10:46.810Z · LW · GW

I said that length was useful insofar as it added to communication. Was I particularly inefficient? I don't think so. As is, it's somewhat ironic, but I think only superficially so, because there isn't any real clash between what I claim as ideal and what I engage in (because, again, I think I was efficient). And there's no stupidity there at all, or at least none that I see. You'll need to go into more detail here.

Comment by chaosmosis on LessWrong podcasts · 2012-12-20T09:08:45.663Z · LW · GW

I understand what you're getting at, but what specifically is important about this change? I see the added resource intensity as one thing, but that's all I can think of, whereas I'm reading your comment as hinting at some more fundamental change that's taking place.

(A few seconds later, my thoughts.)

One change might be that the goals have shifted. It becomes about status and not about solving problems. Maybe that is what you had in mind? Or something else?

Comment by chaosmosis on LessWrong podcasts · 2012-12-20T08:57:30.160Z · LW · GW

You're not hiding behind a mask of politeness; I have no idea what I was thinking because that is clearly wrong. I do think you are hiding behind a mask of rationality, though. For example, you basically just conceded that you have no ability to provide any form of evidence for your claims that I am being disingenuous, and you then claimed that my asking you to do so is evidence that I am being disingenuous. That's both contradictory and suspicious.

The question about lack of specificity is not disingenuous, because it gets at the important point that I don't think I've done anything disingenuous. Additionally, I'm justified in suspecting your motives, because every time I make a good point you ignore it in your next response and the argument falls by the wayside; meanwhile, I have already given several good reasons why you are being disingenuous.

Your above comment is a good example: you manage to totally gloss over my obviously true claim that you've strawmanned me repeatedly. You also gloss over the fact that I showed that your beliefs about my intentions are incredibly stupid. You are ignoring any arguments that you find inconvenient; you are not actually engaging me so much as you are utilizing guerrilla tactics. You view me as an evil moron, so obviously I have no reason to believe that you would actually conduct yourself with some sort of fairness during this exchange. No one else should believe that either; they should definitely be suspicious of you and your motives at the point where all of these arguments indict your credibility and yet are just pushed to the side.

You say that you are allowing yourself to engage in low-status signalling. But I think that aggression is actually perceived as high status, that you would be aware of this, and that you are falsely portraying yourself as nobly enraged so that you garner even more sympathy. Also, your aggression serves to mask your guerrilla tactics, which I think is another reason you are using it.

Comment by chaosmosis on LessWrong podcasts · 2012-12-20T08:47:13.554Z · LW · GW

The below words are yours:

At a more mild level, where the disrespectful tone is below the threshold of outright swearing and abuse, tone gives reliable indications of how the person is likely to respond to continued conversation. It's a good indication of whether they will respond in good faith or need to be treated as a hostile rhetorician that is not vulnerable to persuasion (or learning).

You said that moderate differences in tone were good indicators of whether or not someone was rational enough to be capable of learning. You were vague about what specifically these indicators would be. I felt like that vagueness was suspicious, and could be used to justify over-privileging commenters who sound familiar. This is not me arguing in bad faith; this is me attempting to fill in a blank spot in your argument. Admittedly, I framed it with words that made you sound wrong. However, I still believe this is your belief, more or less.

If I'm wrong in my belief about your belief, fix your argument; fill in the blank spot. Which parts of moderate differences in tone are so useful that they can clearly show us when someone is incapable of learning?

If my comment isn't a wholly accurate portrayal, it still gets the general picture across. You responded to none of its content, choosing instead to dismiss it all as irrelevant and a strawman, and you chose to use this as a reason that people should stop listening to my arguments. But my comment was at worst unfair, and it illustrates very well the potential dangers of your position, so I don't think it should be ignored. People should take it with a grain of salt, perhaps, but don't tell them to ignore it.

No. The quoted point cannot be interpreted as saying that (easily or otherwise) by someone who comprehends English and is intending to truthfully represent the words.

You literally said that "tone... [is] a good indication of whether they... need to be treated as a hostile rhetorician that is not vulnerable to persuasion (or learning)." You think that tone alone is enough to tell us whether or not someone can learn. You think that people with certain tones can reliably be considered stupid.

I don't exactly agree. I think that tone has very limited use in assessing intelligence, and that evaluating argumentative content is a much more straightforward way of doing so. I distrust your and even my own intuitions about tone, also. I think that it's very probable that you dismiss legitimately smart people based simply on neutral differences in tone.

You never stated that you think that people who speak like us are the smart ones. But I believe that you believe that, and I honestly wouldn't trust you if you claimed otherwise, since it's basically human nature to rally around things like tone. However, if similarity isn't the bright line you're using for evaluating which kinds of tones are good and which kinds are bad, I still think the discussion would benefit from you specifying exactly what is.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-20T08:07:14.039Z · LW · GW

Quick Question, a few weeks later: would you be willing to take a guess as to what problems might have caused my comment to be downvoted? I'm stumped.

Comment by chaosmosis on LessWrong podcasts · 2012-12-20T08:00:22.774Z · LW · GW

Again, I'm not defending belligerent tone, I'm attacking overly apologetic tones. You tried that strawman once already. Stop falsely accusing me of doing the exact things that you actually are doing.

Be specific. What on earth am I doing that's so disingenuous? You both claim that I'm utilizing advanced level Dark Arts here, and I'm totally clueless on how that might be so. Your vagueness makes me think that maybe you are just blaming me for your own instinctive irrational responses to neutral differences in tones, instead of actually analyzing the (supposedly) manipulative persuasive tendencies in my comments.

I also want to dispute your framing. You frame it as though I'm demanding that DaFranker adjust to my norms, because I deserve it. I'm not. I'm saying that DaFranker would benefit from being able to accept everyone's tones more easily. It's still his choice, and I make no presumptions. Again, I use no Dark Arts, you just use Dark Arts framing tactics to make it look like I do.

Your beliefs about my intentions are wildly inaccurate. Am I supposed to be some kind of evil moron who has a secret plan to impede rational communication? Why would I want to do that? Why would anyone want to do that? What on earth are you basing this belief of yours on?

Overall, you're hiding behind a mask of rationality and politeness while engaging in egregious instances of the things that I criticize. Your criticisms of me are not only inaccurate but actually apply much better to your own comments. This seems like the perfect illustration of my above claim that "we've changed the game to make it more superficially rational, but [it's actually just] more resource intensive and it masks the underlying mindsets that are bad instead of actually changing them".

Comment by chaosmosis on LessWrong podcasts · 2012-12-20T07:37:03.973Z · LW · GW

In what way am I deliberately manipulating people using their emotional intuitions? Can you give an example? I'm trying to frame myself in contrast to the norm, I agree with that. I don't think that should be perceived as a bad thing. I think the fact that you perceive it as a bad thing is itself a bad thing.

Side note: why am I the one you chose to rebuke, and not Wedrifid? His comment is clearly more illustrative of the things that you are criticizing. My guess: status, and that's all. He's a tough target, but I'm not. I'm going to mentally flag this as a data point pointing towards my belief that LessWrong has an unhealthy level of preoccupation with status that is somehow unquestioningly accepted as normal and healthy. Just a more general point that I felt like making.

Your overall argument is that because it is difficult for you to engage my arguments rationally, I ought to change. I think that this mindset is backwards. I think that a much better option would be for you to work on changing what you're able to engage rationally. Most of the world does not speak in the same way that LessWrong does, but they still have valuable things to say. Recognizing that and adapting to it would be beneficial.

I think the problem here is much more you than me. There is a big problem when you strawman someone three times based only on tone. I especially think this is true since I see nothing in my comment similar to the kind of manipulations that you describe. I simply use terminology that comes easily to me (although more accurately, even now it's still being moderated a little bit; this moderation is towards the type of moderated rational discourse that you want, however, and not away from it). I don't see anything in my comments that you might be having problems with except their different-ness. If you're unable to engage different types of tones in rational ways, it would be productive for you to change that. Why am I the one at fault here?

Comment by chaosmosis on Average utilitarianism must be correct? · 2012-12-20T07:31:50.189Z · LW · GW

It could be either, so he's not justified in assuming that it's the average one in order to support his conclusion. He's extrapolating beyond the scope of their actual equivalence; that extrapolation is the only reason his argument brings anything new to the table at all.

He's using their mathematical overlap in certain cases as proof that, in the cases where they don't overlap, the average should be preferred to the total. That makes no sense at all, when thought of in this way. That is what I think the hole in his argument is.

Comment by chaosmosis on LessWrong podcasts · 2012-12-19T15:43:02.149Z · LW · GW

Sometimes these are bad, usually not. It's difficult for me to outline exactly what kind of disclaimers are bad because I think they're bad whenever they do more to prevent the earnest engagement of ideas than to help it, and determining which category specific cases fall in depends a lot on contextual things that I'm having a difficult time describing.

I know it when I see it, basically. It's easier for me to ask you to make recourse to your own experiences than it is for me to describe these kind of situations all by myself. Personally, lots of the time when I'm writing comments on LessWrong I spend about 30 seconds thinking up the points I want to go over, and then a couple minutes figuring out how to communicate that message in such a way that it will actually be persuasive to my audience. I feel like I spend much more time here trying to "dress up" my comments in the jargon of the site than I do actually learning things. I expect that many other people feel similarly or at least empathize with and understand my perspective on this.

Comment by chaosmosis on LessWrong podcasts · 2012-12-19T15:34:38.807Z · LW · GW

Useful discussion involves engagement with other people's ideas, and the ability to engage other people's ideas is lessened when you have to wade through layers of disclaimers in order to get there. I think there are legitimate and illegitimate uses of disclaimers, but that they're often used wrongly.

Your second point is stupid. There's a middle ground between the wholly impersonal and inefficient pedantry that often goes on on this site and cursing people out. I also think that, despite the somewhat ironic message of your paragraph, your rudeness is intentional; you're trying to attack my status behind a stupid "ironically" donned mask. Don't be a fucking dick, in other words.

I think your disclaimer that the paragraph is "ironic" serves as a good example of how disclaimers can mask conflict and make it more difficult to straightforwardly engage other people's ideas. Spending my time addressing that "lolololz u suck just trolling but 4srs" type paragraph in which you were an asshole was way more inefficient than it would have been if you just said that you want me to have lower status and that you disagreed for the reasons you do.

At a more mild level, where the disrespectful tone is below the threshold of outright swearing and abuse, tone gives reliable indications of how the person is likely to respond to continued conversation. It's a good indication of whether they will respond in good faith or need to be treated as a hostile rhetorician that is not vulnerable to persuasion (or learning).

I view this as code for "whether or not they have in-group status as a Yudkowskian Rationalist". Your point here can easily be interpreted as saying that if they don't talk like us then we can probably conclude that they're too stupid to learn, and that's what I think is wrong. Substantive argumentative content should be the litmus test here; minor differences in tone should barely matter at all compared to it. Your claim illustrates exactly what's wrong with the social norms of the site.

Comment by chaosmosis on Rationality Quotes December 2012 · 2012-12-16T01:20:34.886Z · LW · GW

"something deep within us expects, even demands moral order—in a world that shouts from the rooftops that no such order exists."

This conclusion is accurate unless he used a specifically Christian definition of "moral order".

Comment by chaosmosis on Average utilitarianism must be correct? · 2012-12-16T01:10:32.803Z · LW · GW

This is the same ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population

This is the part that I think is wrong. You don't assess your average utility when you evaluate your utility function; you evaluate your aggregate utility. 10 utilons plus 0 utilons is equivalent to 5 utilons plus 5 utilons not because their average is the same but because their total is the same.

Comment by chaosmosis on By Which It May Be Judged · 2012-12-10T07:09:49.442Z · LW · GW

It would depend on exactly what we reprogrammed within you, I expect.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-07T06:08:11.111Z · LW · GW

For example, Kant's categorical imperative is very close to a decision-theory or game theory approach if one thinks about it as asking "what would happen if everyone made the choice that I do?"

This is more like the opposite of game theory: it assumes that everyone takes the same action as you, instead of assuming that everyone does what is in their own best interest.
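A minimal sketch of the contrast, using a standard one-shot prisoner's dilemma with hypothetical payoff numbers: the universalizing test scores each action as if everyone mirrored it, while game-theoretic best-response reasoning holds the other player's action fixed.

```python
# Payoff table: (my_action, their_action) -> my payoff.
# Illustrative prisoner's dilemma numbers; "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def universalized_choice():
    # Kant-style question: "what would happen if everyone made the
    # choice that I do?" -- evaluate each action as if mirrored.
    return max("CD", key=lambda a: PAYOFF[(a, a)])

def best_response(their_action):
    # Game-theoretic question: given their action, what is best for me?
    return max("CD", key=lambda a: PAYOFF[(a, their_action)])

print(universalized_choice())  # 'C': mutual cooperation (3) beats mutual defection (1)
print(best_response("C"))      # 'D': defection is dominant...
print(best_response("D"))      # 'D': ...whatever the other player does
```

The two rules recommend opposite actions in the same game, which is the sense in which the universalizing test is closer to the opposite of best-response reasoning than to an instance of it.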

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-07T01:40:38.086Z · LW · GW

I hate that sub. I was subbed for like a week before I realized that it was always awful like that.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T23:33:25.241Z · LW · GW

I didn't have any specific format in mind, but you'd be right otherwise.

Comment by chaosmosis on LessWrong podcasts · 2012-12-06T21:02:46.079Z · LW · GW

I agree but also still think that tone is very overemphasized. We should encourage less reaction to tone instead of taking it as inevitable and a reasonable complaint in response to a comment, which is what I think that we currently do.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T21:00:56.017Z · LW · GW

Teach the best case that there is for each of several popular opinions. Give the students assignments about the interactions of these different opinions, and let/require the students to debate which ones are best, but don't give a one-sided approach.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T08:10:49.208Z · LW · GW

The Best Way Anyone Have Found So Far By A Fair Margin.

This also seems problematic, for the same reasons.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T06:58:11.633Z · LW · GW

Your post didn't come across as abrasive, Luke's did. Sorry for my bad communication.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T06:55:14.553Z · LW · GW

My impression is that Nietzsche tries to make his philosophical writings an example of his philosophical thought in practice. He likes levity and jokes, so he incorporates them in his work a lot. Nietzsche sort of shifts frames a lot and sometimes disorients you before you get to the meaning of his work. But, there are lots of serious messages within his sarcastic one liners, and also his work comprises a lot more than just sarcastic one liners.

I feel like some sort of comparison to Hofstadter might be apt but I haven't read enough Hofstadter to do that competently, and I think Nietzsche would probably use these techniques more than Hofstadter so the comparison isn't great.

Reading Nietzsche is partially an experience, as well as an intellectual exercise. That doesn't accurately convey what I want to say because intellectual exercises are a subset of experiences and all reading is a kind of experience, but I think that sentence gets the idea across at least.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T06:45:30.843Z · LW · GW

I'm trying to think what I would do. I don't know how I'd go about creating the groundwork for the conversation or selecting the person with whom I would converse. But here's an outline of how I think the conversation might go.

Me: What do you believe about epistemology?

Them: I believe X.

Me: I believe that empiricism works, even if I don't know why it works. I believe that if something is useful that's sufficient to justify believing in it, at least up to the point where it stops being useful. This is because I think changing one's epistemology only makes sense if it's motivated by one's values since truth is not necessarily an end in itself.

I think X is problematic because it ignores Y and assumes Z. Z is a case of bad science, and most scientists don't Z.

What do you believe about morality?

Them: I believe A.

Me: I believe that morality is a guide to human behavior that seeks to discriminate between right and wrong behavior. However, I don't believe that a moral system is necessarily objective in the traditional sense. I think that morality has to do with individual values and desires since desires are the only form of inherently motivational facts and are thus the key link between epistemic truth and moral guidance. I think individuals should pursue their values, although I often get confused when those values contradict.

I sort of believe A, in that _. But I disagree with A because X.

What do you think philosophy is and ought to be, if anything?

Them: Q.

Me: Honestly, I don't know or particularly care about the definitions of words because I'm mainly only interested in things that achieve my values. But, I think that philosophy, whatever its specific definition, ought to be aimed towards the purpose of clarifying morality and epistemology because I think that would be a useful step towards achieving my individual values.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T06:37:03.279Z · LW · GW

First, make sure that they're actually approachable at all.

Second, don't approach them in a combative fashion, like this post does. You need to approach them by understanding their specific view of morality and epistemology and their view of how philosophy relates to that, and how it should relate to it, or even if they think it does or should at all. Approach them from a perspective that is explicitly open to change. Ask lots of questions, then ask follow up questions. These questions shouldn't be combative, although they should probably expose assumptions that are at least seemingly questionable.

Third, make sure you know what you're getting into yourself. Some of those guys are very smart, and they have a lot more experience than you do. Do your homework.

Comment by chaosmosis on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T06:31:23.885Z · LW · GW

Hume and Nietzsche are both excellent exceptions to your general rule.

Also, #4 seems completely fine to me.

Comment by chaosmosis on LessWrong podcasts · 2012-12-06T06:25:15.199Z · LW · GW

1. Length is only good insofar as it adds to meaning. Most length on LessWrong doesn't do that. For example, I can summarize your first point as:

Long comments make arguments clearer and make communication faster. Good communication is good, within certain limits, and I think most comments fall within those limits.

I don't think any important information is lost there. I disagree with your assessment of communication practices on LessWrong.

2. I don't think we should react to differences in tone the way that we do. The fact that our community has different norms depending on whether or not you use certain tones is problematic. We should try to minimize the impact that things like tone have. Substantive issues ought to be a priority, and they ought to dominate to the point where things like tone barely matter at all.

3. Disclaimers discourage argumentative clash and take extra time to think of beforehand. Simply putting down a disclaimer allows you to marginalize issues that others might have with your post; it makes relevant criticism superficially appear less relevant. A better practice that we should cultivate is to simply concede things after those things are pointed out.

4. The mindset of lines of retreat seems to stem from the idea that arguments are soldiers meant to defend your social status. Mental lines of retreat might be good, but discursive ones are generally a way of avoiding responsibility.

5. Cross-apply my above response to your argument about tone.

You say that they are good social skills. I agree, given the social norms of this site. But I think those social norms are detrimental to cultivating rationality efficiently and so I want to go about changing the social norms of this site.

But it's exactly things like leaving lines of retreat and using a polite tone that allows them to be less personally involved and not get caught up in things like having to "defeat" their "opponent".

I don't think so. At best, we've just changed the nature of the game.

EDIT: Upon reflection, this last point is basically the essence of my criticism. We've just changed the game to make it more superficially rational, but that is more resource intensive and it masks the underlying mindsets that are bad instead of actually changing them.

Comment by chaosmosis on LessWrong podcasts · 2012-12-04T23:55:42.644Z · LW · GW

Though his comment might also be a sinister meta-signaling-signaling trolling :P

http://i.imgur.com/w6mIF.jpg

Comment by chaosmosis on LessWrong podcasts · 2012-12-04T23:49:49.325Z · LW · GW

People make verbose and lengthy comments instead of short and simple ones. People always speak in a certain type of tone, signalling that they are smart but also that they are Reasonable and they are listening to the points of their opponents. People lace their comments with subtle disclaimers and possible lines of retreat. People take care to use an apologetic tone.

I think some of this is a somewhat rational reaction to the amount of nitpicking that happens on this site, which is something that I'm also opposed to. But some of this exists on its own and it shouldn't.

I'd prefer it if we just got to the point and stated the argument as simply as possible. I don't know how to change the norms on this site and don't think any macro-action could do it. Individual people (no, no one specific) just need to relax and be less personally involved in the site or in the things they say and the arguments that they make.

Also, the karma system may or may not be exacerbating this behavior, I'm not sure.

Comment by chaosmosis on LessWrong podcasts · 2012-12-04T18:05:25.800Z · LW · GW

In my experience, the people on this site don't perceive signalling as wrong or useless, even when it's superficial. I do not understand why that's so, because I perceive most signalling as a waste of resources and think that cultivating a community which tried to minimize unnecessary signalling would be good.

Comment by chaosmosis on LessWrong podcasts · 2012-12-04T18:02:55.676Z · LW · GW

Making the Babyeaters/SuperHappy posts into an audio story might draw new people to the site.

Comment by chaosmosis on Causal Universes · 2012-12-04T18:01:33.056Z · LW · GW

If you're not interested in discussing the ethics of time travel, why did you respond to my comment which said

I don't understand why it's morally wrong to kill people if they're all simultaneously replaced with marginally different versions of themselves. Sure, they've ceased to exist. But without time traveling, you make it so that none of the marginally different versions exist. It seems like some kind of act omission distinction is creeping into your thought processes about time travel.

with

Because our morality is based on our experiential process. We see ourselves as the same person. Because of this, we want to be protected from violence in the future, even if the future person is not "really" the same as the present me.

It seems pretty clear that I was talking about time travel, and your comment could also be interpreted that way.

But, whatever.

Comment by chaosmosis on Causal Universes · 2012-12-04T03:08:59.793Z · LW · GW

I'm protecting someone over not-someone.

This ignores that insofar as going back in time kills currently existing people it also revives previously existing ones. You're ignoring the lives created by time travel.

Experientially, we view "me in 10 seconds" as the same as "me now." Because of this, the traditional arguments hold, at least to the extent that we believe that our impression of continuous living is not just a neat trick of our mind unconnected to reality. And if we don't believe this, we fail the rationality test in many more severe ways than not understanding morality. (Why would I not jump off buildings, just because future me will die?)

If you're defending some form of egoism, maybe time travel is wrong. From a utilitarian standpoint, preferring certain people just because of their causal origins makes no sense.

Comment by chaosmosis on Causal Universes · 2012-12-02T04:57:34.802Z · LW · GW

practically compute

Your argument is that it is hard and impractical, not that it is impossible, and I think that only the latter is a reasonable constraint on moral considerations. Even then, I have some qualms about whether nihilism would be more justified than arbitrary moral limits. I also don't understand how anthropic arguments might come into play.

Comment by chaosmosis on Causal Universes · 2012-12-01T18:59:52.928Z · LW · GW

Your argument makes no sense.

"Time travel is too improbable to worry about preserving yous affected by it. Given that, it makes sense to want to protect the existence of the unmodified future self over the modified one."

Those two sentences do not connect. They actually contradict.

Also, you're doing moral epistemology backwards, in my view. You're basically saying, "it would be really convenient if the content of morality were such that we could easily compute it using limited cognitive resources". That's an argumentum ad consequentiam, which is a logical fallacy.

Comment by chaosmosis on Intuitions Aren't Shared That Way · 2012-12-01T03:39:48.457Z · LW · GW

In fairness, there are potential issues here with signalling and culture. Although people might profess to believe X, in reality X just might be a more common type of cached knowledge, or X might be something that they say because they think it is socially useful, or, as a combination of those two, they might have conditioned themselves to believe in X. Or perhaps they interpret the meaning of "X" differently than others do, but really mean the same thing underneath.

I think there should be a distinction between types of intuitions, or at least two different poles on a broad spectrum. I think we should consider the extreme of one pole to be truly internalized knowledge that's an extremely core part of that person's personality, and the other extreme to be an extremely shallow belief that's produced by lazy introspection or by no introspection at all, just the automatic repetition of cultural memes.

I think that the first type of intuition would be extremely similar across people. I also think that the first type of intuition is what really matters and probably what controls our actions. I think the second type of intuition probably affects behavior to a limited degree, but I don't think that it would be all that significant. I think these things because humans cooperate with each other so easily, and because there are a great many concepts that translate easily across cultures. Even with some of the strangest-sounding foreign philosophies that I've encountered, I empathize a little, and I think that's because those philosophies have origins common to all people.

The fact that all humans are extremely biologically similar is also a big factor in my thought process.

Comment by chaosmosis on Causal Universes · 2012-12-01T03:29:31.485Z · LW · GW

"I think we need to arbitrarily limit something. Given that, this specific limit is not arbitrary."

How is that not equivalent to your argument?

Additionally, please explain more. I don't understand what you mean by saying that we "split ourselves too thinly". What is this splitting and why does it invalidate moral systems that do it? Also, overall, isn't your argument just a reason that considering alternatives to the status quo isn't moral?

Comment by chaosmosis on Causal Universes · 2012-11-30T01:57:10.935Z · LW · GW
  1. These experiences aren't undone. They are stopped. There is a difference. Something happy that happens, and then is over, still counts as a happy thing.

  2. You destroy valuable lives. You also create valuable lives. If creating things has as much value as maintaining them does, then the act of creative destruction is morally neutral. Since the only reasons that I can think of why maintaining lives might matter are also reasons that the existence of life is a good thing, I think that maintenance and creation are morally equal.

Comment by chaosmosis on Causal Universes · 2012-11-30T01:48:37.700Z · LW · GW

Why do you think that death is bad? Perhaps that would clarify this conversation. I personally can't think of a reason that death is bad except that it precludes having good experiences in life. Nonexistence does the exact same thing. So I think that they're rationally morally identical.

Of course, if you're using a naturalist-based intuitionist approach to morality, then you can recognize that it's illogical that you value existing persons more than potential ones, and yet still accept that those existing people really do have greater moral weight, simply because of the way you're built. This is roughly what I believe, and why I don't push very hard for large population increases.

Comment by chaosmosis on Causal Universes · 2012-11-30T01:38:41.688Z · LW · GW

Why protect one type of "you" over another type? Your response gives a reason that future people are valuable, but not that those future people are more valuable than other future people.

Comment by chaosmosis on Causal Universes · 2012-11-29T21:14:44.530Z · LW · GW

Sure. Again, this isn't relevant and isn't providing information that's new to me. People like Schopenhauer and Benatar might exist, but surely my overall point still stands. The focus on nitpicking is excessive and frustrating. I don't want to have to invest much time and effort into my comments on this site so that I can avoid allowing people to get distracted by side issues; I want to get my points across as efficiently as possible and without interruption.

Comment by chaosmosis on Causal Universes · 2012-11-29T06:29:25.665Z · LW · GW

We are talking about time travel and so this doesn't apply. Your comment is nitpicky for no good reason. I obviously recognize that consequentialists believe that more lives are better; I don't know why you felt an urge to tell me that. Your wording is also unnecessarily pedantic and inefficient.

Comment by chaosmosis on Causal Universes · 2012-11-28T19:12:27.451Z · LW · GW

I don't understand why it's morally wrong to kill people if they're all simultaneously replaced with marginally different versions of themselves. Sure, they've ceased to exist. But without time traveling, you make it so that none of the marginally different versions exist. It seems like some kind of act omission distinction is creeping into your thought processes about time travel.

Comment by chaosmosis on Overconfident Pessimism · 2012-11-27T18:42:05.947Z · LW · GW

It doesn't lead to any new insights. I can't generate any thoughts by pretending that it's now the future and that I'm looking back into the past. I don't know whether or not other people do somehow generate new thoughts this way. It sounds plausible while also sounding ridiculous, so I'm unsure whether or not it's legitimate.

Comment by chaosmosis on Overconfident Pessimism · 2012-11-27T00:06:03.648Z · LW · GW

Does anyone find this useful, personally? I've heard it as advice before, but it never helps me.