Comments

Comment by StanR on My concerns about the term 'rationalist' · 2009-06-05T09:16:47.081Z · LW · GW

I hope to reply to that post shortly, after giving it some thought.

Gris, bias against violence may be the reason it's hardly ever considered, but alternatively, avoiding violence may be not only a rational position but a strategically sensible one. Please consider looking at the literature on strategic nonviolence. The substantial body of work from the Albert Einstein Institution is good for understanding nonviolent strategy and tactics against regimes, and the insights it provides translate into courses of action in other conflicts as well.

Comment by StanR on Post Your Utility Function · 2009-06-04T07:31:55.182Z · LW · GW

The general premise in the mind sciences is that there are different selves, somehow coordinated through the cortical midline structures. Plenty of different terms have been used, and hypotheses suggested, but the two "selves" I use for shorthand come from Daniel Gilbert: Socrates and the dog. Socrates is the narrative self, the dog is the experiencing self. If you want something a bit more technical, I suggest the lectures about well-being (lecture 3) here, and to get really technical, this paper on cognitive science exploring the self.

Comment by StanR on A social norm against unjustified opinions? · 2009-06-01T02:08:47.028Z · LW · GW

Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped "being rational" to the top of my to-do list.

Voted down because your realization is flawed. Achieving anything does not require you to be rational, as evidenced by this post.

The Master of the Way treats people as straw dogs.

Your strategy of dealing with people is also flawed: does the Master of the Way always defect? If you were a skilled exploiter, you wouldn't give obvious signals that you are an exploiter. Instead, you seem to be signaling "Vote me off the island!" to society, and this community. You may want to reconsider that position.

Comment by StanR on A social norm against unjustified opinions? · 2009-06-01T01:39:44.903Z · LW · GW

I don't believe we are, because I know of no evidence of the following:

evolutionarily speaking, a big function of system 2 is to function as a decoy/shield mechanism for keeping ideas out of a person. And increasing a person's skill at system 2 reasoning just increases their resistance to ideas.

Perhaps one or both of us misunderstands the model. Here is a better description of the two.

Originally, I was making a case that attempting to reason was the wrong strategy. Given your interpretation, it looks like pjeby didn't understand I was suggesting that, and then suggested essentially the same thing.

My experience, across various believers (Christian, Jehovah's Witness, New Age woo-de-doo), is that system 2 is never engaged on the defensive, and the sort of rationalization we're talking about never uses it. Instead, they construct and explain rationalizations that are narratives. I claim this largely because I observed how "disruptable" they were during explanations--not very.

How to approach changing belief: avoid resistance by avoiding the issue and finding something at the periphery of belief. Assist in developing rational thinking where the person has no resistance, and empower them. Strategically, getting them to admit their mistake is not the goal. It's not even in the same ballpark. The goal is rational empowerment.

Part of the problem, which I know has been mentioned here before, is unfamiliarity with fallacies and what they imply. When we recognize fallacies, most of the time it's intuitive. We recognize a pattern likely to be a fallacy, and respond. We've built up that skill in our toolbox, but it's still intuitive, like a chess master who can walk by a board and say "white mates in three."

Comment by StanR on A social norm against unjustified opinions? · 2009-05-31T22:43:27.164Z · LW · GW

pjeby, sorry I wasn't clear; I should have given some context. I am referencing systems 1 and 2 as the simplified categories of thinking used by cognitive science, particularly in behavioral economics. Here's Daniel Kahneman discussing them. I'm not sure what you're referring to with decoys and shields, so I'll leave that at that.

To add to my quoted statement: workarounds are incredibly hard, and focusing on reasoning (system 2) about an issue or belief leaves few cycles for receiving and sending social cues and signals. We can still pick up those cues and signals while reasoning, but they break our concentration, so we tend to ignore them when reasoning carefully. The automatic, intuitive processing of the face interferes with the reasoning task; e.g., we usually look somewhere else when reasoning during a conversation. To execute a workaround strategy, however, we need to be attuned to the other person.

When I refer to belief, I'm not referring to fear of the dark or serial killers, or phobias. Those tend to be conditioned responses--the person knows the belief is irrational--and they can be treated easily enough with systematic desensitization and a little CBT thrown in for good measure. Calling them beliefs isn't wrong, but since the person usually knows they're irrational, they're outside my intended scope of discussion: beliefs that are perceived by the believer to be rational.

People are automatically resistant to being asked to question their beliefs. Usually it's perceived as unfair, if not an actual attack on them as a person: those beliefs are associated with their identity, which they won't abandon outright. We shouldn't expect them to. It's unrealistic.

What should we do, then? Play at the periphery of belief. To reformulate the interaction as a parable: We'll always lose if we act like the wind, trying to blow the cloak off the traveller. If we act like the sun, the traveller might remove his cloak on his own. I'll think about putting a post together on this.

Comment by StanR on A social norm against unjustified opinions? · 2009-05-31T09:27:51.095Z · LW · GW

It's not just nontrivial; it's incredibly hard. Engaging "system 2" reasoning takes a lot of effort, lowering both sensitivity and acute attention to social cues and signals.

The mindset of "let's analyze arguments to find weaknesses," aka Annoyance's "rational paths," is a completely different game from the one most people are willing to play. Rationalists may opt for that game, but they can't win, and may be reinforcing illogical behavior. Such a rationalist is focused on whether arguments about a particular topic are valid and sound, not on the other person's rational development. If the topic is a belief, attempting to reason it out with the person is counterproductive. Gaining no ground when engaging with people on a topic should be a red flag: "maybe I'm doing the wrong thing."

Does anyone care enough for me to make a post about workarounds? Maybe we can collaborate somehow, Adelene; I have a little experience in this area.

Comment by StanR on Image vs. Impact: Can public commitment be counterproductive for achievement? · 2009-05-31T07:32:24.731Z · LW · GW

I was part of a meetup on "alternative energy" (to see if actual engineers went to the things--I didn't want to date a solar cell) when I got an all-group email from the group founder about an "event" concerning The Secret and a great opportunity to make money. Turned out it was a "green" multi-level marketing scam he was deep in, and they were combining it with The Secret. Being naive, at first I said I didn't think the event was appropriate, assuming it might lead to some discussion. He immediately slandered me to the group, but I managed to send out an email detailing his connections to the scam before I was banned from the group. I did get a thank you from one of the members, at least.

I looked through meetup and found many others connected to him. Their basic routine involves paying the meetup group startup cost, having a few semi-legit meetings, and then using their meetup group as a captive audience.

I admit, I was surprised. I know it's not big news, but the new social web has plenty of new social scammers, and they're running interference. It's hard to get a strong, clear message out when opportunists know how to capitalize on easy money: people wanting to feel and signal like they're doing something. I honestly don't think seasteading can even touch that audience, but then again, I'm not sure you'd want to.

Comment by StanR on Bayesians vs. Barbarians · 2009-04-17T04:00:55.523Z · LW · GW

Having had to explain to other sci-fi lovers in the past why using fiction as a counterargument is so silly, I googled to see if people had written about why it's silly. GUESS WHAT I FOUND?

The Logical Fallacy of Generalization from Fictional Evidence, by Eliezer Yudkowsky: http://www.overcomingbias.com/2007/10/fictional-evide.html

Yeah, my jaw dropped when I found that. I'm sure you won't respond to this, as the mass of LW moves on to the most current post, but really? Was this a self-aware joke? Eliezer 2009 is that much less rational than Eliezer 2007?

Comment by StanR on Tell it to someone who doesn't care · 2009-04-16T04:14:32.558Z · LW · GW

And I said rational atheism, not atheism.

Granted, I didn't express my thoughts on that clearly. I think there is a fundamental difference between attempting to get someone to agree with your opinions and helping them develop rationality--likely through both similar and different opinions. I think the latter is a better moral play, and it's genuine.

What is the higher priority result for a rational atheist targeting a theist:

  • a more rational theist
  • a not-any-more-rational-than-before atheist
  • an irrational agnostic

I think the biggest "win" of the results is the first. But groups claiming to favor rationality most still get stuck on opinions all the time (cryonics comes to mind). Groups tend to congregate around opinions, and forcing that opinion becomes the group agenda, even though they believe it's the right and rational opinion to have. It's hard, because a group that shares few opinions is hardly a group at all, but the sharing of opinions and exclusion of those with different opinions works against an agenda of rationality. And shouldn't that be the agenda of a rational atheist?

I think the "people who don't care" of your 1st theory are either 1) unimportant or 2) don't exist, depending on the meaning of the phrase.

I think theory 2 makes a fatal mistake: it emphasizes the cart (opinion) rather than the horse (rational thinking). I'm willing to grant they're not so separate and cut-and-dried, but I wanted to illustrate the distinction I see.

Comment by StanR on Bayesians vs. Barbarians · 2009-04-16T03:12:31.232Z · LW · GW

So, according to Freedom House, countries with nonviolent revolutions since the late 1990s are improving. There's not a lot of data beforehand. You named the exception: Georgia's gotten a little worse since the overthrow of the "rigged" election there. Look at the data: http://www.freedomhouse.org/template.cfm?page=42&year=2008

I'm willing to admit I might have some Western bias, but I try to catch it. The general consensus does seem to be that the elections were rigged, but I don't know enough to say with much confidence either way.

In my original post, I was referring to the period of actual revolution, not everything since. I know it's not all sunshine and rainbows. Reality is gritty, nasty stuff. Nonviolent struggle strategy and tactics guarantee neither success nor democracy--but neither do violent methods.

If we're discussing strategies and tactics, most nonviolent movements do not plan much past overthrow. That's bad, but again, no worse than violent overthrow.

These are big and fuzzy concepts, for sure. When does a revolution actually end? If a less or equally undemocratic leader is elected, is that a failure of nonviolent struggle, a failure of planning, a failure of the people, or what? Are Freedom House's metrics valid or consistent? I don't have good answers.

If you were to wager on whether strategic nonviolent or strategic violent struggles in the modern day were more likely to lead toward a successful overthrow, how would you bet? What about leading toward more democratic overthrows (i.e. elections)?

Comment by StanR on Bayesians vs. Barbarians · 2009-04-16T02:00:33.086Z · LW · GW

dclayh, Yes, that came to mind for me too. The small-town Gandhian libertarianism of Russell's story is entertaining, and just as silly. Yet, you didn't receive any karma points, and Eliezer received several, so either someone out there thinks a fictional short story is a reasonable rebuttal, or people are scoring for support of a side or entertainment.

Eliezer, I don't see how Russell or Turtledove even belong as anything more than footnotes, unless the discussion is about fiction writers creating alternate universe just-so stories that tend to align with their ideologies. I didn't think Less Wrong, of all places, would be where I'd have to insist that short story fiction is not adequate or reasonable evidence, or any sort of rebuttal, against real world claims or case studies.

Please try actually reading Sharp. He's not Gandhi. Neither is Robert Helvey--he's a retired US colonel.

Comment by StanR on Bayesians vs. Barbarians · 2009-04-15T08:31:04.544Z · LW · GW

My reaction is similar to Nanani's: this is awful.

How are wars, armies, soldiers, and all the trappings relevant to actually practiced rationality? Smart thinking trapped in a stupid metaphor is still stupid, right?

How does one classify these two armies? What IQ, measure of rationality, or other characteristic separates the two sides?

I'd suggest the works of Gene Sharp (http://en.wikipedia.org/wiki/Gene_Sharp) at http://www.aeinstein.org/ as he and his associates are behind some of the few successful (i.e. towards democracy) revolutions since the Cold War.

Comment by StanR on Tell it to someone who doesn't care · 2009-04-15T06:13:42.807Z · LW · GW

I think your secondary purpose is actually the primary purpose, excluding sponsors, who, I agree, usually set up the debate for entertainment.

Even if both sides claim that changing minds is the purpose, the actions show otherwise. The "change minds" or "reveal the truth" framing is a convenient lie, and one that's actually believed. Plus, it would be tacky and uncivilized to state the real reason for the debate; best to claim a more noble imperative--and believe it.

Depending on how polarized the sides are, the audience is going there either mostly or entirely to watch a fight and root for their team. Although the audience may respect the other side if they play well, they're rooting for their side to get in some choice jabs, resulting in a KO. I don't think, leading up to the event, any side of a debate actually says "this will really change some minds!" No, it's usually "we're going to show them why we're right and they're wrong," or some more sophisticated equivalent of "it's beat-down time."

I will grant that if one side comes off as incredibly foolish, some may abandon it, but how often does that happen? Betting on a side already makes a person more confident of that choice being the right one.

Dawkins and other aggressive-in-that-way atheists irritate me: they're being very irrational about either their purpose or their approach. If they want more rational atheists, their chosen methods are very poor. If they just want to take pride in being right and rally their base, they shouldn't keep up a pretense of spreading rational atheism. They want both, but they can't have both.

I want to make a longer and more focused reply on what I see as the core questions: how can we change minds, and how should we? I've been wanting to tackle it here for a while, but I have trouble keeping up with this site.

Comment by StanR on Playing Video Games In Shuffle Mode · 2009-03-24T07:52:34.996Z · LW · GW

Absolutely, linking really improves the resource.

A link for each major claim or background topic would be much appreciated. Sometimes I wonder if there shouldn't be an original post layer, also containing the comments, and a wiki-ish layer, that could provide more links and notations. That way, an entrant could dig deeper, and regulars could participate in bridging those gaps, but could also continue in the original post layer without the wiki-ish clutter.

Learning through participation is a problem when a post generates 30+ comments, some of which are asking for "beginner" clarifications of known-by-regulars concepts. I think it's better to include the beginners in grappling with the concepts and attempting to build the course for themselves and others. Isn't it a bit odd that so many learn by following their interest, filling in gaps as needed, and yet later on those very same people will attempt to teach others using a more linear method?

Looking back on something I know, I see the map of knowledge, and think "Ah, I might have been better off had I learned these foundational basics first." The click-and-pursue nature of the web seems stacked against that method, and really, would I have been better off? Maybe I wouldn't have pursued as much as I did if I couldn't choose my own path.

If a wiki-ish layer is too crazy, maybe fundamental concepts should be more present in the tags, or be a separate little part? Related links and fundamental concepts relating to the post could be voted up or down. I could vote up a "hindsight bias" tag while downvoting a smartass "jedi" tag, and propose or vote up a link to another post that explored something (say, hindsight bias) in more detail.
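To make the tag-and-link voting idea a bit more concrete, here is a minimal sketch in Python of how such a votable annotation layer on a post might be modeled. It is purely hypothetical: the class names and fields are illustrative assumptions, not part of any actual LessWrong code.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A tag or related link attached to a post, with its own vote score."""
    label: str          # e.g. "hindsight bias", or a URL to a related post
    kind: str           # "tag" or "link"
    score: int = 0      # net votes from readers

    def vote(self, up: bool) -> None:
        self.score += 1 if up else -1

@dataclass
class AnnotatedPost:
    """The wiki-ish layer: a post plus reader-curated annotations."""
    title: str
    annotations: list[Annotation] = field(default_factory=list)

    def ranked(self) -> list[Annotation]:
        # Highest-scoring tags and links surface first; heavily
        # downvoted ones (like a smartass "jedi" tag) sink.
        return sorted(self.annotations, key=lambda a: a.score, reverse=True)

# Example: vote up a "hindsight bias" tag, downvote a "jedi" tag.
post = AnnotatedPost("Bayesians vs. Barbarians")
hindsight = Annotation("hindsight bias", "tag")
jedi = Annotation("jedi", "tag")
post.annotations += [hindsight, jedi]
hindsight.vote(up=True)
jedi.vote(up=False)
print([a.label for a in post.ranked()])  # ['hindsight bias', 'jedi']
```

The design choice here is just the one described above: the annotation layer sits beside the original post rather than inside it, so regulars can curate links and fundamental concepts without cluttering the post or its comments.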