A Sense That More Is Possible
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T01:15:30.208Z
To teach people about a topic you've labeled "rationality", it helps for them to be interested in "rationality". (There are less direct ways to teach people how to attain the map that reflects the territory, or optimize reality according to their values; but the explicit method is the course I tend to take.)
And when people explain why they're not interested in rationality, one of the most commonly proffered reasons runs something like: "Oh, I've known a couple of rational people and they didn't seem any happier."
Who are they thinking of? Probably an Objectivist or some such. Maybe someone they know who's an ordinary scientist. Or an ordinary atheist.
That's really not a whole lot of rationality, as I have previously said.
Even if you limit yourself to people who can derive Bayes's Theorem—which is going to eliminate, what, 98% of the above personnel?—that's still not a whole lot of rationality. I mean, it's a pretty basic theorem.
Since the beginning I've had a sense that there ought to be some discipline of cognition, some art of thinking, the studying of which would make its students visibly more competent, more formidable: the equivalent of Taking a Level in Awesome.
But when I look around me in the real world, I don't see that. Sometimes I see a hint, an echo, of what I think should be possible, when I read the writings of folks like Robyn Dawes, Daniel Gilbert, Tooby & Cosmides. A few very rare and very senior researchers in psychological sciences, who visibly care a lot about rationality—to the point, I suspect, of making their colleagues feel uncomfortable, because it's not cool to care that much. I can see that they've found a rhythm, a unity that begins to pervade their arguments—
Yet even that... isn't really a whole lot of rationality either.
Even among those few who impress me with a hint of dawning formidability—I don't think that their mastery of rationality could compare to, say, John Conway's mastery of math. The base knowledge that we drew upon to build our understanding—if you extracted only the parts we used, and not everything we had to study to find it—is probably not comparable to what a professional nuclear engineer knows about nuclear engineering. It may not even be comparable to what a construction engineer knows about bridges. We practice our skills, we do, in the ad-hoc ways we taught ourselves; but that practice probably doesn't compare to the training regimen an Olympic runner goes through, or maybe even an ordinary professional tennis player.
And the root of this problem, I do suspect, is that we haven't really gotten together and systematized our skills. We've had to create all of this for ourselves, ad-hoc, and there's a limit to how much one mind can do, even if it can manage to draw upon work done in outside fields.
The chief obstacle to doing this the way it really should be done, is the difficulty of testing the results of rationality training programs, so you can have evidence-based training methods. I will write more about this, because I think that recognizing successful training and distinguishing it from failure is the essential, blocking obstacle.
There are experiments done now and again on debiasing interventions for particular biases, but it tends to be something like, "Make the students practice this for an hour, then test them two weeks later." Not, "Run half the signups through version A of the three-month summer training program, and half through version B, and survey them five years later." You can see, here, the implied amount of effort that I think would go into a training program for people who were Really Serious about rationality, as opposed to the attitude of taking Casual Potshots That Require Like An Hour Of Effort Or Something.
Daniel Burfoot brilliantly suggests that this is why intelligence seems to be such a big factor in rationality—that when you're improvising everything ad-hoc with very little training or systematic practice, intelligence ends up being the most important factor in what's left.
Why aren't "rationalists" surrounded by a visible aura of formidability? Why aren't they found at the top level of every elite selected on any basis that has anything to do with thought? Why do most "rationalists" just seem like ordinary people, perhaps of moderately above-average intelligence, with one more hobbyhorse to ride?
Of this there are several answers; but one of them, surely, is that they have received far less systematic training in rationality than a first-dan black belt receives in hitting people.
I do not except myself from this criticism. I am no beisutsukai, because there are limits to how much Art you can create on your own, and how well you can guess without evidence-based statistics on the results. I know about a single use of rationality, which might be termed "reduction of confusing cognitions". This I asked of my brain, this it has given me. There are other arts, I think, that a mature rationality training program would not neglect to teach, which would make me stronger and happier and more effective—if I could just go through a standardized training program using the cream of teaching methods experimentally demonstrated to be effective. But the kind of tremendous, focused effort that I put into creating my single sub-art of rationality from scratch—my life doesn't have room for more than one of those.
I consider myself something more than a first-dan black belt, and less. I can punch through brick and I'm working on steel along my way to adamantine, but I have a mere casual street-fighter's grasp of how to kick or throw or block.
Why are there schools of martial arts, but not rationality dojos? (This was the first question I asked in my first blog post.) Is it more important to hit people than to think?
No, but it's easier to verify when you have hit someone. That's part of it, a highly central part.
But maybe even more importantly—there are people out there who want to hit, and who have the idea that there ought to be a systematic art of hitting that makes you into a visibly more formidable fighter, with a speed and grace and strength beyond the struggles of the unpracticed. So they go to a school that promises to teach that. And that school exists because, long ago, some people had the sense that more was possible. And they got together and shared their techniques and practiced and formalized and practiced and developed the Systematic Art of Hitting. They pushed themselves that far because they thought they should be awesome and they were willing to put their backs into it.
Now—they got somewhere with that aspiration, unlike a thousand other aspirations of awesomeness that failed, because they could tell when they had hit someone; and the schools competed against each other regularly in realistic contests with clearly-defined winners.
But before even that—there was first the aspiration, the wish to become stronger, a sense that more was possible. A vision of a speed and grace and strength that they did not already possess, but could possess, if they were willing to put in a lot of work, that drove them to systematize and train and test.
Why don't we have an Art of Rationality?
Third, because current "rationalists" have trouble working in groups: of this I shall speak more.
Second, because it is hard to verify success in training, or which of two schools is the stronger.
But first, because people lack the sense that rationality is something that should be systematized and trained and tested like a martial art, that should have as much knowledge behind it as nuclear engineering, whose superstars should practice as hard as chess grandmasters, whose successful practitioners should be surrounded by an evident aura of awesome.
And conversely they don't look at the lack of visibly greater formidability, and say, "We must be doing something wrong."
"Rationality" just seems like one more hobby or hobbyhorse, that people talk about at parties; an adopted mode of conversational attire with few or no real consequences; and it doesn't seem like there's anything wrong about that, either.
Comments sorted by top scores.
comment by Scott Alexander (Yvain) · 2009-03-13T02:20:39.923Z
Eliezer, I have recommended to you before that you read The Darkness That Comes Before and the associated trilogy. I repeat that recommendation now. The monastery of Ishual is your rationalist dojo, and Anasurimbor Kellhus is your beisutsukai surrounded by a visible aura of formidability. The book might even give you an idea or two.
My only worry with the idea of these dojos is that I doubt the difference between us and Anasurimbor Kellhus is primarily a difference in rationality levels. I think it is more likely to be akrasia. Even an irrational, downright stupid person can probably think of fifty ways to improve his life, most of which will work very well if he only does them (quit smoking, quit drinking, study harder in school, go on a diet). And a lot of the people I know with pretty well-developed senses of rationality don't use them for anything more interesting than winning debates about abortion or something. Maybe the reason rationalists rarely do much better than anyone else is that they're not actually using all that extra brainpower they develop. The solution to that isn't more brainpower.
Kellhus was able to sit down, enter the probability trance, decide on the best course of action for the immediate future, and just go do it. When I tried this, I never found the problem was in the deciding - it doesn't take a formal probability trance to chart a path through everyday life - it was in following the results. Among the few Kellhus-worthy stories I've ever heard from reality was you deciding the Singularity was the most important project, choosing to devote your life to it, and not having lost that resolve fifteen years later. If you could bottle that virtue, it would be worth more than the entire Bayesian corpus combined. I don't doubt that it's positively correlated with rationality, but I do doubt it's a 1 or even .5 correlation.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T17:05:34.055Z
I think the akrasia you describe and methods of combating it would come under the heading of "kicking", as opposed to the "punching" I've been talking about. It's an art I haven't created or learned, but it's an art that should exist.
↑ comment by AnnaSalamon · 2009-03-13T19:06:19.375Z
This "art of kicking" is what pjeby has been working toward, AFAICT. I haven't read much of his writing, though. But an "art of kicking" would be a great thing to mix in with the OB/LW corpus, if pjeby has something that works, which I think he has at least some of -- and if we and he can figure out how to hybridize kicking research and training with punching research and training.
I'd also love to bring in more people from the entrepreneurship/sales/marketing communities. I've been looking at some of their better literature, and it has rationality techniques (techniques for not shooting yourself in the foot by wishful thinking, overconfidence, etc.) and get-things-done techniques mixed together. I love the sit-and-think math nerd types too, and we need sitting and thinking; the world is full of people taking action toward the wrong goals. But I'd expect better results from our rationalist community if we mixed in more people whose natural impulses were toward active experiments and short-term visible results.
↑ comment by Scott Alexander (Yvain) · 2009-03-13T23:04:30.461Z
Pjeby's working on akrasia? I'll have to check out his site.
That brings up a related question that I think Eliezer hinted at: what pre-existing bodies of knowledge can we search through for powerful techniques so that we don't have to re-invent the wheel? Entrepreneurship stuff is one. Lots of people have brought up pick-up artists and poker, so those might be others.
I nominate a fourth that may be controversial: mysticism. Not the "summon demons" style of mysticism, but yoga and Zen and related practices. These people have been learning how to examine/quiet/rearrange their minds and sort out the useful processes from the useless processes for the past three thousand years. Even if they've been working off crazy metaphysics, it'd be surprising if they didn't come up with something. Eliezer talks in mystical language sometimes, but I don't know whether that's because he's studied and approves of mysticism or just likes the feel of it.
What all of these things need is a testing process, combined with people who are already high-level enough to sort through all the dross and determine which techniques are useful, without going native or opening themselves up to the accusation that they're doing so; i.e., people who can sort through the mystical/pick-up-artist/whatever literature and separate out the things that are useful to rationalists from the things specific to a worldview hostile to our own. I've seen a few good people try this, but it's a mental minefield and they tend to end up "going native".
↑ comment by HughRistik · 2009-03-14T02:40:21.701Z
In the case of pickup literature, there is a lot to attract rationalists, but also a lot to inspire their ire.
The first thing rationalists should notice about pickup is that it wins. There are no other resources in mainstream culture or psychology that are anywhere near as effective. Yet even after witnessing the striking ability of pickup theories to win, I am hesitant to say that they are actually true. For example, I acknowledge the fantastic success of notions like "women are attracted to Alpha Males," even though I don't believe that they are literally true, and I know that they are oversimplifications of evolutionary psychology. Consequently, I am an instrumentalist, not a realist, about pickup theories.
If we started a project from scratch where we applied rationality to the domain of sex and relationships, and developed heuristics to improve ourselves in those areas, this project would have a considerable overlap with the teachings of the seduction community. At its best, pickup is "applied evolutionary psychology." Many of the common criticisms of pickup demonstrate an anger against the use of rationality and scientific thinking in the supposedly sacred and mystical area of sex and romance. Yet it falls prey to certain ideological notions that limit its general innovativeness and empirical exploration, and some of its techniques are morally questionable.
I would be happy to say more on the relationship between pickup and rationality at some point, and you can tell me how much I've "gone native."
↑ comment by wedrifid · 2011-04-07T18:02:01.697Z
For example, I acknowledge the fantastic success of notions like "women are attracted to Alpha Males," even though I don't believe that they are literally true, and I know that they are oversimplifications of evolutionary psychology.
I tune out whenever I hear the term 'alpha male' in that sort of context. The original scientific concept has been butchered and abused beyond all recognition. Even more so the 'beta' concept. Beta males are the ones standing right behind the alpha, ready to overthrow him and take control themselves. 'Omega' should be the synonym for 'pussy'.
But I must admit the theory is at least vaguely in the right direction, and it works. Reasonably good as popular science for the general public. Better than what people believe about diet, showering, and dental hygiene.
↑ comment by taryneast · 2011-04-07T11:51:22.093Z
Many of the common criticisms of pickup demonstrate an anger against the use of rationality and scientific thinking in the supposedly sacred and mystical area of sex and romance.
Actually, the best (and most common) criticisms I see are more due to the use of lies and manipulation in the area of sex and romance.
The evo-psych stuff (and thereby any science and rationality) is perfectly fine by me.
↑ comment by Vaniver · 2011-04-07T12:22:50.715Z
This seems to me like criticizing the presence of lies in humor: that is, it's something normal and acceptable in practice but unsettling in theory.
↑ comment by CuSithBell · 2011-04-07T15:42:10.428Z
We disagree.
You seem to be suggesting that lies and manipulation in pickup serve to lead the target to a desirable outcome they would not deliberately choose, as in humor. I and many others have repeatedly asserted here that this is not the case. There are pickup techniques that are simply not acceptable - attacking self-esteem, manufacturing breakups, etc.
You (collectively) need to abandon this soldier.
↑ comment by wedrifid · 2011-04-07T17:30:35.847Z
You seem to be suggesting that lies and manipulation in pickup serve to lead the target to a desirable outcome they would not deliberately choose, as in humor. I and many others have repeatedly asserted here that this is not the case.
I assume you mean to include 'all' in there. Some pickup practitioners (and pickup strategies) do use lies and manipulation without consideration of whether the outcome is desirable (and the means appropriate.) That is a legitimate concern. It would certainly not be reasonable to assert this is the norm, which you didn't make clear in your declaration of repeated assertion.
There are pickup techniques that are simply not acceptable - attacking self-esteem
Here it is important to beware of other-optimising. For the average Joe and Jane, a courtship protocol that involves attacking each other's self-esteem would just be obnoxious and unpleasant. So I wouldn't 'accept', in that sense, self-esteem-lowering tactics directed at that kind of target. Yet for particularly high-status folks within that kind of social game, self-esteem attacks are just how it is played - by both sexes. They attack the heck out of each other with social weapons to assure each other that they have the social prowess to handle each other. And they both love every minute of it. Of course, even if you take away 90% of their self-esteem they probably still have more than enough left!
The biggest problem with self-esteem attacking as a strategy comes when clumsy PUAs try to use a tactic that is appropriate for 10s on 6s and 7s (in terms of approximate rank in the dating social hierarchy). That is just unpleasant (not to mention ineffective). A related problem is confusing a gender-atypical girl with a gender-typical girl (often due to complete ignorance of the possibility of that kind of difference). Again, that will be unpleasant for the target in question - instead of exactly what she needs to facilitate a satisfying sexual encounter.
Rather than being 'simply not acceptable', pickup techniques that involve attacking self esteem are complexly not acceptable, depending on the context and parties involved.
manufacturing breakups
I am comfortable in labelling individuals who do this as assholes and do anything possible to keep them out of my social circle and generally undermine their status.
You (collectively) need to abandon this soldier.
You collectively? Exactly which collective are you referring to here? It would be reasonable to level the gist of your objection at Vaniver - or at least his specific comment here. But if you mean to level it at the ancestor (by HughRistik) then you are totally missing the mark.
The biggest opportunity to improve discourse on these kinds of subjects - and to actually benefit those participating in the dating game - is to abandon judgements on collectives.
↑ comment by CuSithBell · 2011-04-07T18:31:51.987Z
I assume you mean to include 'all' in there. Some pickup practitioners (and pickup strategies) do use lies and manipulation without consideration of whether the outcome is desirable (and the means appropriate.) That is a legitimate concern. It would certainly not be reasonable to assert this is the norm, which you didn't make clear in your declaration of repeated assertion.
In context, I was responding to a generalization with a counter based on exceptions to a proposed rule. I agree there is variety within the pickup community. I disagree that it is uniformly a force for good - and thus that opposition to it is based on dislike for science.
Here it is important to beware of other-optimising. For the average Joe and Jane, a courtship protocol that involves attacking each other's self-esteem would just be obnoxious and unpleasant. [...]
You're right. I meant to indicate the case of attacking someone's self-esteem in order to make them feel bad (and become pliable), rather than to engage them in a duel of wits.
You collectively? Exactly which collective are you referring to here?
The posters on lesswrong who claim that opposition to pickup on lesswrong is due to women being uncomfortable with explicit analysis of social reality, or (relatedly) that pickup is a uniformly altruistic enterprise (wrt sexual partners).
It's only a judgment on a collective because it's a judgment on a position, and the collective is people who hold that position.
↑ comment by wedrifid · 2011-04-07T19:15:01.041Z
You're right. I meant to indicate the case of attacking someone's self-esteem in order to make them feel bad (and become pliable), rather than to engage them in a duel of wits.
No, I don't mean duels of wits in that sense. I really do refer to the case of attacking someone's self-esteem to make them become pliable. Not bad per se (that doesn't help), but less secure and less confident; in general, that which lowers self-esteem. The judgement you make of all instances of that behaviour is actually narrow-minded, inasmuch as enforcing the judgement would worsen the life experiences of a whole class of people. And I do not refer to a class denominated by sex.
↑ comment by CuSithBell · 2011-04-07T19:25:52.860Z
I am expressing myself poorly, I think. I believe I am familiar with the type of interaction you are describing, and agree that it is not 'bad'.
↑ comment by wedrifid · 2011-04-07T19:23:04.968Z
The posters on lesswrong who claim that opposition to pickup on lesswrong is due to women being uncomfortable with explicit analysis of social reality,
or (relatedly) that pickup is a uniformly altruistic enterprise (wrt sexual partners).
Everyone who does make the claim that pickup is uniformly altruistic is clearly and obviously mistaken. And can look forward to a world of disappointment when they realise their fairytale ideas about romance are absurdly naive. Most people learn the hard way during their teens. (Although nerds tend to take longer on average.)
↑ comment by cousin_it · 2011-04-07T16:14:50.669Z
I'm perfectly willing to abandon this soldier, because I defy the premise that makes it necessary. The goal of pickup is to engineer the most desirable outcome for the user of pickup, not the most desirable outcome for the victim. If I wanted to make women feel better, I'd just buy them flowers instead of doing pickup.
↑ comment by HughRistik · 2011-04-08T08:08:41.728Z
After asking that cousin_it abandon charged words like "victim", which I suspect he is just using for shock value, I am going to rewrite his statement and examine it seriously:
The goal of pickup is to engineer the most desirable outcome for the user of pickup, not the most desirable outcome for the other participant.
On the face of it, this statement might make pickup sound zero-sum, but that's not the only interpretation. Pickup is about attempting to bring about the most desirable outcome for the user of pickup, yes, but that doesn't mean that it creates an undesirable outcome for the other person (from their perspective). I would propose a slightly altered summary:
"The goal of pickup is to engineer the most desirable outcome for the user of pickup, without harming the other participant."
You have a comparative advantage for advocating for your own preferences. Social interaction (of which sexuality is only a subset) works best when people advocate for their own preferences, attempting to align others' preferences with theirs, and without harming others.
Of course, this process is bilateral (which is why I changed "victim" to "participant"), so both participants are actually trying to engineer the outcome towards their preferences at the same time (and also engineer each other's preferences to align with theirs!). With two people of similar ability, the result will be some sort of intersection or union of their preferences.
But this compromise only comes about when both people mainly advocate for their own preferences. Sexuality and romance are a form of negotiation. Pickup teaches negotiation skills, but it is hardly the only source of them. Many people already have sexual negotiation skills, and certain segments of men may be deficient, which is why pickup is necessary for them.
So yes, the goal of pickup is to advance towards your most desirable outcome... and if you are a decent person, then your most desirable outcome won't include absolutely trampling over the other person if they are a crappy negotiator and can't handle you. Simultaneously, the other person's goal is to advance towards their most desirable outcome.
Unfortunately, the cultural bias towards narratives of villainous men abusing damsels in distress makes male sexual negotiation skills seem a lot more suspect than women's. As I pointed out recently, nobody worries about innocent, insecure beginner PUAs getting used by women for sex and validation... thanks to the unwarranted assumption that PUAs are so far ahead of women in negotiation skill that they are performing some kind of black magic or mind control on women.
If I wanted to make women feel better, I'd just buy them flowers instead of doing pickup.
Personally, I find it much easier to make women feel good through pickup than through flowers.
↑ comment by cousin_it · 2011-04-08T09:25:32.127Z
Social interaction (of which sexuality is only a subset) works best when people advocate for their own preferences, attempting to align others' preferences with theirs, and without harming others.
This is exactly the kind of argument that I wanted to shoot down.
IMO we shouldn't have a norm of requiring people to give altruistic justifications whenever they discuss better ways of maximizing their own utility function, even if that utility function may be repugnant to some. Discussions of morality (ends) should not intrude on discussions of rationality (means), especially not here on LW! If you allow a field to develop its instrumental rationality for a while without moralists sticking their noses in, you get something awesome like Schelling, or PUA, or pretty butterflies. If you get stuck discussing morals, you get... nothing much.
↑ comment by komponisto · 2011-04-08T18:30:29.144Z
If you allow a field to develop its instrumental rationality for a while without moralists sticking their noses in, you get something awesome like Schelling, or PUA, or pretty butterflies. If you get stuck discussing morals, you get... nothing much.
You may be on to something here; this may be a very useful heuristic against which to check our moral intuitions.
On the other hand, one still has to be careful: you probably wouldn't want to encourage people to refine the art of taking over a country as a genocidal dictator, for example.
↑ comment by wedrifid · 2011-04-08T18:37:34.422Z
On the other hand, one still has to be careful: you probably wouldn't want to encourage people to refine the art of taking over a country as a genocidal dictator, for example.
Although it is interesting to study in theory. For example, in the Art of War, Laws of Power, history itself or computer simulations. Just so long as it doesn't involve much real world experimentation. :)
↑ comment by Marius · 2011-04-08T19:04:05.027Z
Just so long as it doesn't involve much real world experimentation. :)
But this is the fundamental problem: you don't want to let the theory in any field get too far ahead of the real world experimentation. If it does, it makes it harder for the people who eventually do good (and ethical) research to have their work integrated properly into the knowledge. And knowledge that is not based on research is likely to be false. So an important question in any field should be "is there some portion of this that can be studied ethically?" If we "develop its instrumental rationality for a while without moralists sticking their noses in", we run the risk of letting theories run wild without sufficient evidence [evo-psych, I'm looking at you] or of relying on unethically-obtained (and therefore less-trustworthy) evidence.
↑ comment by cousin_it · 2011-04-08T20:15:29.981Z
"Unethically obtained evidence is less trustworthy" is the wrongest thing I've heard in this whole discussion :-)
↑ comment by Marius · 2011-04-08T20:24:08.057Z
How so? When scientists perform studies, they can sometimes benefit (money, job, or simply reputation) by inventing data or otherwise skipping steps in their research. At other times, they can benefit by declining to publish a result. A scientist who is willing to violate certain ethical principles (lying, cheating, etc.) is surely more willing to act unethically in publishing (or declining to publish) their studies.
↑ comment by thomblake · 2011-04-08T14:55:34.796Z
I agree with this in the abstract, but in all particular situations the 'morality' is part of the content of the 'utility function' so is directly relevant to whether something really is a better way of maximizing the utility function.
If you're talking about behaviors, morality is relevant.
↑ comment by cousin_it · 2011-04-08T15:50:08.217Z
I agree with this in the abstract, but if you adopt the view that morality is already factored into your utility function (as I do), then you probably don't need to pay attention when other people say your behavior is immoral (as many critics of PUA here do). I think when Alice calls Bob's behavior immoral, she's not setting out to help Bob maximize his utility function more effectively, she's trying to enforce a perceived social contract or just score points.
↑ comment by Vladimir_Nesov · 2011-04-08T15:56:18.407Z
if you adopt the view that morality is already factored into your utility function
(You are not necessarily able to intuitively feel what your "utility function" specifies, and moral arguments can point out to you that you are not paying attention, for example, to its terms that refer to experience of specific other people.)
↑ comment by thomblake · 2011-04-08T17:59:02.599Z
I disagree, especially here on LW! When user-Bob tells user-Alice that her behavior is immoral, he's probably setting out to help her maximize her utility function more effectively.
Or at least, that's why I do it. A virtue is a trait of character that is good for the person who has it.
ETA: Otherwise, the argument is fully general. For humanity in general, when Alice says x to Bob, she is trying to enforce a perceived social contract, or score points, or signal tribal affiliation. So, you shouldn't listen to anybody about anything w.r.t. becoming more instrumentally effective. And that seems obviously wrong, at least here.
Replies from: wedrifid↑ comment by wedrifid · 2011-04-08T18:42:32.784Z · LW(p) · GW(p)
I disagree, especially here on LW! When user-Bob tells user-Alice that her behavior is immoral, he's probably setting out to help her maximize her utility function more effectively.
My historical observations do not support this prediction.
Replies from: thomblake↑ comment by thomblake · 2011-04-08T18:52:27.558Z · LW(p) · GW(p)
I submit that if I say, "you should x", and it is not the case that "x is rational", then I'm doing something wrong. Your putative observations should have been associated with downvotes, and the charitable interpretation remains that comments here are in support of rationality.
↑ comment by CuSithBell · 2011-04-07T16:54:04.246Z · LW(p) · GW(p)
Sure, if you're willing to hurt people non-consensually to obtain sexual favors from them, then you're not part of this argument. I was responding to the notion that pickup is uniformly non-harmful, and that opposition to it is based on a fear of rationality or science or whatever. Essentially, I was arguing that your position is common.
Replies from: cousin_it↑ comment by cousin_it · 2011-04-07T17:12:57.998Z · LW(p) · GW(p)
"Hurting people non-consensually" is an awfully low bar. For example, if you dump someone, you're hurting them non-consensually.
At this point you may try to invent some deontological rule that would say that hurting people is okay in some contexts but not okay in others. If you're especially honest, your rule will even have equal real-life impact on men and women, though it seems to be really hard to achieve. But let's look at the bigger picture instead. Is there any strategy of behavior in love-related matters that is "uniformly non-harmful"?
Replies from: CuSithBell, taryneast↑ comment by CuSithBell · 2011-04-07T18:29:11.810Z · LW(p) · GW(p)
That's not a very charitable interpretation.
Anyway, we're actually arguing for the same thing - the pickup community is not composed of altruists (with regards to their sexual partners).
Replies from: cousin_it, Vaniver↑ comment by Vaniver · 2011-04-07T22:35:22.044Z · LW(p) · GW(p)
Anyway, we're actually arguing for the same thing - the pickup community is not composed of altruists (with regards to their sexual partners).
While that may be the same conte*n*t, it seems to be missing valuable conte*x*t. The pickup community is not composed of altruists, but it seems likely to me that anyone who considers themselves an altruist when it comes to romance is self-deceiving.
I can't speak for the pickup community, but I'm only interested in win/win relationships, which seems to me to be your primary concern. Do either lies or manipulation preclude win-win relationships? No, of course not. Thus, any unqualified complaints about lies or manipulation do not interest me.
I share your low opinion of people who pursue win/lose relationships, and hope they change their ways. But I think that's where the real issue is.
Replies from: NancyLebovitz, CuSithBell↑ comment by NancyLebovitz · 2011-04-08T07:42:45.168Z · LW(p) · GW(p)
Do either lies or manipulation preclude win-win relationships? No, of course not.
I assume that lies and/or manipulation make win-win relationships less likely. Am I missing something?
Replies from: taryneast, Vaniver↑ comment by Vaniver · 2011-04-08T22:07:24.822Z · LW(p) · GW(p)
There are many kinds of lies, and many kinds of manipulation. Some are healthy, some are unhealthy, and it takes a fair measure of skill and knowledge of the other person to tell them apart. Honesty is the first order approximation to the best policy, but is not the best policy.
↑ comment by CuSithBell · 2011-04-08T00:29:06.722Z · LW(p) · GW(p)
Okay, I think that's simply a definitional disagreement - by altruism I meant "interested in win/win relationships", basically.
What I take issue with is the idea that
- PUA doesn't prominently involve techniques that preclude win/win or are unconcerned with the difference between win/lose and win/win (e.g. sabotaging existing relationships). That is, manipulation in the "on-net harmful" sense.
- therefore, people who have a problem with PUA are just not able to deal with science / analysis.
↑ comment by wedrifid · 2011-04-08T03:07:14.295Z · LW(p) · GW(p)
PUA doesn't prominently involve techniques that preclude win/win or are unconcerned with the difference between win/lose and win/win (e.g. sabotaging existing relationships). That is, manipulation in the "on-net harmful" sense.
This seems true as an independent premise. (I agree that it does not lead to the conclusion in the second bullet.)
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-08T03:31:04.714Z · LW(p) · GW(p)
One thing that came up quickly on a cursory search: http://www.pualingo.com/pua-definitions/boy-friend-destroyer-bfd/ I suppose I should correct myself though - I intended to refer to techniques and attitudes etc. (based on the descriptions of people familiar with the culture, I expect that misogyny is fairly common, even if not in the majority).
Replies from: wedrifid, Vaniver↑ comment by wedrifid · 2011-04-08T04:11:35.855Z · LW(p) · GW(p)
Pardon me Cu. It seems you caught the reply before I deleted it. I had reread the premise in question and noticed it said 'prominently' rather than 'predominantly'. Those two letters make a big difference!
While it is not unlikely that I still disagree on the degree to which that kind of behavior is popular within the relevant subculture, it certainly wouldn't be enough to quibble over whether it counts as 'prominent'. I can just agree that to the extent that such behaviors exist, they are undesirable.
A philosophy I hold dear is that it is important not to judge a whole subculture based on the worst traits of those within it. The pickup arts and feminism both have features (and acolytes) that we would do well to be wary of and reject. We don't want self-centered manipulative misogyny, and we don't want hypocritical sexist judgementalness either (which refers not to the behavior of anyone here, but to the extreme fringe in feminism analogous to the extreme fringe in PUA). Instead we want to take the lessons of practical rationality, personal development, overcoming of emotional biases, sexual liberation, social justice, equality and empowerment from both. Perhaps one of the most desirable features common to both of those subcultures is that they cut through bullshit cultural traditions that serve to hold people back from experiencing life to the fullest.
You mentioned before the necessity to abandon a 'soldier' - and that is an important point. There really are bad things related to the pickup arts - and one of those is certain behaviors that basically amount to being a bitchy asshole. If someone is so caught up with advocating PUA that they aren't even willing to admit the legitimate problems that are there, then the conversation is doomed and their own cause may be undermined. For this reason it disheartens me when discourse reverts to 'sides'. Nothing good is likely to come of it.
The above is why I feel no dissonance at all as I disapprove of and reject the use of bitchy relationship sabotaging tactics and the use of particularly powerful persuasion techniques on vulnerable women while at the same time appreciating and advocating the use of PUA training as a form of healthy personal development that is a net benefit to society in general.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-08T04:47:38.799Z · LW(p) · GW(p)
I think we're on the same page, then!
I agree quibbling about precise levels would be pointless, particularly because I couldn't give good estimates for those precise levels. I emphatically agree that we shouldn't judge groups by the worst traits they hold within their borders - and in fact, in my research job I am planning to look into some basic pickup literature to see if there's anything useful (regarding first impressions, specifically), as it is (or so I am told) one of the few places where social interactions are subjected to numerical analysis. (The sociological and psychological research I've read has been frustratingly qualitative! It's almost like it wasn't intended for use by robots.)
What I am resisting here is the notion, repeated several times in the LW PUA discussion, that the only reason people (or, alternately, women) are uncomfortable with PUA is discomfort with applying analysis to sex and romance.
It sounds like you agree that this isn't the case (and I imagine you'd agree that it's dismissive, simplistic, and possibly misogynistic), but it comes up disturbingly often (frequently accompanied by arguments like "manipulation isn't a precise or universally negative concept -> dismiss all claims that some form of manipulation is bad").
Cheers, in any case :)
Replies from: HughRistik↑ comment by HughRistik · 2011-04-08T07:36:04.554Z · LW(p) · GW(p)
What I am resisting here is the notion, repeated several times in the LW PUA discussion, that the only reason people (or, alternately, women) are uncomfortable with PUA is discomfort with applying analysis to sex and romance.
Just to clarify, who has said that this is the only reason that some people may be uncomfortable with pickup?
manipulation isn't a precise or universally negative concept -> dismiss all claims that some form of manipulation is bad
Many important concepts aren't precisely defined, yet they are still meaningful (e.g. status). We shouldn't throw out these concepts. Yet sometimes we should try to nail them down a bit more precisely and examine the intuitions behind them.
I've been trying to figure out what people actually mean by "manipulation" on LW, and the ethical theory behind it, but I haven't had much success. I don't want to make people abandon it, because I think that it is a meaningful concept. I've proposed my own definition: "unethical social influence." But I am a bit disappointed that people constantly fling it around without examining it.
My worry is that it is used overbroadly, constraining the personal development of people who need to intentionally learn social skills. Furthermore, I feel that some behaviors get tagged as "manipulation" when they are analogous to other behaviors that are considered ethical: it's just that people are accustomed to one, and not the other.
And I think people judge intentional social influence too harshly when calling it manipulation, and/or don't judge unintentional social influence harshly enough. (Didn't learn social skills by age 18? Too bad... if you try now, you'll be manipulating people, so stop trying to get above your station, and return to the back of the bus.)
Finally, the charge of "manipulation" often seems directed to social influence that is framed in a way that triggers a disgust heuristic. I'm not claiming that the disgust heuristic is the entire reason that people use the word manipulation, and disgust can be a pointer to a valid argument, but I do see people getting icked out by social influence around sex, intentional social influence, or social influence that they haven't seen before or don't understand very well.
Replies from: wedrifid, wedrifid, taryneast, CuSithBell↑ comment by wedrifid · 2011-04-08T15:23:43.192Z · LW(p) · GW(p)
Just to clarify, who has said that this is the only reason that some people may be uncomfortable with pickup?
Vaniver did, at least through negligence when making oversimplified replies. The rest of this group seems to be populated by straw men - conveniently demonstrated in a reply to you here by taryneast. That is one issue that is mentioned at times by yourself and others, but certainly never as 'the only' reason - which is what you would be being condemned for. Chances are I have mentioned the subject myself - and it is so in keeping with the entirety of OvercomingBias that I don't even recall whether Robin Hanson has said anything directly.
↑ comment by wedrifid · 2011-04-08T15:13:46.833Z · LW(p) · GW(p)
Didn't learn social skills by age 18? Too bad... if you try now, you'll be manipulating people, so stop trying to get above your station, and return to the back of the bus.
Of course, back when I was in school the back of the bus was where all the cool kids got to sit. In fact, when I managed to get myself to the back seat of the bus it was much easier to flirt with my female fellow passengers. I was the impressive senior back-seat-sitting cool guy after all!
↑ comment by taryneast · 2011-04-08T14:23:44.734Z · LW(p) · GW(p)
Just to clarify, who has said that this is the only reason that some people may be uncomfortable with pickup?
Um... you did. See the comment that I originally replied to. I quote:
Many of the common criticisms of pickup demonstrate an anger against the use of rationality and scientific thinking in the supposedly sacred and mystical area of sex and romance.
and also
I've been trying to figure out what people actually mean by "manipulation" on LW, and the ethical theory behind it, but I haven't had much success.
Well, in response to one of cousin_it's comments, I've given my own definition:
"deliberately doing something with the intent to hurt a person (without their consent) and thereby to gain advantage over them"
It's pretty clear cut what does and does not count as "unethical" here.
Furthermore, I feel that some behaviors get tagged as "manipulation" when they are analogous to other behaviors that are considered ethical: it's just that people are accustomed to one, and not the other.
Can you give me some examples of these behaviours?
Please note: I am quite interested in a lot of the analysis-side of PUA - I am totally unopposed to guys gaining more confidence, understanding and social skill - especially through analysis of what actually makes women happy and how guys can go about gaining it. I just don't like the Dark Arts parts of it. I think it can be performed with win-win in mind. No manipulation necessary.
I'd love to hear the opposite side too. Is there an equivalent PUA community for women? if not - why not?
Replies from: thomblake, wedrifid↑ comment by thomblake · 2011-04-08T14:40:34.423Z · LW(p) · GW(p)
Jumping in here, this is not correct:
What I am resisting here is the notion, repeated several times in the LW PUA discussion, that the only reason people (or, alternately, women) are uncomfortable with PUA is discomfort with applying analysis to sex and romance.
Just to clarify, who has said that this is the only reason that some people may be uncomfortable with pickup?
Um... you did. See the comment that I originally replied to. I quote:
Many of the common criticisms of pickup demonstrate an anger against the use of rationality and scientific thinking in the supposedly sacred and mystical area of sex and romance.
(emphasis added)
That comment does not state that it is the only reason some people are uncomfortable with pickup - rather, it says that it is demonstrated in many of the common criticisms, which is quite different.
ETA: BTW, that's an American 'quite' - I meant "which is very different".
Replies from: HughRistik, taryneast↑ comment by HughRistik · 2011-04-08T23:42:07.563Z · LW(p) · GW(p)
Thanks, thomblake, you got it.
↑ comment by taryneast · 2011-04-08T14:52:48.825Z · LW(p) · GW(p)
Ok I'll restate the actual point that I believe CuSithBell was trying to make:
What I am resisting here is the notion, repeated several times in the LW PUA discussion, that the main reason people (or, alternately, women) are uncomfortable with PUA is discomfort with applying analysis to sex and romance.
Emphasis added to make the point.
And I might point out that I feel it's a case of Logical Rudeness (on the part of HughRistik) to jump on the single word "only" and totally ignore the rest of the point being made here. Which is why I countered with a quote directly from his own previous comment.
Replies from: thomblake↑ comment by thomblake · 2011-04-08T15:03:50.140Z · LW(p) · GW(p)
When you use a word like 'only', you're inviting that sort of interpretation. I read your statement and it seemed like you meant it literally, that is, "I'm resisting the interpretation that there are not other reasons at all...", and I read the response as confused because nobody said anything about there being no other reasons.
Even if HughRistik was being in some way uncharitable, I fail to see how it's an instance of Logical Rudeness, as it was a matter of correctly parsing your statement, rather than changing his position in the middle of an argument.
Replies from: taryneast↑ comment by taryneast · 2011-04-08T15:45:19.573Z · LW(p) · GW(p)
Whether somebody says "only" or says "mainly" shouldn't matter too much, if the main point is actually something else entirely. In this case - the main point was about "what it is that upsets people about PUA", not whether it's the main point, or the only point that upsets people.
From my reading of CuSithBell's comment - I think she said "only" but probably meant "mainly" - and thus jumping on the word "only" makes HughRistik's comment seem like he was jumping on a side-point to avoid the main issue.
Yes, on this site, using "only" to mean "mainly" opens you up to that kind of jumping-on... but I believe Logical Rudeness includes the situation where you jump onto a side-point at the expense of the main point. That's why I mentioned it.
That said, I totally believe that we all should use the more correct word. If CuSithBell really meant "mainly" instead of "only" then that's what she should have said to be more precise.
I restated and re-worded what she said because what I am most interested in is exactly what she said... with only one word of difference that does not (in my view) change anything from the Main Point.
↑ comment by wedrifid · 2011-04-08T15:38:20.082Z · LW(p) · GW(p)
Is there an equivalent PUA community for women?
See HughRistik's comment regarding the Playettes.
if not - why not?
Because they are busy being feminists instead? (But more serious factors are the relative ease at finding a willing mate and qualitatively different consequences for being a poor player at the social game.)
↑ comment by CuSithBell · 2011-04-08T17:12:38.290Z · LW(p) · GW(p)
Just to clarify, who has said that this is the only reason that some people may be uncomfortable with pickup?
Well, let's see. This seems to be an argument against the notion that there are other considerations. This comment regards removing such a claim from the top-level post, and repeats the claim. Here is another one.
I know that earlier in this thread you pointed out this aspect of distaste with PUA, but acknowledged more legitimate criticisms as well.
I've been trying to figure out what people actually mean by "manipulation" on LW [...]
Suppose someone said that people are uncomfortable with discussions on how to rape people on lesswrong because of discomfort with science, I explained that that wasn't the part that bothered me, and they replied by saying that consent is sort of a thorny issue, one that's imprecisely defined and entangled with other complex concepts. Sure, fine, but that's missing the point.
In these contexts, I use 'manipulation' the same way you suggest, and often qualify it with additional terms - 'harmful', 'dark arts', etc. - to clarify.
The wider meaning of manipulation I take to mean a collection of behaviors of varying levels of sinister-ness which may or may not be deliberate. In this less serious sense, both learned and innate social skills involve some level of manipulation.
I still think, just as you do if I recall correctly, that some aspects of pickup practice and culture are extremely undesirable - my main point is that attributing people's discomfort with this to unrelated matters is disingenuous and unhelpful.
Does this sound fair and reasonable?
Edit: My choice of analogy was poor, and I withdraw it completely. In its place, consider "People ( / Women) don't become card counters because they don't like math."
Replies from: HughRistik↑ comment by HughRistik · 2011-04-09T00:28:24.512Z · LW(p) · GW(p)
Suppose someone said that people are uncomfortable with discussions on how to rape people on lesswrong because of discomfort with science, I explained that that wasn't the part that bothered me, and they replied by saying that consent is sort of a thorny issue, one that's imprecisely defined and entangled with other complex concepts. Sure, fine, but that's missing the point.
I don't accept this analogy, because it places pickup techniques as analogous to rape. Your analogy shows more about the potential ugh fields that people may have around pickup.
What actually occurs is that pickup is mentioned, and someone says that pickup (or some pickup techniques) are "manipulative." In that case, it is perfectly reasonable to attempt to approach an agreed upon conceptualization of "manipulation."
In these contexts, I use 'manipulation' the same way you suggest, and often qualify it with additional terms - 'harmful', 'dark arts', etc. - to clarify.
"Dark arts" doesn't really help, because that term has problems of its own.
I still think, just as you do if I recall correctly, that some aspects of pickup practice and culture are extremely undesirable
Yes.
my main point is that attributing people's discomfort with this to unrelated matters is disingenuous and unhelpful.
I'm not sure that some critics of pickup are only uncomfortable with the parts of pickup that I would stipulate as undesirable; their views seem to be broader and more sweeping.
I would simply maintain that some people's discomfort with a scientific/rational approach to dating underlies some criticisms of pickup. Does that sound fair?
For instance, I've seen many criticisms that are uncomfortable with analysis used as the foundation for an intentional approach (though I'm not sure if I've seen that particular one on LW). Edit: example:
By moving from incidental to intentional you’re changing the dynamic. You’re no longer pursuing the relationship between two people but a specific agenda designed around realizing the needs and desires of one.
That person believes that as soon as you start being intentional, you are suddenly being selfish... which makes absolutely no sense.
As another example, I think that some women here are uncomfortable that certain default pickup behaviors are counter to their own preferences... while not recognizing that the priors of PUAs (acting on limited information) are highly influenced by the preferences of other women with dramatically different phenotypes.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-09T02:10:14.396Z · LW(p) · GW(p)
I don't accept this analogy, because it places pickup techniques as analogous to rape. Your analogy shows more about the potential ugh fields that people may have around pickup.
The analogy is accurate, you're just being irrational as an emotional reaction to its content.
[Only, that sort of response is condescending and insulting.]
What actually occurs is...
I'm sure that does happen. It's not the topic under discussion. Yes, there are nuances and shades of gray and people with incorrect opinions and people uncomfortable with explicit analysis of social phenomena.
There are also people here on lesswrong who say that the reason people in general (or women in general) are uncomfortable with pickup is because of such discomfort with analysis. That is also "what actually happens", and it is explicitly what I have been talking about this whole time.
(I guess I shouldn't have used rape in the analogy. The point of it was to illustrate the content of the discourse, not to compare the topics. It would work equally well if it were, say... "People who don't move to Vegas and become card counters avoid it because they dislike math." Or something.)
Edit: I withdraw the analogy as noted above, and apologize.
Replies from: HughRistik↑ comment by HughRistik · 2011-04-09T07:00:20.448Z · LW(p) · GW(p)
The analogy is accurate, you're just being irrational as an emotional reaction to its content.
You said:
Suppose someone said that people are uncomfortable with discussions on how to rape people on lesswrong because of discomfort with science, I explained that that wasn't the part that bothered me, and they replied by saying that consent is sort of a thorny issue, one that's imprecisely defined and entangled with other complex concepts. Sure, fine, but that's missing the point.
In a discussion on LW about how to rape people, the nuances of consent would indeed be a distraction, but only if there was a consensus that the behavior is rape. So I thought that by making pickup analogous to rape, you were presenting it as something that everyone ought to recognize as wrong, such that debating the concept of "manipulation" would be missing the point. That's what I objected to in your analogy.
If there wasn't a consensus about whether the behavior was rape, then discussing the concept of consent actually would be a great way to approach the disagreement, and it would not be missing the point. But if that's what you meant, then I don't know why you made the analogy, because it proves my point, not yours.
(As an example: perhaps 24/7 BDSM relationships were under discussion, where the submissive partner gives consent at the beginning of the relationship. Someone might say that the submissive partner is being raped. It would then be perfectly appropriate to discuss the view of consent behind that criticism, and whether someone can consensually give away power at the beginning of a relationship.)
There are also people here on lesswrong who say that the reason people in general (or women in general) are uncomfortable with pickup is because of such discomfort with analysis. That is also "what actually happens", and it is explicitly what I have been talking about this whole time.
To the extent that people hold this view, I disagree with them. After looking at your three links, this interpretation is only plausible for the first link, and even then I would want that poster to clarify before starting a hype train.
Replies from: CuSithBell, wedrifid↑ comment by CuSithBell · 2011-04-09T22:51:52.937Z · LW(p) · GW(p)
I apologize for my metaphor. It was poorly chosen. I let my desire to make a point forcefully overcome my sense of decency. It is retracted. Perhaps you could consider the card-counting metaphor in its place.
If there wasn't a consensus about whether the behavior was rape, then discussing the concept of consent actually would be a great way to approach the disagreement, and it would not be missing the point.
The point is that there was a mis-attribution regarding the reasons to object. There is even what seems to be a general consensus that these reasons are legitimate (see cousin_it's posts, or your own criticisms of PUA).
After looking at your three links, this interpretation is only plausible for the first link, and even then I would want that poster to clarify before starting a hype train.
The posts in the second and third links are part of a larger discussion. In context, the discussion goes something like - "It's not that women don't like analysis, it's that they don't like PUA" is followed by "Of course they don't, people don't like analysis", then "I don't dislike analysis" is followed by "no one dislikes analysis, they just become angry when observing it." I made the above claim then, and no one denied it.
If you are skeptical of my point, I would like to request a summary of said point adjoining a response, if possible.
Replies from: HughRistik↑ comment by HughRistik · 2011-04-10T00:02:53.627Z · LW(p) · GW(p)
I think we are just agreeing violently, at this point.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-10T23:06:10.687Z · LW(p) · GW(p)
I suppose that's a good call. See you in another thread, then.
↑ comment by wedrifid · 2011-04-09T07:24:33.041Z · LW(p) · GW(p)
As an example: perhaps 24/7 BDSM relationships were under discussion, where the submissive partner gives consent at the beginning of the relationship. Someone might say that the submissive partner is being raped. It would then be perfectly appropriate to discuss the view of consent behind that criticism, and whether someone can consensually give away power at the beginning of a relationship.
Would such an arrangement typically involve safe words, or would the knowledge of that power of injunction destroy the thrill of the experience for the subordinate partner?
↑ comment by Vaniver · 2011-04-08T04:10:01.764Z · LW(p) · GW(p)
First, the technique: I don't see a problem with the BFD. One who is satisfied cannot be seduced. The other man loses, but any success in romance is necessarily a loss for one's competitors. (There's even a reminder that cheating on her while she's still dating the guy could hurt him deeply.)
Which makes me somewhat skeptical about the attitudes: I expect the prevalence of misogyny in the PUA community is far above what I'd like it to be. But from everything I've seen, most of their rancor is pointed at the guys they feel superior to, not their targets. That they've attempted to put women under a microscope and figure out what they respond best to seems like it will make them interact with women better. A general improvement in the game of men should also correspond to a general improvement in the lives of women, as relationship satisfaction will increase.
That is, could this be base rate neglect? It's unfortunate, but a lot of men are misogynists.
Replies from: Eliezer_Yudkowsky, wedrifid↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-04-08T06:34:40.029Z · LW(p) · GW(p)
One who is satisfied cannot be seduced.
Erm, this statement is clearly false as soon as you reflect on it?
Personally, if I was going to come up with a clever rationalization for BFDs, it would be something like, "Any boyfriend who keeps her locked up in a closed relationship must clearly be a patriarchal bastard."
Replies from: ata, Vaniver↑ comment by ata · 2011-04-08T22:23:06.027Z · LW(p) · GW(p)
Personally, if I was going to come up with a clever rationalization for BFDs, it would be something like, "Any boyfriend who keeps her locked up in a closed relationship must clearly be a patriarchal bastard."
Only allowed if the BFD actually personally uses that justification in the course of persuading a woman to leave her boyfriend for him.
('cuz I like the mental image, that's all.)
↑ comment by Vaniver · 2011-04-08T22:19:04.509Z · LW(p) · GW(p)
Erm, this statement is clearly false as soon as you reflect on it?
Are you skilled at either seduction or being satisfied?
I have reflected upon it, and it still seems to me to be true. Perhaps rewording it will reveal our disagreement; what do you think of "Satisfaction is the best defense against seduction"?
↑ comment by wedrifid · 2011-04-08T04:28:13.355Z · LW(p) · GW(p)
First, the technique: I don't see a problem with the BFD. One who is satisfied cannot be seduced. The other man loses, but any success in romance is necessarily a loss for one's competitors. (There's even a reminder that cheating on her while she's still dating the guy could hurt him deeply.)
As they say, 'all is fair in love and war'. There is a lot to that sentiment, and there is only so much use in judging people for acting in self-interest in an inherently self-interested game. But do you know another thing that has traditionally been fair in love and war? Killing anyone who is a clear threat to your territory. Challenging these guys to duels to the death isn't legal any more, but this is certainly a behavior that I would want to see prevented by cooperative collective punishment, if that is possible. Because I don't want that crap anywhere near me.
(Note: I make no distinction as to whether the perpetrators learned BFDs explicitly, whether they are naturally inclined that way, or whether they learned it from 'Desperate Housewives'. Or, for that matter, whether it is a male or a female doing the aggressive seduction of the non-single target. Although I probably would be squeamish about challenging the girl to a duel to the death.)
One who is satisfied cannot be seduced.
Pffft. Nonsense. They can so. Maybe you just need to spend some more manipulative effort making them feel like they are unsatisfied. Or sufficiently distracting them from that which was satisfying. If you are going to go around seducing women who have boyfriends, don't try to sugar-coat it by pretending it always means that the relationship was unsatisfying.
Replies from: Vaniver↑ comment by Vaniver · 2011-04-08T04:41:03.685Z · LW(p) · GW(p)
Killing anyone who is a clear threat to your territory.
Hence why one should attempt to induce the girl to break up with her boyfriend, rather than attempt to induce her to cheat on him. As you point out, that has a distressingly high chance of ending in murder.
If you are going to go around seducing women who have boyfriends, don't try to sugar-coat it by pretending it always means that the relationship was unsatisfying.
I don't swing that way. Regardless, I suspect our disagreements about the axiom are definitional. The terrible thing about satisfaction is that it is relative; it seems fair to say that one who willingly ends a relationship does it because it is unsatisfying. If it was made unsatisfying because one put forward a better offer, I have a hard time seeing that as villainous. (If one is fraudulent about the quality of the offer, that fraud is villainous, but that's a separate issue from the BFD.)
↑ comment by taryneast · 2011-04-08T14:13:31.363Z · LW(p) · GW(p)
"Hurting people non-consensually" is an awfully low bar. For example, if you dump someone, you're hurting them non-consensually.
Sure thing - that can probably easily be rephrased as "deliberately doing something with the intent to hurt a person (without their consent) and thereby to gain advantage over them".
Breakups do not fit the above, as you are not generally breaking up with a person for the express purpose of hurting them - it's kind of collateral damage, and leads to a better situation for both in the long run.
Replies from: cousin_it↑ comment by cousin_it · 2011-04-08T14:19:41.455Z · LW(p) · GW(p)
and leads to a better situation for both in the long run
This is often not true. Look at all the people who have killed themselves over a breakup.
it's kind of collateral damage
So is hurting a woman in order to have sex with her. Hurting people is rarely a terminal goal.
In general, as a consequentialist I find it hard to care about intent. It seems you're trying to invent a new deontological rule, but I don't understand why it should be adopted.
Replies from: taryneast, Marius, taryneast↑ comment by taryneast · 2011-04-08T16:45:35.972Z · LW(p) · GW(p)
it's kind of collateral damage
So is hurting a woman in order to have sex with her. Hurting people is rarely a terminal goal.
Nope - collateral damage is damage done unintentionally. "hurting a woman in order to have sex with her" is a pretty good example of intentional damage.
My definition is pretty clear about which is the unethical of these two.
Replies from: wedrifid, TheOtherDave↑ comment by wedrifid · 2011-04-08T19:48:36.124Z · LW(p) · GW(p)
Nope - collateral damage is damage done unintentionally. "hurting a woman in order to have sex with her" is a pretty good example of intentional damage.
You are using the word incorrectly. This is independent of what behavior is ethically acceptable.
All damage that is incidental to the primary purpose of an action is collateral damage.
Additional note: Calling Bob collateral damage when you run him over so that you don't kill lots of children is a correct usage.
Replies from: taryneast↑ comment by taryneast · 2011-04-08T21:12:26.669Z · LW(p) · GW(p)
You are using the word incorrectly.
You and I disagree about whether this is collateral damage, not because we have a different definition of collateral damage, but because we disagree about whether there is intent in this situation.
If the end-goal is to have sex with a woman, and you choose to hurt this woman to gain it, then her being hurt is part of the plan - and is thus intentional. It is an important sub-goal of the main plan, which is what makes it intentional.
You could have instead chosen to buy her flowers, flatter her, or to choose a different woman (one that does not need hurting for you to gain the end-goal of sex). The presence of acceptable alternatives is one reason why I consider this situation to not be a case of mere collateral damage, but of intent.
↑ comment by TheOtherDave · 2011-04-08T18:07:38.147Z · LW(p) · GW(p)
So, I realize this is completely tangential to your main point, but: if the army launches an attack against a military target that happens to be located in a civilian neighborhood, knowing perfectly well as they do so that civilians are going to be killed in the process, I'd consider that a pretty good example of both collateral and intentional damage.
Replies from: taryneast↑ comment by taryneast · 2011-04-08T18:10:20.372Z · LW(p) · GW(p)
Yep - I agree. It's a classic case that covers both ends of the spectrum.
It also only tends to trip up people that fall for the fallacy of the excluded middle ;)
In this case, it matches my pattern of "intentional damage" and is therefore ethically questionable, in my opinion.
That's not to say it couldn't still be the preferable choice if more evidence came up - eg information that it's the only alternative, or that the "greater good" outweighs the downsides... but in any case, I'd take a strong interest in the ethics involved before making the decision if I were put in that position.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-08T18:15:17.844Z · LW(p) · GW(p)
Huh.
I agree with you here, but I now have no idea what you meant by "collateral damage is damage done unintentionally."
Replies from: Sniffnoy, taryneast↑ comment by taryneast · 2011-04-08T18:37:08.378Z · LW(p) · GW(p)
"unintentionally" in my head means literally "done with intent".
Ie, if I decide "I hate Joe Bloggs" and then I get in my car, drive until I see him walking alongside the road and intentionally choose to jump the curb and run him over - then I would say that I intentionally killed Joe Bloggs because that is the outcome that I intended to happen.
however - if instead, I get in my car, and am driving down the road, my brakes fail and I see a whole classful of schoolchildren crossing the road... and my only option to not kill them is to jump the curb, which I do... but Joe Bloggs happens to be there and I see that he's there and choose one death over many...
Well - I would consider that him being killed was unintentional. The main intent of my action was not "I want Joe Bloggs dead" but "I want not to kill the schoolchildren". It was unintentional to my main aim.
Does that make sense?
Replies from: Sniffnoy, TheOtherDave↑ comment by Sniffnoy · 2011-04-08T18:42:37.533Z · LW(p) · GW(p)
It makes sense on its own but it contradicts what you said earlier about the cases cousin_it suggested.
Replies from: taryneast↑ comment by taryneast · 2011-04-08T18:52:03.772Z · LW(p) · GW(p)
It does seem to - which made me think about exactly which cases I'd consider one or the other. So here goes again... :)
In the example above - I am trying to avoid causing hurt to the children - therefore if I hurt one person because it's the only way to avoid hurting multiple people, it is ethically difficult... but, in my head, ok in the end because there is no other option available. If you had the opportunity to choose an option with even less collateral damage (eg slamming on the brakes) you would do so.
In the case of intentionally hurting one woman in order to gain advantage for oneself - this does not apply. Especially because you are intentionally hurting another person to help yourself - the sex is the eventual goal... but the hurt is chosen as a necessary step towards that goal - there are no other means being considered.
In the case of breaking up with a person - you are intending that you and they not be with one another anymore - you are not hurting them with the intent to hurt them - therefore the "collateral damage" is unintentional. Also, the expectation is that you will both be better off apart (on average). Yes, there are rare cases where an unstable person will not recover... but on average I'd say that if you were trying to have a relationship with the kind of person that was suicidal - you might be better off not being with them... that is obviously an ethical dilemma that will never be covered by a cut-and-dried rule... but I can safely say that, in my head, if I were to leave somebody whom I suspected to be suicidal - I'd be leaving them not with the intent that they choose to commit suicide - therefore the harm would be unintentional (also, I'd make sure to call somebody who could help them with their suicidal tendencies... but that's by-the-by).
As to the case where we're deliberately choosing to kill people that are located in a civilian location... I'd consider it ethically questionable, because you are deliberately choosing to kill people... not just to avoid killing other people (as in the schoolchildren case).
There is intent to kill - even if these particular people are not part of the main intent. I'd consider it less ethically questionable if they found a way to try to kill these targets without damage to the surrounding areas.
... in fact, in thinking more, I think a big difference is the actual intent itself. Are you trying to Gain by the hurt, or to Reduce a Bad Thing?
I think it's more ok to hurt to reduce a worse Bad Thing, than it is to simply Gain something that you'd otherwise not have.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-08T20:35:58.310Z · LW(p) · GW(p)
This really does seem unnecessarily complicated.
Let's assume for the sake of discussion that more pain is bad, and less pain is good. (You already seem to be assuming this, which is fine, I just want to make it explicit.)
Most of these examples are cases of evaluating which available option results in less pain, and endorsing that option. This seems straightforward enough given that assumption.
The example of breaking up with someone is not clearly a case of that, which sounds like the reason you tie yourself in knots trying to account for it.
So, OK... let me approach that example from another direction. If I suffer mildly by staying with my partner, and my partner suffers massively by my leaving him, and the only rule we have is "more suffering is worse than less suffering," then it follows that I should stay with my partner.
Would you endorse that conclusion?
If not, would you therefore agree that we need more than just that one rule?
Replies from: taryneast↑ comment by taryneast · 2011-04-08T21:40:57.362Z · LW(p) · GW(p)
We certainly need more than just one rule. :)
Less pain frequently seems better than more pain.
Suffering mildly by staying with a person is one thing - but what I had in mind while thinking of that example is that breaking up is painful but over quickly - whereas staying on in a relationship that isn't working is bad for both partners, not just the one leaving - and lasts for a long time (probably decades if you keep at it).
The one who would be "happy if you just suffered quietly a little" can also be not as happy as they would be if the relationship ended and they found somebody who really wanted to be with them. Oh, and it isn't always just a little suffering involved for the wanting-to-leave partner. Plus there's the factor that perhaps the one wanting to leave may be wanting to make a third person happy...
Of course, if it's just a small inconvenience to one person - then I wouldn't advocate breaking up at all. Relationships are "about" compromise on the small things to gain over the long term.
Obviously it all depends on circumstance and we cannot make a single rule to fit all possible permutations...
↑ comment by TheOtherDave · 2011-04-08T18:45:27.832Z · LW(p) · GW(p)
"unintentionally" in my head means literally "done with intent".
I'm going to assume you mean "without," here.
It's not how I use the word, but yes, it makes sense. Thanks for clarifying.
That said, to go back to the original example... if you consider "hurting a woman in order to have sex with her" a pretty good example of intentional damage.... it follows that the main aim in that example is not to have sex, but to cause pain?
Replies from: taryneast, taryneast↑ comment by taryneast · 2011-04-08T19:23:36.724Z · LW(p) · GW(p)
Oh - and in any case - thanks for asking these questions. It's helping me clear up what's in my head at least a little. I appreciate not only that you are asking - but also that you're asking in a way that is quite... erm approachable? not-off-putting perhaps? :)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-08T20:22:49.397Z · LW(p) · GW(p)
You're welcome.
↑ comment by taryneast · 2011-04-08T19:06:39.721Z · LW(p) · GW(p)
I'm going to assume you mean "without," here.
Yep, good catch. Bad (ok, non-existent) proof-reading on my part. :)
And you're right - nutting it out a bit more has made me think more about what I consider intentional or not - and also what the main intent is or not.
In the case of "hurting a woman to have sex" - you are deliberately choosing to hurt her in order to gain. I think the difference is that intentionally "hurting a woman to have sex" is more premeditated than having no choice but to jump the curb and kill one person instead of many.
Goals build on other goals. Your end-goal is to have sex... but if you make it your temporary goal to reduce her self-esteem to make the main goal more likely, then you are intending her to be hurt, in order to further your goal.
In the case of, say, jumping the curb to avoid children, your main goal is avoiding the children... jumping the curb is not something you choose as a sub-goal... if there were any other way, you'd choose that instead. It's not a goal in and of itself, it's your last possible resort - not your best possible choice.
Anyway - not sure I'm being very clear here - either with you, or in my head. This is the kind of thing that is difficult to extract from one's emotions. I know there's been some psychological research on this kind of thing - and as far as my fuzzy memory serves, it's fairly common to see a moral difference between "intending to hurt somebody to further a goal" and "unintentionally having to choose to hurt somebody to stop something worse from happening".
Edit: Looks like PhilGoetz mentions it in his comment about trolley problems
I don't think there is a clear dividing line here, because there is a confounding of what's "moral/ethical" with what's "intentional"... I think there are two things tangled together that are difficult to separate. I get the feeling that I'm trying to define both at once.
What's in my head now is that "intentional is generally unethical" and "unintentional is generally less unethical... but it depends on the main goal and whether or not you are trying to gain, or to reduce Bad Things..." :)
Replies from: TheOtherDave, Marius↑ comment by TheOtherDave · 2011-04-08T20:21:51.329Z · LW(p) · GW(p)
OK, I think I'm kinda following this.
I agree with you that this discussion is unhelpfully confounding discussions of intention with discussions of right action, and it also seems to be mingling both with a deontological/consequentialist question.
For my own part, I would say that if I have the intention to perform an act and subsequently perform that act, the act was intentional. If I perform the act knowing that certain consequences are likely, and those consequences occur, then the consequences were intentional.
If the consequences are good ones and I believed at the time that I performed the act that those consequences were good ones, then the resulting good was also intentional.
All of this is completely separate from the question of what acts are good and what acts aren't and how we tell the difference.
Replies from: taryneast, taryneast↑ comment by taryneast · 2011-04-09T08:44:56.451Z · LW(p) · GW(p)
Looks like Shokwave has formulated the distinction a bit better with his comment here
The distinction to me looks something like the difference between
"Take action -> one dies, five live" and "Kill one -> five live"
To reinterpret based on cousin_it's example, the difference is:
"Say something -> girl is hurt, get sex" vs "hurt girl -> get sex"
I take issue with the latter, as I consider it intentional (my definition) damage. The former is unintentional (my definition).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-09T12:40:47.403Z · LW(p) · GW(p)
OK.
And just to be clear: you say that if Sam does the latter and Pat does the former, Sam has done something worse (or less permissible, or more culpable, or something like that) than Pat, even though the girl is just as hurt in both cases.
Yes?
Replies from: taryneast↑ comment by taryneast · 2011-04-10T15:20:03.719Z · LW(p) · GW(p)
Yep.
Mainly because in most RL situations, it's a choice between: "girl might get hurt in the backlash" and "girl definitely will be hurt".
In my head it feels like the difference between manslaughter and murder. Collateral damage versus target.
Even though the person is still dead - a person that negligently kills somebody is an idiot that really needs to clean up their act, but still might be an ok person (as long as you don't trust them with anything important). Whereas a murderer is somebody you wouldn't want to be alone with... ever.
Back to PU - the guy who accidentally hurts a girl while earnestly trying to have sex with her is a bit of a social klutz... but the guy who actively hurts a girl to have sex with her is somebody I would not trust and would actively un-invite from all my social dealings.
Replies from: shokwave, TheOtherDave↑ comment by shokwave · 2011-04-10T15:42:34.172Z · LW(p) · GW(p)
I actually think the critical issue with pick-up is where the benefits go. In the trolley problem five lives is almost tautologically worth more than one life - in pick-up, though, it's pushing the fat man in front of the trolley... to get laid. Okay, it's not a fat man's death in pick-up, but who can say that the girl's suffering is worth the sexual pleasure? Well, it might be possible to determine, but all of us are culturally programmed to definitely not trust the one guy that benefits to evaluate the situation fairly.
Pick-up artists look sleazy because we don't trust them to make the right decision with such incentives looming. I don't think it has a whole lot to do with whether the act is wrong or whether it just causes some wrong consequences. Well, to the extent that it does, I think that this issue muddies the water. From your own post, even, actions of the permissible type in trolley problems still count as manslaughter.
Replies from: taryneast↑ comment by TheOtherDave · 2011-04-10T15:39:29.834Z · LW(p) · GW(p)
I certainly agree that the lower chance of causing harm is preferable to the higher chance of causing harm.
That said, I prefer to simply say that, and I mostly consider all this talk of intentionality and collateral damage to muddy the waters unnecessarily in this case.
Anyway. My social/moral intuitions are similar to yours, for what that's worth.
That said, I also know someone who has accidentally killed (been driving a car in front of which someone walked), and I know someone who has deliberately killed (fired a bullet from a gun into another person), and when I think about those actual people I find I trust the latter person more than the former (and I trust both of them more-than-baseline).
So I don't put a lot of confidence in my social/moral intuitions in this area.
Replies from: taryneast↑ comment by taryneast · 2011-04-10T18:11:30.068Z · LW(p) · GW(p)
Hmmm - interesting point with the guy who killed. As I mentioned before, I don't think there's one "rule to rule them all".
In my mind, "targeted" is worse than "non-targeted", but there are mitigating circumstances due to other social rules (eg "person B is a policeman killing a dangerous guy waving a loaded gun at a crowd" is less bad than "drunk driver that killed a guy on the road").
Was your person B killing for the purposes of personal gain? I think that may tip the balance for me. In a pure application of just the rule we've been discussing - if the "action" performed was purely for personal gain, then I'd hold person B more culpable than person A (though A's not off the hook entirely).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-10T18:37:36.870Z · LW(p) · GW(p)
Well, so, first off, I do not have privileged access to person B's purposes. So the most honest answer to your question is "How would I know?"
But, leaving that aside and going with inferences... well, my instinct is to assume that he wasn't. But, thinking about that, it's pretty clear to me that what underlies that instinct is something like "I like person B and consider him a decent chap. A priori, a decent chap would not kill someone else for the purposes of personal gain. Therefore, person B did not kill for the purposes of personal gain."
Which makes me inclined to discard that instinct for purposes of analysis like this.
Person B was a soldier at the time, and the act was a predictable consequence of becoming a soldier. And at least one of the reasons he became a soldier in the first place was because being a soldier provided him with certain benefits he valued. But there were no particular gains that derived from the specific act of killing.
So... I dunno... you tell me? He received personal gains from being in the environment that led him to pull the trigger, and was aware that that was a plausible consequence of being in that environment, so I guess I'd say that yes, he killed for purposes of personal gain, albeit indirectly.
But I suspect you're now going to tell me that, well, if he was a soldier in a military action, that's different. Which it may well be.
Mostly, I think the "for purposes of personal gain" test just isn't very useful, in that even my own purposes are cognitively impenetrable most of the time, and other people's purposes are utterly opaque to me.
Replies from: taryneast↑ comment by taryneast · 2011-04-11T08:10:38.839Z · LW(p) · GW(p)
Yes, that one definitely sounds like it falls into a grey area. I think the "gain" one only works well if the personal gain is clear-cut. It's a heuristic, not a hard-edged rule. Eg killing somebody to gain their money (to spend on beer and hookers) is generally considered wrong. Killing somebody to stop them performing other bad acts is generally less so. Killing somebody to gain their money to buy medicine for the sick... who knows?
I get the feeling that the rules have a fuzzy edge so that we can deal with the human error margin involved. As you say, often we can only guess at the real motives behind other people's actions - and our guesses may well be wrong. It means we're open to interpreting things because we want them a certain way, rather than because they definitely are that way, but it's possibly the best we can do with the information we have.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-11T12:55:24.162Z · LW(p) · GW(p)
Sure, if we try to come up with hard-edged rules at the level of the superficial form of the act (e.g., "don't kill people"), we wind up in a huge tangle because of the "human error margin" -- that is, the complexity of behavior.
For legal purposes we have to do it that way, and law is consequently a huge tangle.
For purposes of figuring out what the right thing to do next is, I mostly think it's the wrong way to go about it altogether.
↑ comment by taryneast · 2011-04-08T21:28:12.323Z · LW(p) · GW(p)
I agree with what you say here, but think that in my mind there is one more dividing line - one that I think Marius' comment has made clearer to me: a narrowing-down of your own definition of "intentional", to include only those acts where you are particularly seeking to enact an action that causes harm.
Perhaps there are two uses of "intentional" - the definition you've given, and the extra definition that I also use (I kinda use both depending on context).
Intentional-1: In the process of my actions, I choose an act whose consequences (in this case harmful) I'm pretty sure I know.
Intentional-2: I "with intent" seek out a particular act whose consequences I know are harmful - I intend to cause those consequences.
Intentional-2 is a sub-set of Intentional-1
↑ comment by Marius · 2011-04-08T19:53:05.713Z · LW(p) · GW(p)
One question that might help you clarify: Fundamentally, is the divide in your head "more interested in taking steps to promote the side effect or in taking steps to avoid it" or "seems to consider the side effect acceptable"?
I think the example of a drunk driver might be an accessible one. Your goal is to get yourself and your car home; your intention is not to hit anyone. In fact, you'd be extremely sad if you hit someone, and would be willing to take some steps to avoid doing so. You drive anyway.
Do you put the risky driving in your intentional category? If you think intentionality means "treats it as a thing to seek rather than to avoid where convenient", the risk is unintentional. If you think intentionality means "seems to consider the side effect acceptable", then the risk is intentional because you weren't willing to sober up, skip drinking, or take a cab.
Replies from: taryneast↑ comment by taryneast · 2011-04-08T21:15:50.492Z · LW(p) · GW(p)
That's a very good question. I think your "treats it as a thing to seek" is a good dividing line.
I will admit that I have not fully explored all the grey-areas here before, and was even unsure if there was a strict dividing line - but what you have said here rings true in the cases we've considered here.
In the case of hurting a woman to have sex - you are intentionally seeking to hurt her as the path to your objective. In the case of the car-jumping, you are not seeking to have the guy killed. On drunk-driving, I think you are being negligent, but not intentionally seeking to hurt somebody.
yep - sounds about right to me.
↑ comment by Marius · 2011-04-08T15:10:37.346Z · LW(p) · GW(p)
There is already a common deontological rule that one should adopt appropriate caution. "Appropriate caution" allows one to break up with people under most circumstances, but my limited understanding of pickup sounds closer to "recklessness".
↑ comment by taryneast · 2011-04-08T14:50:21.210Z · LW(p) · GW(p)
I wasn't actually telling other people to adopt my rule; in fact it isn't even a rule. Other people might call it a part of the social contract. I'd consider it to be an overwhelmingly useful heuristic for getting along in society. One example of a pathological case does not overbalance the majority of cases where it holds true.
If you are only interested in the consequences for you and haven't figured out why it's sometimes good for you to behave according to the social contract, then that's your choice. But my choice is not to trust you or anybody like you... which is kind of the whole point of this argument.
The consequence for you is that other people watch your actions (or even just your words in this case) and no longer trust or respect you.
That has its own further consequences down the line. If those further consequences later impinge upon your utility (eg help that you need but is not extended to you), then it would be worth considering adopting "my" rule.
Replies from: cousin_it↑ comment by cousin_it · 2011-04-08T15:54:50.746Z · LW(p) · GW(p)
That doesn't seem to be true either. To take an extreme example, being a compulsive liar who enjoys telling falsehoods for fun is one of the "dark triad" of male qualities that are exceptionally attractive for females. The reputational knock-on effects aren't enough to balance things out.
Replies from: NancyLebovitz, Marius, taryneast↑ comment by NancyLebovitz · 2011-04-08T16:11:59.866Z · LW(p) · GW(p)
The reputational knock-on effects aren't enough to balance things out.
How can you be sure? You may well be noticing the successful compulsive liars (at least, the less subtle ones) and not noticing the guys who told one lie too many to the wrong people in their lives.
Replies from: Marius↑ comment by Marius · 2011-04-08T16:13:47.918Z · LW(p) · GW(p)
So this brings up one of the questions I wonder if PUAs can answer: do you have any kind of metric capable of telling whether something is attractive for "females" or for "the kind of females PUAs find easy targets"? It was obvious when I did my psychiatry rotation that the Cluster B personality disorder patients found themselves drawn to one another. Is the Dark Triad good for picking up women who aren't Cluster B, or primarily for picking up women who are? Or at least, are there tools available to actually answer that question?
↑ comment by taryneast · 2011-04-08T16:03:33.481Z · LW(p) · GW(p)
Edit: this comment is in reply to wording that cousin_it has now removed from his comment. You can get the gist from the brief excerpt that I have quoted.
You are assuming that I'm trying to give you advice that benefits you?
That's interesting - especially when you've made it clear that you don't bother to consider whether your actions benefit other people (in fact you choose to actively work against the benefit of other people). I think I made it pretty clear that, due to this behaviour, I neither respect you nor have any particular reason to help you. I was replying to your stated lack of understanding - "I don't understand why it should be adopted" - in case you happened to have anything interesting to say about it.
I never thought that on balance you would choose to adopt the rule. :) I gave you the reasons why people do adopt this rule and let you choose for yourself if that applies to your circumstances.
If you want to give advice that benefits me, try to give advice that's actually been proven to benefit me, not advice that oh-so-conveniently happens to benefit you.
I'm sure we can each find oh-so-convenient examples that match what we choose to believe. I also expect that one person being a defector will probably clean up pretty nicely in a society full of people that choose not to defect (on average).
On average, it looks like it works for people that I've met (male and female)... that's about all I can say. Why do you expect that for my advice to be good advice that it has to be 100% true for you as well as me?
Replies from: wedrifid↑ comment by wedrifid · 2011-04-08T16:29:10.862Z · LW(p) · GW(p)
You are assuming that I'm trying to give you advice that benefits you?
Please reread the following paragraph from your own comment:
That has its own further consequences down the line. If those further consequences later impinge upon your utility (eg help that you need but is not extended to you), then it would be worth considering adopting "my" rule.
Cousin_it's reply made sense. Yours does not. The context makes it incoherent.
Replies from: taryneast↑ comment by taryneast · 2011-04-08T16:39:42.247Z · LW(p) · GW(p)
Weird... my comment above was a reply to a completely different comment of cousin_it's.
I can easily see why it looks incoherent.
I'm now going to go looking for the comment that I was actually replying to.
Edit: nope, looks like cousin_it has re-edited his original comment and removed that which I was actually replying to, so that my reply appears to be no longer relevant.
I am leaving my comment in place anyhow.
Oh - and I was giving advice that may or may not benefit people in general - not specifically for cousin_it's benefit. Which is why my reply is not entirely incoherent. :)
I was mainly responding to his statement that "I don't understand why it should be adopted." by explaining why it's worth considering as an option - ie why other people adopt it.
↑ comment by NancyLebovitz · 2011-04-07T16:16:15.768Z · LW(p) · GW(p)
I used to feel as though it was low status to believe in a war between the sexes.
Less Wrong convinced me that not only is there a war, I have a side.
Replies from: Vaniver, cousin_it↑ comment by Vaniver · 2011-04-07T22:23:44.074Z · LW(p) · GW(p)
I think a war between the sexes is a misleading perspective. Social life is a war between you and everyone else. Thankfully, the war is very developed (and thus polite) and is predominately over positive-sum opportunities (like, say, mates). But it's still a war.
(For example, many of the things that seem to be men vs. women turn out to primarily shift resources from one kind of men to another, or one kind of women to another. Monogamy vs. serial monogamy vs. polygamy is a great example; the policy of 'one husband one wife' has more impact on the husband side of the equation than the wife side. Fights against pornography, as far as I can tell, are a mostly internal conflict within women. Women serving in combat roles benefits female officers but hurts female enlisted. Not everything maps onto this neatly, but a surprising amount does.)
It also seems to me that, in general, a belief feeling low-status (instead of wrong) is a potent warning of bias. So I guess woo for LW?
Replies from: Gray↑ comment by Gray · 2011-04-09T00:27:36.396Z · LW(p) · GW(p)
Good post, except I disagree with your first point. When you say that "social life is a war" but qualify that it's a polite, positive-sum war, I think you're stretching the analogy to the breaking point.
In my opinion, economics is the better model: look at social interaction as a sort of market in which people are trading back and forth. People don't like the idea of sex being a commodity, but in a very important sense it is. Friendships and family are also commodities in this way. Acting out of duty corresponds, as I see it, to investing in your relationships with other people. There's always disutility in acting out of duty, but it's an important part of any relationship.
↑ comment by cousin_it · 2011-04-07T16:23:32.786Z · LW(p) · GW(p)
You mean you didn't know that before? You have a sex, you have a side.
You may wish that there were no war. I wish that too. But there's no use denying the war (except for status reasons, as you point out).
Replies from: Vladimir_Nesov, NancyLebovitz, Zack_M_Davis↑ comment by Vladimir_Nesov · 2011-04-07T16:40:37.475Z · LW(p) · GW(p)
You mean you didn't know that before? You have a sex, you have a side.
What do you mean? Given that your sex doesn't determine which side you're on in this conflict (that is, whether you wish to take the effort to improve the status of females, which only applies to the extent and in the sense you believe there is a sufficiently uniform status difference, etc.), it doesn't give a simple heuristic rule for this decision. And without a simple reliable heuristic, it's a nontrivial problem to figure out your own position; many people won't take the effort to solve it, and of those who do, many will pick a position in significant ignorance of the underlying facts, not even mentioning moral facts.
Replies from: cousin_it↑ comment by cousin_it · 2011-04-07T16:48:55.721Z · LW(p) · GW(p)
Your sex determines whether you benefit from side A winning, or from side B winning.
Replies from: Vladimir_Nesov, NancyLebovitz, TheOtherDave↑ comment by Vladimir_Nesov · 2011-04-07T17:02:17.110Z · LW(p) · GW(p)
Your sex determines whether you benefit from side A winning, or from side B winning.
No. This sounds like you assume egoistic values, which are often incorrect. As a decision-maker, you benefit to the extent your decisions (in this case, goals about female status) are right. Which decisions are right in this situation is a nontrivial moral and factual question, on the moral question side particularly where egoistic motives can be in conflict with altruistic motives.
Replies from: cousin_it↑ comment by cousin_it · 2011-04-07T17:47:53.013Z · LW(p) · GW(p)
Hm, I don't think my argument requires assuming any values. If you have anatomical feature X, and someone pushes a button to increase the utility of all people having feature X, then you win. Altruism or egoism is just a detail of your utility function.
Replies from: JGWeissman, Vladimir_Nesov↑ comment by JGWeissman · 2011-04-07T17:50:37.352Z · LW(p) · GW(p)
If you have anatomical feature X, and someone pushes a button to increase the utility of all people having feature X
This assumes some correlation between anatomical feature X and a term in a person's utility function.
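JGWeissman's objection can be made concrete with a toy model (all names and numbers here are invented for illustration, not anything from the thread): whether the button benefits you depends on what your utility function actually weighs, not merely on whether you have feature X.

```python
# Toy model: the button grants +1 raw welfare to every person with
# feature X. An agent's utility is a weighted sum over group welfares,
# so the agent only "wins" if those weights connect to feature X.

def benefit_from_button(has_feature_x, utility_weights):
    """Change in an agent's utility when the button is pressed.

    utility_weights: dict mapping welfare sources to how much this
    agent's utility function weighs them (a pure egoist weighs only
    'self'; an altruist may weigh whole groups).
    """
    delta = 0.0
    if has_feature_x:
        # The agent's own welfare rises, which matters only as much
        # as the agent's utility function cares about 'self'.
        delta += utility_weights.get("self", 0.0)
    # Any altruistic term over X-holders also rises.
    delta += utility_weights.get("people_with_x", 0.0)
    return delta

egoist = {"self": 1.0}
cross_altruist = {"self": 0.0, "people_without_x": 1.0}

print(benefit_from_button(True, egoist))          # 1.0
print(benefit_from_button(True, cross_altruist))  # 0.0
```

The second agent has feature X yet gains nothing, which is the sense in which cousin_it's argument quietly assumes a correlation between anatomy and the terms of one's utility function.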
↑ comment by Vladimir_Nesov · 2011-04-08T01:31:25.040Z · LW(p) · GW(p)
Based on your comment and this exchange, it's not clear to anyone what exactly we are talking about (my first question was "What do you mean?" for a reason). The conflict I took as the topic of the conversation is generally the direction of change in the balance of influence (i.e. status) between partners or potential partners in a relationship from the default established by the social status quo.
If you reason in CDT style, then the only effect of increasing your own influence is improvement of your experience in your present relationship. (Incidentally, I don't see how your sex is relevant to the character of this activity, the salient category seems to be simply your own person.) This way of thinking seems to explain your discussion in this thread best (correct me if you were in fact assuming something else).
Alternatively, if we are talking about influencing the social status quo, then from the (narrow) point of view of any potential heterosexual relationship, you win from the improvement in relevant aspects of background status of your own sex. It would be obvious that the result of such shift is beneficial overall only if you focus primarily on egoistic value effects of its consequences, ignoring the effect lowered background status has on all of women (which is huge scope). This is essentially the sense which I assumed in making this comment. (This works the other way as well, i.e. the effect worsened relationship experience would have on all of men.)
The reason the first looks like the second to me is that from TDT perspective even the personal decisions you make in influencing the course of your own relationship, without intentionally meddling with the global status war, have global effects through decisions made by other people for similar reasons. If you decide to pursue greater influence in your own relationship, this allows you to infer that other people would behave similarly, which makes for a greater damage to opposite sex's values than just your partner's.
So even if we make the reasonable assumption that you hold your own immediate preferences in greater value than other people's, and so you'd be inclined to bargain in your own direction, the combination of possibly greater marginal value of improvement for the other sex with nontrivial scope of the decision makes it non-obvious.
The rest rests on the factual and moral questions of who gets how much greater marginal benefit from shifts in the current default status quo influence. There seem to be convincing arguments for both sexes.
Replies from: wedrifid, Marius↑ comment by wedrifid · 2011-04-08T03:01:54.280Z · LW(p) · GW(p)
Alternatively, if we are talking about influencing the social status quo, then from the (narrow) point of view of any potential heterosexual relationship, you win from the improvement in relevant aspects of background status of your own sex
This is true to the extent that status is your criterion of winning. But while status is an extremely good indicator of what we will act as if we wish to maximize, it is not always what most satisfies our preferences. In the case of sex, for example, higher relative status tends to reduce interest in sex and makes orgasm more difficult to achieve. (Citation needed - anyone recall the studies in question? Likely also an OB post.)
↑ comment by Marius · 2011-04-08T14:55:29.427Z · LW(p) · GW(p)
The background status is not uniform. When females made progress in terms of ability to be seen as good employees/employers, it reduced the relative status of certain male employers/employees, but also increased the relative status of male stay-at-home husbands.
Even in the status game, it doesn't break down on strict gender lines.
When other factors are looked at, the divisions are even less gender-based. If we have a more formal vs less formal consent process, the winners and losers are probably nearly-evenly divided male/female.
Replies from: wedrifid↑ comment by wedrifid · 2011-04-08T15:09:58.177Z · LW(p) · GW(p)
Even in the status game, it doesn't break down on strict gender lines.
In fact breaking down on strict gender lines is more of an exception than a rule. It is status relative to the others within the same gender that is the most valuable resource.
Replies from: Marius↑ comment by Marius · 2011-04-08T15:17:34.496Z · LW(p) · GW(p)
Yes, I very much agree with this. Changing the status of males vs females is unlikely to change my life much at all. Few (if any) people are likely to change sexual orientation due to that kind of status; the effects on promotion/pay are greatly overemphasized. In contrast, changing the status of various professions, or taking certain people out of the dating pool is extremely relevant.
In third-world countries there is more at stake, of course.
↑ comment by NancyLebovitz · 2011-04-07T17:17:32.038Z · LW(p) · GW(p)
It's not that simple. For example, if taking care about consent means there's less sex, but also less drama and trauma among your associates, you might come out ahead.
Replies from: Marius↑ comment by TheOtherDave · 2011-04-07T17:23:26.621Z · LW(p) · GW(p)
So, just to unpack this a bit... looking at your comment at the top of this thread, I infer that the sides are "the user of pickup" and "the victim," and I presume that the user is male and the victim female, and I infer therefore that I (being male) gain a benefit from the user of pickup winning (which I presume means having sex) but no benefit from the victim of pickup winning (which I presume means not having sex).
Did I get that right?
Can you clarify what benefit it is that I'm getting here?
If it matters: I'm fairly certain that all my female sexual partners were setting out to have sex, I'm not quite so certain of that for all of my male sexual partners, and in any case I'm fairly certain of it for all of my partners in the last 15 years or so.
Replies from: Perplexed, cousin_it↑ comment by Perplexed · 2011-04-07T17:51:23.478Z · LW(p) · GW(p)
I infer that the sides are "the user of pickup" and "the victim,"
That makes no sense in this context (as established by Nancy's first mention of sides and cousin_it's response). The sides are "those who condemn (so as to discourage) the use of pickup" and "those who see nothing wrong with pickup". With those definitions of the sides, one's sex definitely does not determine which side one is on, but it does have an influence. And "winning" in this war takes the form of marshaling arguments that convince the soldiers of the other side to defect.
Replies from: taryneast, TheOtherDave↑ comment by taryneast · 2011-04-08T14:36:18.082Z · LW(p) · GW(p)
The sides are "those who condemn (so as to discourage) the use of pickup" and "those who see nothing wrong with pickup".
Forgive me, but this is the fallacy of the excluded middle (it's possible you do not subscribe to there being just two sides, but you are unclear on this point).
The "sides" as I see it also include, at the very least:
"those who condemn certain common practices of pickup that deliberately harm other people, but are fine with (or even encourage) other practices that are not"
↑ comment by TheOtherDave · 2011-04-07T18:00:17.470Z · LW(p) · GW(p)
You may be right... the reason I made my inferences explicit was precisely so that we can be clear about this stuff, rather than move forward as if we were talking about the same thing when we aren't.
That said, I was responding to the sentence: "The goal of pickup is to engineer the most desirable outcome for the user of pickup, not the most desirable outcome for the victim," which seemed to be what Nancy was responding to.
↑ comment by cousin_it · 2011-04-07T17:34:19.766Z · LW(p) · GW(p)
If all your female partners were willing, that doesn't change the fact that you like having sex more than not having sex. Otherwise presumably you would opt out of having sex. I'm not sure what you and Zack_M_Davis are arguing with; perhaps it's the connotations of my remark, rather than its content? If that's the case, I assure you I didn't imply any of the usual connotations, I have more weird ones :-)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-07T17:54:10.977Z · LW(p) · GW(p)
Actually, I quite deliberately didn't argue anything, because I wasn't even sure that I understood what claim you were making.
Instead, I attempted to make my inferences explicit, asked for confirmation, and asked for clarification on the piece that remained unclear to me. (I'd still sort of like those things.)
So it's not surprising that you can't figure out what I'm arguing with, though it's a little puzzling that you assumed I was arguing with anything at all. Re-reading my comment I'm not sure where you got that idea from.
Anyway: I agree that the willingness of my partners doesn't change whether I like sex more than no-sex or vice-versa.
And I'm not sure why it matters, but just to be clear: if I take "I like X more than Y" in a situation to mean that I estimate that I prefer the actual X in that situation to the counterfactual Y, then I've had sex I liked more than no-sex, I've had sex I liked less than no-sex, I've had no-sex I liked more than sex, and I've had no-sex I liked less than sex.
Edit: Um. Apparently the comment I was replying to got deleted while I was replying to it. Never mind, then?
Replies from: cousin_it↑ comment by cousin_it · 2011-04-07T18:01:39.757Z · LW(p) · GW(p)
Sorry, I deleted my comment for approximately the same reasons that you listed here. I often say stupid things and then desperately try to clean them up.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-07T18:17:18.328Z · LW(p) · GW(p)
(grin) No worries. I just leave my desperately stupid things out there; I figure in the glorious future when I stop saying stupid things, the contrast will be all the more striking.
↑ comment by NancyLebovitz · 2011-04-07T16:27:12.777Z · LW(p) · GW(p)
It's actually an interesting question-- to what extent it's a war between the obvious interest groups and to what extent it's a conflict between people who are relatively willing to cooperate and those who are relatively willing to defect.
Replies from: wedrifid↑ comment by wedrifid · 2011-04-07T17:35:20.605Z · LW(p) · GW(p)
It's actually an interesting question-- to what extent it's a war between the obvious interest groups and to what extent it's a conflict between people who are relatively willing to cooperate and those who are relatively willing to defect.
I've got a sneaking suspicion that I would come to the direct opposite conclusion about which 'side' is 'defecting' and which is trying to cooperate.
I would also disagree with you regarding what the sides are. There are people who take a gender-focused side, there are people who fight for there not to be sides at all, and there are people who think 'if you are not with us you are against us'. The people in the middle, as is often the case, get caught in the crossfire.
↑ comment by Zack_M_Davis · 2011-04-07T17:27:17.528Z · LW(p) · GW(p)
EDIT: never mind
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-04-07T17:31:33.030Z · LW(p) · GW(p)
The worst thing you can do while considering a politically-colored question is make political statements, like declaring a side. It's more productive to strive to consider such questions as intellectual curiosities, ignoring the political impact of the discussion itself, even if you do have a clear side associated with huge stakes or affect. Otherwise, the arguments will more often be chosen for reasons other than their truth and relevance.
Replies from: Nornagest, Zack_M_Davis↑ comment by Nornagest · 2011-04-07T17:58:29.381Z · LW(p) · GW(p)
I think you just neatly encapsulated why I cringe a little whenever I see the pickup controversy rearing its head. I strongly agree -- but gender relations seem like about the hardest possible topic to approach as an intellectual curiosity, if the track record of LW and (especially) OB is anything to go by.
Replies from: Vladimir_Nesov, TheOtherDave↑ comment by Vladimir_Nesov · 2011-04-07T19:53:02.544Z · LW(p) · GW(p)
I believe I can reliably approach absolutely anything as an intellectual curiosity, and I don't believe I'm so much of a mutant that this skill is not reproducible.
(This mode does slip sometimes, and I do need to focus, so it's not purely a character trait. When it slips, I produce thoughts that I judge on reflection slightly to significantly incorrect.)
↑ comment by TheOtherDave · 2011-04-07T21:47:05.864Z · LW(p) · GW(p)
Oh, there are many more difficult. Gender relations are just a difficult one that comes up.
Replies from: Vladimir_M, Nornagest↑ comment by Vladimir_M · 2011-04-08T08:43:22.925Z · LW(p) · GW(p)
Oh, there are many more difficult.
In the mainstream discourse it's undoubtedly so, but on LW, I'm not so sure. On many occasions, I've seen people here make comments about topics that are seen as even more inflammatory or outlandish in respectable mainstream circles, only to get calm, rational, and well-argued responses. Certainly I can't think of any topics that are such a reliable discourse-destroyer on LW as the gender relations/PUA issues. I find it a fascinating question why this is so.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-08T14:09:20.096Z · LW(p) · GW(p)
So, I think about race relations, as a somewhat obvious example.
The impression I've gotten is not that LW is capable of an advanced discussion of race relations but not of gender relations, but rather that race relations simply don't come up in conversation as often as gender relations do. (Which isn't too surprising, given how much more fundamental gender-marking is to our language than race-marking.)
But perhaps I'm not giving the community enough credit. If there have been valuable discussions here about race relations, I'd be interested to read them... pointers are welcomed.
↑ comment by Nornagest · 2011-04-07T22:03:31.047Z · LW(p) · GW(p)
Point taken. Its representation here is probably better explained by calling it one of the more difficult ones that we don't have a taboo against discussing, although it's definitely high up on my canonical list of mind-killing subjects outside of LW as well.
↑ comment by Zack_M_Davis · 2011-04-07T17:46:37.355Z · LW(p) · GW(p)
You're right.
↑ comment by Rings_of_Saturn · 2009-03-14T00:50:06.740Z · LW(p) · GW(p)
Yvain:
You've hit on something that I have long felt should be more directly addressed here/at OB. Full disclosure is that I have already written a lot about this myself and am cleaning up some "posts" and chipping away here to get the karma to post them.
It's tough to talk about meditation-based rationality because (a) the long history of truly disciplined mental practice comes out of a religious context that is, as you note, comically bogged down in superstitious metaphysics, (b) it is a more-or-less strictly internal process that is very hard to articulate, and (c) it has become a kind of catch-all category for sloppy new-age thinking about a great number of things (wrongheaded pop quantum theory, anyone?).
Nevertheless, as Yvain notes, there is indeed a HUGE body of practice and tried-and-true advice, complete with levels of mastery and, if you have been lucky enough to know some of the masters, that palpable awesomeness Eliezer speaks of. I'm sure all of this sounds pretty slippery and poppish, but it doesn't have to be. One thing I would like to help get going here is a rigorous discussion, for my benefit and everyone's, about how we can apply the science of cognition to the practice of meditation and vice versa.
Replies from: Eliezer_Yudkowsky, anonym↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-16T07:53:46.211Z · LW(p) · GW(p)
Think you've got enough karma to post already.
↑ comment by anonym · 2009-03-16T07:08:53.609Z · LW(p) · GW(p)
There has been quite a bit of research in recent years on meditation, and the pace seems to be picking up. For a high level survey of recent research on the two primary forms of Buddhist meditation, I'd recommend the following article: Attention regulation and monitoring in meditation. PDF Here
↑ comment by olimay · 2009-03-16T05:49:57.505Z · LW(p) · GW(p)
Yvain, do check out pjeby's work. I have to admit that at some points I found myself reading OB as a self-help attempt. I'm glad I kept up, but dirtsimple.org was the blog I was actually looking for.
Your point about mysticism is interesting, because I find pjeby's perspective on personal action and motivation has a strange isomorphism to Zen thought, even though that doesn't seem to be main intention. In fact, his emphasis seems to be de-mystifying. One of his main criticisms of existing psychological/self-help literature is that the relatively good stuff is incomprehensible to the people who need it most, because they'd need to already be in a successful, rational action mindset in order to implement what's being said.
Anyway, I hope pjeby chimes in so he can offer something better than my incomplete summary...
↑ comment by Vladimir_Golovin · 2009-03-13T11:48:15.849Z · LW(p) · GW(p)
It doesn't take a formal probability trance to chart a path through everyday life - it was in following the results
Couldn't agree more. Execution is crucial.
I can come out of a probability trance with a perfect plan, an ideal path of least resistance through the space of possible worlds, but now I have to trick, bribe or force my messy, kludgy, evolved brain into actually executing the plan.
A recent story from my experience. I had (and still have) a plan involving a relatively large chunk of work, around a full-time month. Nothing challenging, just a 'sit down and do it' sort of thing. But for some reason my brain is unable to see how this chunk of work will benefit my genes, so it just switches into procrastination mode when exposed to this work. I tried to force myself to do it, but now I get an absolutely real feeling of 'mental nausea' every time I approach this task – yes, I literally want to hurl when I think about it.
For a non-evolved being, say an intelligently-designed robot, the execution part would be a non-issue – it gets a plan, it executes it as perfectly as it can, give or take some engineering inefficiencies. But for an evolved being trying to be rational, it's an entirely different story.
Replies from: RobinHanson, Vladimir_Golovin↑ comment by RobinHanson · 2009-03-13T13:23:57.505Z · LW(p) · GW(p)
If one had public metrics of success at rationality, the usual status seeking and embarrassment avoidance could encourage people to actually apply their skills.
Replies from: Vladimir_Golovin, Annoyance↑ comment by Vladimir_Golovin · 2009-03-13T13:52:38.739Z · LW(p) · GW(p)
Shouldn't a common-sense 'success at life' (money, status, free time, whatever) be the real metric of success at rationality? Shouldn't a rationalist, as a General Intelligence, succeed over a non-rationalist in any chosen orderly environment, according to any chosen metric of success -- including common metrics of that environment?
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-03-13T21:38:36.451Z · LW(p) · GW(p)
No.
- If "general intelligence" is a binary classification, almost everyone is one. If it's continuous, rationalist and non-rationalist humans are indistinguishable next to AIXI.
- You don't know what the rationalist is optimizing for. Rationalists may even be less likely to value common-sense success metrics.
- Even if those are someone's goals, growth in rationality involves tradeoffs - investment of time, if nothing else - in the short term, but that may still be a long time.
- Heck, if "rationality" is defined as anything other than "winning", it might just not win for common-sense goals in some realistic environments.
- People with the disposition to become rationalists may tend to also not be as naturally good at some things, like gaining status.
↑ comment by Vladimir_Golovin · 2009-03-14T18:49:56.334Z · LW(p) · GW(p)
Point-by-point:
1. Agreed. Let's throw away the phrase about General Intelligence -- it's not needed there.
2. Obviously, if we're measuring one's reality-steering performance we must know the target region (and perhaps some other parameters like planned time expenditure etc.) in advance.
3. The measurement should measure the performance of a rationalist at his/her current level, not taking into account the time and resources he/she spent to level up. Measuring 'the speed or efficiency of leveling up in rationality' is a different measurement.
4. The definitions at the beginning of the original post will do.
5. On one hand, the reality-mapping and reality-steering abilities should work for any activity, no matter whether the performer is hardware-accelerated for that activity or not. On the other hand, we should somehow take this into account -- after all, excelling at things one is not hardware-accelerated for is a good indicator. (If only we could reliably determine who is hardware-accelerated for what.)
(Edit: cool, it does numeric lists automatically!)
↑ comment by Annoyance · 2009-03-13T15:14:10.254Z · LW(p) · GW(p)
Public metrics aren't enough - society must also care about them. Without that, there's no status attached and no embarrassment risked.
To get this going, you'd also need a way to keep society's standards on-track, or even a small amount of noise would lead to a positive feedback loop disrupting its conception of rationality.
Everyone has at least a little bit of rationality. Why not simply apply yourself to increasing it, and finding ways to make yourself implement its conclusions?
Just sit under the bodhi tree and decide not to move away until you're better at implementing.
↑ comment by Vladimir_Golovin · 2009-03-13T12:48:17.983Z · LW(p) · GW(p)
An idea on how to make the execution part trivial – a rational planner should treat his own execution module as a part of the external environment, not as a part of 'himself'. This approach will produce plans that take into account the inefficiencies of one's execution module and plan around them.
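The idea above can be sketched as a toy planner (everything here, including the numbers and function names, is my own invented illustration of the comment's proposal): instead of assuming its plans get executed, the planner scores each plan by expected value given the execution module's empirical completion rate, treating "this brain" as just another unreliable actuator in the environment.

```python
# Classic planner: assumes a chosen plan is executed perfectly.
def plan_value_naive(payoff, completion_prob=1.0):
    return payoff

# Embedded planner: the execution module is part of the environment,
# with an empirical completion rate and a cost for half-done work.
def plan_value_embedded(payoff, completion_prob, abandonment_cost=0.0):
    return completion_prob * payoff - (1 - completion_prob) * abandonment_cost

# Hypothetical candidate plans: (payoff, observed completion probability)
plans = {
    "month_of_grinding": (100.0, 0.2),  # high payoff, brain balks at it
    "small_daily_steps": (80.0, 0.9),   # lower payoff, actually happens
}

naive_best = max(plans, key=lambda p: plan_value_naive(*plans[p]))
embedded_best = max(plans, key=lambda p: plan_value_embedded(*plans[p]))

print(naive_best)     # picks the plan with the biggest payoff on paper
print(embedded_best)  # picks the plan that survives a procrastinating executor
```

The naive planner chooses the month-long grind (payoff 100 beats 80), while the embedded one chooses the small daily steps (0.9 × 80 = 72 beats 0.2 × 100 = 20), which is exactly the "plan around your execution module's inefficiencies" behavior the comment describes.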
Replies from: thomblake, Yoav Ravid, Psy-Kosh, Nick_Tarleton↑ comment by thomblake · 2009-03-13T21:15:22.640Z · LW(p) · GW(p)
I hope you realize this is potentially recursive, if this 'execution module' happens to be instrumental to rationality. Not that that's necessarily a bad thing.
Replies from: Vladimir_Golovin↑ comment by Vladimir_Golovin · 2009-03-14T18:32:52.260Z · LW(p) · GW(p)
No, I don't (yet) -- could you please elaborate on this?
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-03-24T01:21:43.526Z · LW(p) · GW(p)
Funny how this got rerun on the same day as EY posted about progress on Löb's problem.
↑ comment by Yoav Ravid · 2019-01-26T13:11:13.158Z · LW(p) · GW(p)
What if, first, you just calculate the most beneficial actions you can take (like Scott did), and after that assess each of those using something like Piers Steel's procrastination equation? Then you know which one you're most likely to achieve, and can choose more wisely.
Also, doing the easiest first can sometimes be a good strategy for achieving all of them; Steel calls it a success spiral, where you succeed time after time and that increases your motivation.
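A rough sketch of the idea, using Steel's temporal motivation formula, Motivation = (Expectancy × Value) / (1 + Impulsiveness × Delay), to rank candidate actions. The task names and numbers are made up for illustration; the point is only the ranking mechanism.

```python
def motivation(expectancy, value, impulsiveness, delay_days):
    """Steel's temporal motivation equation: motivation rises with the
    expected value of success and falls as the reward is more delayed
    (scaled by how impulsive the person is)."""
    return (expectancy * value) / (1 + impulsiveness * delay_days)

# Hypothetical candidate actions, scored with made-up parameters.
tasks = {
    "easy_win":    motivation(0.9, 3.0, 1.0, 1),    # likely success, payoff soon
    "big_project": motivation(0.4, 10.0, 1.0, 30),  # valuable but distant
}

# Doing the easy win first (the "success spiral") would then raise
# expectancy for the harder task on a later pass.
print(max(tasks, key=tasks.get))  # 'easy_win'
```

Note how the big project's larger value (10 vs. 3) is swamped by its low expectancy and long delay, matching the intuition that the nominally best action is often not the one you'll actually complete first.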
↑ comment by Psy-Kosh · 2009-03-13T21:12:14.481Z · LW(p) · GW(p)
Well, ideally one considers the whole of themselves when doing the calculations, but it does make the calculations tricky.
And that still doesn't answer exactly how to take it into account. ie, "okay, I need to take into account the properties of my execution module, find ways to actually get it to do stuff. How?"
↑ comment by Nick_Tarleton · 2009-03-13T21:25:02.225Z · LW(p) · GW(p)
However, treating the execution module as external and fixed may demotivate attempts to improve it.
(Related: Chaotic Inversion)
↑ comment by roland · 2009-03-13T07:16:05.759Z · LW(p) · GW(p)
Yvain,
You make a great point here. AFAIK it is common knowledge that a lot of great intellectuals were great procrastinators. Overcoming one's bad habits is key. But I wonder what can be done in that regard, since so much is defined by genetics.
comment by Vladimir_Golovin · 2009-03-13T13:41:39.181Z · LW(p) · GW(p)
Why aren't "rationalists" surrounded by a visible aura of formidability? Why aren't they found at the top level of every elite selected on any basis that has anything to do with thought? Why do most "rationalists" just seem like ordinary people, perhaps of moderately above-average intelligence, with one more hobbyhorse to ride?
Because they don't win? Because they don't reliably steer reality into narrow regions other people consider desirable?
I've met and worked with several irrationalists whose models of reality were, to put it mildly, not correlated with said reality, with one explicit, outspoken anti-rationalist with a totally weird, alien epistemology among them. All these people had a couple of interesting things in common.
On one hand, they were often dismal at planning – they were unable to see obvious things, and they couldn't be convinced otherwise by any arguments appealing to 'facts' and 'reality' (they universally hated these words).
On the other hand, they were surprisingly good at execution. All of them were very energetic people who didn't fear any work or situation at all, and I almost never saw any of them procrastinating. Could this be because their minds, due to their poor predictive ability, were unable to see the real difficulty of their tasks and thus avoided auto-switching into procrastination mode?
(And a third observation – all these people excelled in political environments. They tended to interpret their surroundings primarily in terms of who is kin to whom, who is a friend of whom, who is sexually attracted to whom, what others think of me, who is the most influential dude around here, etc. What they lost due to their desynchronization with factual reality, they gained back thanks to their political aptness. Do rationalists excel in political environments?)
Replies from: Rings_of_Saturn, Annoyance↑ comment by Rings_of_Saturn · 2009-03-14T00:16:36.638Z · LW(p) · GW(p)
Vladimir:
It seems you are being respectful of the anonymity of these people, and very well, that. But you pique my curiosity... who were these people? What kind of group was it, and what was their explicit irrationality all about?
I can think of a few groups that might fit this mold, but the peculiar way you describe them makes me think you have something very specific and odd in mind. Children of the Almighty Cthulhu?
Replies from: Vladimir_Golovin, Vladimir_Golovin↑ comment by Vladimir_Golovin · 2009-03-14T17:30:44.854Z · LW(p) · GW(p)
I’ll describe the three most interesting cases.
Number One is a Russian guy, now in his late 40s, with a spectacular youth. Among his trades were smuggling (during the Soviet era he smuggled brandy from Kazakhstan to Russia in the water system of a railway car), teaching in a ghetto college (where he inadvertently tamed a class of delinquents by hurling a wrench at their leader), leading a programming lab in an industrial institute, starting the first 3D visualization company in our city, reselling TV advertising time at a great margin (which he obtained by undercover deals involving key TV people and some outright gangsters), and saving the world by trying to find venture funding for a savant inventor who supposedly had a technology enabling geothermal energy extraction (I also worked together with them on this project). He was capable of totally crazy things, such as harpooning a wall portrait of a notorious Caucasus clanlord in a room full of his followers. He had lots of money during his successful periods, but was unable to convert this into a longer-term success.
Number Two is a deaf-mute woman, now in her 40s, who owns and runs a web development company. Her speech is distorted, she reads people by the lips, and I wouldn’t rate her particularly attractive – but despite all this she is able to consistently score excellent development / outsourcing contracts with top nationwide brands. Unfortunately, she often forces totally weird decisions upon the development team – and when they try to convince her that the decisions are rubbish by appealing to ‘facts’ and ‘reality’, she takes it as a direct attack on her status. A real example – she once actually imposed an official ban on criticizing her company, decisions of her company, employees and management of her company, partners of her company, products of her company and everything else directly or indirectly related to her company in all communication channels (Skype, bug trackers, IMs, phone conversations, forums etc.)!
Number Three is the most spectacular one – a Russian guy of Jewish descent, around 30, an avid status seeker with an alpha-male attitude who owns and runs several web / game outsourcing companies, plus has a high-level, high-status management/consultancy job in a well-known nationwide online company. He is almost always able to somehow secure funding for his companies and projects, including those which I personally wouldn’t consider marketable. He lives in several cities at once, is excellent at remote leadership and hiring, and is quick to act – when he learns of a talented programming or art team he could potentially partner with, he gets on a plane just to meet them in person.
It was this guy who made me seriously wonder about how immensely weird people’s worldviews can be. This guy constructs his worldview by cherry-picking pieces that appeal to his sense of truth from Abrahamic religions (excluding Islam and the Old Testament), Eastern teachings and, if I remember correctly, even fiction. He hates concepts like ‘logic’, ‘science’, ‘fact’ and ‘reality’ with a passion, and believes that they are evil concepts designed by Anglo-Saxons to corrode ‘good’ worldviews (such as the New-Testament Christian one), and he is actively protecting his worldview from being corroded by evil ideas. Here’s an actual example of his reasoning: “Dawkins is an Anglo-Saxon, and all Anglo-Saxons are evil liars, therefore all ideas Dawkins advocates are evil lies, therefore evolution is a lie and is evil.” He believes that The Enemy himself is sponsoring the evolutionary science by actually providing money, fame and other goods to its proponents. He is sincerely unable to understand how people can be genuinely altruistic without a religious upbringing (and of course, he doesn’t want to consider things like mirror neurons).
So, it was this guy who made me ask myself questions like ‘What is my definition of truth?’
Replies from: Rings_of_Saturn, Vladimir_Nesov↑ comment by Rings_of_Saturn · 2009-03-14T18:08:20.480Z · LW(p) · GW(p)
Thanks, Vladimir. You have interesting friends!
↑ comment by Vladimir_Nesov · 2009-03-14T20:54:13.307Z · LW(p) · GW(p)
How do you translate that into a question about the definition of truth? The third guy is sufficiently rational to be successful; I guess he's got excellent native intelligence allowing him to correctly judge people and influence their decisions, and his verbal descriptions of his beliefs are mostly rationalization, not hurting his performance too much. If he were a rationalist, he'd probably be even more successful (or he'd find a different occupation).
Replies from: Vladimir_Golovin↑ comment by Vladimir_Golovin · 2009-03-14T21:39:02.741Z · LW(p) · GW(p)
Yes, the guy is smart, swift-thinking and quick to act when it comes to getting projects off the ground, connecting the right people and getting funding from nowhere (much less so when it comes to technical details and fine-grained planning). His actual decisions are effective, regardless of the stuff he has in the conscious part of his head.
(Actually quite a lot of people whose 'spoken' belief systems are suboptimal or plain weird are perfectly able to drive cars, run companies, avoid tigers and otherwise deal with the reality effectively.)
But can we call such 'hardware-accelerated' decisions rational? I don't know.
Regarding your question: we had obvious disagreements with this guy, and I spent some time thinking about how we could resolve them. As a result, I decided that trying to resolve them (on a conscious level, of course) is futile unless we have an agreement about fundamental things -- what we define as truth, and which methods we can use to derive truths from other truths.
I didn't think much about this issue before I met him (a scientific, or more specifically, Popperian worldview was enough for me), and this was the first time I had to consciously think about the issue. I even doubt I knew the meaning of the term 'epistemology' back then :)
↑ comment by Vladimir_Golovin · 2009-03-14T18:20:01.212Z · LW(p) · GW(p)
I can think of a few groups that might fit this mold
Rings, what groups did you have in mind?
↑ comment by Annoyance · 2009-03-13T15:01:56.808Z · LW(p) · GW(p)
I have also noticed that people who are good at manipulating and interacting with people are bad at manipulating and interacting with objective reality, and vice versa.
The key difference is that the politicals are ultimately dependent on the realists, but not vice versa.
comment by orthonormal · 2020-07-08T19:01:07.128Z · LW(p) · GW(p)
Unfortunately, this hasn't aged very impressively.
Despite the attempts to build the promised dojo (CFAR, Leverage/Paradigm, the EA Hotel, Dragon Army, probably several more that I'm missing), rationalists aren't winning in this way. The most impressive result so far is that a lot of mid-tier powerful people read Slate Star Codex, but I think most of that isn't about carrying on the values Eliezer is trying to construct in this sequence - Scott is a good writer on many topics, most of which are at best rationality-adjacent. The second most impressive result is the power of the effective altruism movement, but that's also not the same thing Eliezer was pointing at here.
The remaining positive results of the 2009 rationality community are a batch of happy group houses, and MIRI chugging along its climb (thanks to hard-to-replicate personalities like Eliezer and Nate).
I think the "all you need is to try harder" stance is inferior to the "make a general postmortem of 'rationalist dojo' projects" stance, and I'd like to see a systematic attempt at the latter: assembling public information, interviewing people in all of these groups, and integrating all the data on why they failed to live up to their promises.
comment by ryleah · 2014-02-28T21:57:41.763Z · LW(p) · GW(p)
Why aren't "rationalists" surrounded by a visible aura of formidability? Why aren't they found at the top level of every elite selected on any basis that has anything to do with thought? Why do most "rationalists" just seem like ordinary people, perhaps of moderately above-average intelligence, with one more hobbyhorse to ride?
I'm relatively new to rationality, but I've been a nihilist for nearly a decade. Since I started taking the development of my own morality seriously, I've put about 3500 hours of work into developing and strengthening my ethical framework. Looking back at myself when nihilism was just a hobbyhorse, I wasn't noticeably moral, and I certainly wasn't happy. I was a guy who knew things, but the things I knew never got put into practice. Five years later, I'm a completely different person than I was when I started. I've made a few discoveries, but not nearly enough to account for the radical shifts in my behavior. My behavior is different because I practice.
I know a few other nihilists. They post pictures of Nietzsche on Facebook, come up with clever arguments against religion, and have read "The Anti-Christ." They aren't more moral just because they subscribe to an ethos that requires them to develop their own morality, and from that evidence I can assume that rationalists won't be more rational just because they subscribe to an ethos that demands they think more rationally. Changing your mind requires more than just reading smart things and agreeing with them. It requires practice.
In the spirit of put up or shut up, I'm going to make a prediction. My prediction is that if we keep track of how often we use a rationalist technique in the real world, we will find that frequency of use correlates with the frequency at which we visualize and act out using that technique. Once we start quantifying frequency of use, we'll be able to better understand how rationalism impacts our abilities to reach our goals. Until we differentiate between enthusiasts and practitioners, we might as well be tracking whether liking a clever article on Facebook correlates with success.
comment by Psy-Kosh · 2009-03-13T03:59:25.723Z · LW(p) · GW(p)
While developing a rationality metric is obviously crucial, I have this nagging suspicion that what it may take is simply a bunch of committed wanna-be rationalists to just get together and, well, experiment and teach and argue, etc., with each other in person regularly, try to foster explicit social rules that support rather than inhibit rationality, and so on.
From there, at least use a fuzzy "this seems to work / not work" type metric, even if it's rather subjective and imprecise, as a STARTING POINT, until one can measure more precisely, until one gets a better sense of exactly what to look for, explicitly.
But, my main point is my suspicion that "do it, even if you're not entirely sure yet what you're doing, just do it anyway and try to figure it out on the fly" may actually be what it takes to get started. If nothing else, it'll produce a nice case study in failure that one can at least look at and say "okay, let's actually try to work out what we did wrong here".
EDIT: hrm... maybe I ought to reconsider my position. Will leave this up, at least for now, but with the added note that now I'm starting to suspect myself of basically just trying to "solve the problem without having to, well, actually solve the problem".
Replies from: billswift, Regex↑ comment by billswift · 2009-03-13T05:11:34.650Z · LW(p) · GW(p)
Before you consider taking this down, you might want to read Thomas Sowell's "A Conflict of Visions" and "Knowledge and Decisions". Some (Many) problems cannot be "solved" but must be continually "worked on". I suspect most "self-improvement" programs are of this type. (Sowell doesn't address this, his discussion and examples are all from economic and social problems, but I think they're applicable.)
↑ comment by Regex · 2015-10-11T02:32:35.420Z · LW(p) · GW(p)
I've been predicted! This almost exactly describes what I've been up to recently... (Will make a post for it later. Still far too rough to show off. Anyone encountering this comment in 2016 or later should see a link in my profile. Otherwise, message me.)
Edit: Still very rough, and I ended up going in a slightly different direction than I'd hoped. Strange looking at how much my thoughts of it changed in a mere two months. Here it is
comment by BrandonReinhart · 2009-03-13T01:40:32.530Z · LW(p) · GW(p)
Every dojo has its sensei. There is a need for a curriculum, but also for skilled teachers to guide the earnest student. LessWrong and Overcoming Bias have, to some extent, been the dojo in which the students train. I think that you may find a lot of value in just jumping into a project like this: starting a small school that meets twice a week to practice a particular skill of rationality. A key goal of the budding school is to train the future's teachers.
One of my barriers to improving my rationality is little awareness of what the good reading and study material is. A curriculum of reading material -- rationalist homework -- would help me greatly. Furthermore, I have no friends that are similarly interested in the subject to bounce ideas off of or "train" with.
I train Jiu-Jitsu with several friends. We learn the same lessons, but learn at different rates. We discover different insights and share them. We practice techniques on each other and on opponents more and less skilled than ourselves. This dynamic is something rationalist dojos could benefit from.
Edit, Additional Comments:
The sense that a particular skill should be systematized and trained comes, in part, from the realization that the training conveys a measurable formidability. Problem #1 and Problem #2, then, are entangled: without a way to validate training, one cannot say "I have defeated my opponent because of my training", and it is the ability to demonstrate mastery that motivates others to become students.
Many readers of this site and Overcoming Bias are here because of the demonstration of budding formidability found in the insights in OB posts. We read insights and learn techniques -- sloppily, like learning to wrestle from a mail order program -- and we want to be able to produce similar insights and be similarly formidable.
And conversely they don't look at the lack of visibly greater formidability, and say, "We must be doing something wrong."
Is the "lack of visibly greater formidability" actually visible? The wandering master sees the local students' deficiencies that the students are blind to. It is only when those students' established leader is defeated by the wandering master that the greater formidability becomes apparent. Where is rationality's flying guillotine?
Replies from: pjeby↑ comment by pjeby · 2009-03-13T04:09:01.548Z · LW(p) · GW(p)
More precisely, what is rationality's method for scoring matches? If you don't have that, you have no way to know whether the flying guillotine is any good, or whether you're even getting better at what you're doing within your own school.
To me, the score worth caring about most, is how many of your own irrational beliefs, biases, conditioned responses, etc., you can identify and root out... using verifiable criteria for their removal... as opposed to simply being able to tell that it would be a good idea to think differently about something. (Which is why I consider Eliezer "formidable", as opposed to merely "smart": his writing shows evidence of having done a fair amount of this kind of work.)
Unfortunately, this sort of measurement is no good for scoring matches, unless the participants set out at the beginning of the "match" to prove that they were more wrong than their opponent!
But then, neither is any other sort of competitive measurement any good, as far as I can see. If you use popularity, then you are subject to rhetorical effects, apparent intelligence, status, and other biasing factors. If you use some sort of reality-based contest, the result needn't necessarily correlate with rationality or thinking skills in general. And if you present a puzzle to be solved, how will you judge the solution, unless you're at least as "formidable" as the competitors?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-09-14T17:12:49.439Z · LW(p) · GW(p)
Any system of measurement is subject to Goodhart's Law. This is really rough when you're trying to engage with reality.
comment by Daniel_Burfoot · 2009-03-13T04:21:16.962Z · LW(p) · GW(p)
For a nice literary description of what it means to have an "aura of awesome" try "The String Theory" by David Foster Wallace. Wallace writes of a mid-level pro tennis player: "The restrictions on his life have been, in my opinion, grotesque... But the radical compression of his attention and sense of himself have allowed him to become a transcendent practitioner of an art."
Perhaps in the future humans will achieve the same level of excellence at the Art of Rationality as some currently do at the Art of Tennis.
http://www.esquire.com/features/sports/the-string-theory-0796
comment by infotropism · 2009-03-13T02:28:49.723Z · LW(p) · GW(p)
On a side note, we have religious schools where a religion, such as Christianity, is part of the curriculum. This indoctrinates young minds very early in their lives, and leaves them scarred, biased in most cases for the rest of their existence.
If we had, on the other hand, schools where even just the basics of rationality and related topics -- game theory, economics, the scientific method, probability, biases, etc. -- were taught, what a difference it would make.
The sooner you kickstart rationality in a person, the longer they have to learn and practice it, obviously. But if those teachings are part of their formative experiences, from childhood to early adulthood, where their personalities, dreams and goals are being put together, how differently would they organize their lives ...
Replies from: taryneast
comment by [deleted] · 2009-03-13T11:14:09.153Z · LW(p) · GW(p)
deleted
Replies from: David_Gerard↑ comment by David_Gerard · 2011-02-21T14:14:02.070Z · LW(p) · GW(p)
If rationality is about winning by knowing the truth, and general intelligence is correlated with "positive life outcomes", then a training program should be based on the steps that are typically taken by smart people. So why not just train in probability theory, logic and science, and use regular exams as a measure of your "general rationality"?
Add "chimpanzee tribal politics" and "large-scale human politics" and you might be onto a winner. The PPE curriculum is worth drawing on heavily, for example - PPE plus science and technology, perhaps. But we could easily end up with a ten-year undergraduate degree :-)
comment by zaph · 2009-03-13T10:24:25.702Z · LW(p) · GW(p)
I see what you're saying about rationality being trained in a pure fashion (where engineering, the sciences in general, etc. are - hopefully - "applied rationality"). One thing I don't see you mention here, though it was a theme in your 3 worlds story and is also a factor in martial arts training, is emotional management. That's crucial for rationality, since it will most likely be our feelings that lead us astray. Look at how the feeling of "trust" did in Madoff's investors. Muay Thai and Aikido deal with emotions differently, but each trains people to overcome their basic fear reactions with something else. An awesome rationalist, to me, would be someone who can maintain rationality when the situation is one of high emotion.
Replies from: khafra↑ comment by khafra · 2011-04-08T19:17:12.722Z · LW(p) · GW(p)
I wonder if this comment inspired Patrissimo's inaugural post on his new Rational Poker site.
Replies from: zaph
comment by PhilGoetz · 2009-03-13T05:30:07.705Z · LW(p) · GW(p)
Just an observation: Few modern American karate schools ever let you hit someone, except when a lot of padding is involved. Fighting is not usually an element in exams below the blackbelt level. Competition is usually optional and not directly linked to advancement. I've seen students attain advanced belts without having any real-life fighting ability.
(The term "dojo" is Japanese, and I think most Japanese martial artists study Judo or Aikido, which are not subject to these criticisms.)
Replies from: roland, Psy-Kosh, ABranco↑ comment by roland · 2009-03-13T08:05:51.473Z · LW(p) · GW(p)
You are looking at the wrong art, Phil; go to a boxing or Muay Thai school and you will see real hitting. Btw, as a martial artist myself, I don't consider karate a serious martial art, and part of that is for the reasons you stated. Although I think there is full-contact karate, which is a serious art.
PS: If you are looking for a good martial art look for one where the training involves a lot of realistic sparring. IMHO there should be sparring almost every time you train.
↑ comment by Psy-Kosh · 2009-03-13T06:22:28.428Z · LW(p) · GW(p)
Which criticism? If you mean to say Aikido is competitive -- well, depending on which flavor, it often doesn't have much in the way of competition... as such. The training method involves people pairing up and basically taking turns attacking and defending, with the "defending" person being the one actually doing whatever technique is in question, but the "attacker" is supposed to allow it, or at least not overly resist/fight back.
Or, did I misunderstand?
↑ comment by ABranco · 2009-10-13T04:03:55.475Z · LW(p) · GW(p)
There's a question begging to be asked here: what is a good martial art? Is it one that brings inner calm and equilibrium in itself? Or one that is effective in keeping aggression at bay?
Not that those aren't correlated, but some martial arts excel more in the former and in the environment of feudal Japan. I doubt the exuberance and aesthetics of most of those arts prove effective, however, confronting the dangers of modern cities.
In this sense, something much less choreographic or devoid of ancient philosophy — such as the straightforward and objective Israeli self-defense krav maga — seems to be much more effective.
What is curious here is: a great deal of krav maga training involves lots of restraining, since hitting "for real" would mean fractured necks or destroyed testes. So there's no competition, either.
Can it be that in martial arts there's somehow an inverse correlation between the potential for real-life damage (and therefore effectiveness) and the realism with which the training is executed?
Replies from: Douglas_Knight, PhilGoetz↑ comment by Douglas_Knight · 2009-10-13T04:15:39.187Z · LW(p) · GW(p)
some martial arts excel more in the former and in the environment of feudal Japan.
No empty-handed martial arts are extant from feudal Japan. They were illegal then, thus secret.
Replies from: taryneast↑ comment by taryneast · 2011-04-07T12:17:58.484Z · LW(p) · GW(p)
Jujitsu is an empty-handed martial art of the Koryu (or traditional) school (according to Wikipedia). :)
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2011-04-08T00:28:41.494Z · LW(p) · GW(p)
Yes, jiujitsu is an exception. I learned that sometime in the past two years, but failed to update my comment ;-)
The precise statement is that samurai had a monopoly on force and it was illegal for others to learn martial arts. Thus extant feudal Japanese martial arts were for samurai. Sometimes samurai were unarmed, hence jiujitsu, though it assumes both combatants are heavily armored.
What I really meant in my comment was that karate was imported around the end of the shogunate, and that judo and aikido were invented around 1900. However, they weren't invented from scratch, but adapted from feudal jiujitsu. They probably have as much claim to that tradition as brand-name jiujitsu. In any event, jiujitsu probably wasn't static or monolithic in 1900, either.
↑ comment by PhilGoetz · 2009-10-13T04:13:59.894Z · LW(p) · GW(p)
Can it be that in martial arts there's somehow an inverse correlation between the potential for real-life damage (and therefore effectiveness) and the realism with which the training is executed?
Yes. Certainly for judo vs. most other martial arts. (Although I wouldn't call judo ineffective - it can be used in many situations where you wouldn't use other martial arts at all.)
Replies from: Mercurial↑ comment by Mercurial · 2011-11-22T01:36:48.289Z · LW(p) · GW(p)
[Judo] can be used in many situations where you wouldn't use other martial arts at all.
I'd be really interested in hearing what those circumstances are. I usually make the same claim about Aikido (e.g., you probably don't want to crush Uncle Mortimer's trachea just because he happened to grab a knife in his drunken stupor).
Replies from: khafra↑ comment by khafra · 2011-11-22T21:28:12.752Z · LW(p) · GW(p)
I'd call the reality-joint-cleaving line the one between adrenaline-trigger training and adrenaline-control training. Most training in traditional arts like Kuntao Silat and modern ones like the now-deprecated USMC LINE system involves using fear and stress as a trigger to start a sequence of techniques that end with disabling or killing the attacker. Most training in traditional arts like Tai Chi and (more) modern ones like Aikido involves retaining the ability to think clearly and act in situations where adrenaline would normally crowd out "system 2" thinking.
Any art can be trained in either way. A champion boxer would probably be calm enough to use a quick, powerful jab and knock the knife out of Uncle Mortimer's hand in a safe direction. A Marine with PTSD might use the judo-like moves from the LINE system to throw him, break several bones, and stomp on his head before realizing what he was doing.
A less discrete way to look at it adapts the No Free Lunch theorem: A fighting algorithm built for a specific environment like a ring with one opponent and a limited set of moves, or a field of combat with no legal repercussions and unskilled opponents, can do well in their specific setting. A more general fighting algorithm will perform more evenly across a large variety of environments, but will not beat a specialized algorithm in its own setting unless it's had a lot more training.
Replies from: Mercurial↑ comment by Mercurial · 2011-11-23T04:22:08.980Z · LW(p) · GW(p)
I'd call the reality-joint-cleaving line the one between adrenaline-trigger training and adrenaline control training.
That is an excellent point. My father and I still sometimes get into debates that pivot on this. He says that in a real fight your fight-or-flight system will kick in, so you might as well train tense and stupid since that's what you'll be when you need the skills. But I've found that it's possible to make the sphere of things that don't trigger the fight-or-flight system large enough to encompass most altercations I encounter; it's definitely the harder path, but it seems to have benefits outside of fighting skill as well.
A less discrete way to look at it adapts the No Free Lunch theorem...
Possibly! I think that in the end, what I most care about in my art is that I can defend myself and my family from the kinds of assaults that are most likely. I'm not likely to enter any MMA competitions anytime soon, so I'm pretty okay with the possibility that my survival skills can't compete with MMA-trained fighters in a formal ring.
comment by localdeity · 2024-12-03T20:16:32.595Z · LW(p) · GW(p)
The thing that comes to mind, when I think of "formidable master of rationality", is a highly experienced engineer trying to debug problems, especially high-urgency problems that the normal customer support teams haven't been able to handle. You have a fresh phenomenon, which the creators of the existing product apparently didn't anticipate (or if they did, they didn't think it worth adding functionality to handle it), which casts doubt on existing diagnostic systems. You have priors on which tools are likely to still work, priors on which underlying problems are likely to cause which symptoms; tests you can try, each of which has its own cost and range of likely outcomes, and some of which you might invent on the spot; all of these lead to updating your probability distribution over what the underlying problem might be.
Medical diagnostics, as illustrated by Dr. House, can be similar, although I suspect the frequency of "inventing new tests to diagnose a never-before-seen problem" is lower there.
comment by haig · 2009-03-13T08:33:34.960Z · LW(p) · GW(p)
Isn't this a description of what a liberal arts education is supposed to provide? The skills of 'how to think' not 'what to think'? I'm not too familiar with the curriculum since I did not attend a liberal arts college, instead I was conned into an overpriced private university, but if anyone has more info please chime in.
Replies from: David_Gerard↑ comment by David_Gerard · 2011-02-21T14:15:16.461Z · LW(p) · GW(p)
That's what a liberal arts curriculum was originally intended to teach, yes - it's just a bit out of date. An updated version would be worth working out and popularising.
comment by mike_hawke · 2023-12-23T20:13:01.443Z · LW(p) · GW(p)
Has Eliezer made explicit updates about this? Maybe @Rob Bensinger [LW · GW] knows. If he has, I'd like to see it posted prominently and clearly somewhere. Either way, I wonder why he doesn't mention it more often. Maybe he does, but only in fiction.
[...] I think that recognizing successful training and distinguishing it from failure is the essential, blocking obstacle.
Does this come up in the Dath Ilan stories?
There are experiments done now and again on debiasing interventions for particular biases, but it tends to be something like, "Make the students practice this for an hour, then test them two weeks later." Not, "Run half the signups through version A of the three-month summer training program, and half through version B, and survey them five years later."
Surely there is more to say about this now than in 2009. Eliezer had some idea of the replication crisis back then, but I think he has become much more pessimistic about academia in the time since.
But first, because people lack the sense that rationality is something that should be systematized and trained and tested like a martial art, that should have as much knowledge behind it as nuclear engineering, whose superstars should practice as hard as chess grandmasters, whose successful practitioners should be surrounded by an evident aura of awesome.
I think there's gotta be more to say about this too. Since then we have seen Tetlock's Superforecasting, Inadequate Equilibria[1], the confusing story of CFAR[2][3][4], and the rise to prominence of EA. I can now read retrospectives by accomplished rationalists arguing [LW · GW] over [LW · GW] whether rationality increases accomplishment, but I always come away feeling highly uncertain. (Not epistemically helpless, but frustratingly uncertain.) What do we make of all this?
Eliezer asks:
Why are there schools of martial arts, but not rationality dojos? (This was the first question I asked in my first blog post.) Is it more important to hit people than to think?
My answer, which gets progressively less charitable, and is aimed at no one in particular: thinking rationally appears to be a lower priority than learning particular mathematical methods, obtaining funding and recruits, mingling at parties, following the news, scrolling social media, and playing videogames.
comment by MTGandP · 2012-11-01T17:08:35.944Z · LW(p) · GW(p)
Eliezer raises the issue of testing a rationality school. I can think of a simple way to at least approach this: test the students for well-understood cognitive biases. We have tests for plenty of biases; some of the tests don't work if you know about them, which surely these students will, but some do, and we can devise new tests.
For example, you can do the classic test of confirmation bias where you give someone solid evidence both for and against a political position and see if they become more or less certain. Even people who know about this experiment should often still fall prey to it—if they don't, they have demonstrated their ability to escape confirmation bias.
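Scoring such a test is straightforward once you have before-and-after confidence reports. Here's a minimal sketch, where the subjects, numbers, and polarization threshold are all illustrative assumptions rather than a validated instrument: subjects state their confidence in a position, read balanced evidence for and against it, state their confidence again, and anyone who moved *further from uncertainty* after balanced evidence gets flagged as polarizing.

```python
# Illustrative scoring for a confirmation-bias test. Subjects report
# confidence (0-1) in a position before and after reading balanced
# evidence. Becoming MORE confident after balanced evidence --
# "attitude polarization" -- is the bias signature we look for.

def polarization_score(prior: float, posterior: float) -> float:
    """Positive if the subject moved further from maximal uncertainty (0.5)."""
    return abs(posterior - 0.5) - abs(prior - 0.5)

def flag_biased(subjects: dict[str, tuple[float, float]],
                threshold: float = 0.05) -> list[str]:
    """Names of subjects who polarized by more than `threshold`."""
    return [name for name, (prior, post) in subjects.items()
            if polarization_score(prior, post) > threshold]

# Invented example data: (confidence before, confidence after).
subjects = {
    "alice": (0.70, 0.85),  # polarized: balanced evidence strengthened her view
    "bob":   (0.70, 0.65),  # moved toward uncertainty, as a rational updater should
    "carol": (0.30, 0.10),  # polarized in the opposite direction
}
print(flag_biased(subjects))  # -> ['alice', 'carol']
```

The threshold is there only to absorb reporting noise; a real instrument would also need many items per subject and a control group, since a single polarized answer could just reflect which half of the evidence the subject found genuinely stronger.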
comment by hcutter · 2017-02-22T16:31:55.254Z · LW(p) · GW(p)
As a thought, could it be that one of the major obstacles standing in the way of the creation of a "rationality dojo" is the public perception (however inaccurate) that such already exists in not just one but multiple forms? Take the average high school debate club as one example: participants are expected to learn to give a reasoned argument, and to avoid fallacious reasoning while recognizing it in their opponents. Another example would be maths classes, wherein people are expected to learn how to construct a sound mathematical proof. I very much doubt that most people would understand the distinction between these and the proposed "rationality dojo", which would make it very hard to establish one.
Replies from: ChristianKl↑ comment by ChristianKl · 2017-02-22T17:32:44.224Z · LW(p) · GW(p)
As a thought, could it be that one of the major obstacles standing in the way of the creation of a "rationality dojo" is the public perception (however inaccurate) that such already exists in not just one but multiple forms?
In 2009 there were no rationality dojos, but today there are multiple ones in different cities.
Take the average high school debate club as one example: participants are expected to learn to give a reasoned argument, and to avoid fallacious reasoning while recognizing it in their opponents.
Debate in clubs like this is about finding good arguments; it's not about finding out which side is right.
Replies from: hcutter↑ comment by hcutter · 2017-02-22T18:28:27.609Z · LW(p) · GW(p)
In 2009 there were no rationality dojos, but today there are multiple ones in different cities.
I'm new here, and I still have a great deal of content to catch up on reading, so it would be helpful if you could clarify: are you here referring to the Less Wrong meetup groups as "rationality dojos", or something else which has been created to fill this void since 2009?
Debate in clubs like this is about finding good arguments, it's not about finding out which side is right.
I thought I had been very careful to draw a clear distinction between what such clubs are about and actual rationality, while still contending that the perception of the average person (the non-rationalist) is that they are the same. Was I unclear? And, if so, how could I have been more clear than I was?
Replies from: ChristianKl↑ comment by ChristianKl · 2017-02-22T20:05:37.650Z · LW(p) · GW(p)
are you here referring to the Less Wrong meetup groups as "rationality dojos", or something else which has been created to fill this void since 2009?
I'm not referring to regular meetups.
CFAR started having a weekly event they called a dojo. Following that blueprint, other cities have started similar groups.
In Berlin we have a weekly dojo. Australia seems to have a monthly dojo in Melbourne and in Sydney. Ohio also has a dojo: http://rationality-dojo.com/
I thought I had been very careful to draw a clear distinction between what such clubs are about and actual rationality, while still contending that the perception of the average person (the non-rationalist) is that they are the same
Okay. I should have been clearer. It doesn't matter what the average person thinks. A group doesn't need the average person to become a member to be successful. There just need to be enough people who care enough about the idea to become members. If a group provides value to its members and the members tell other people about it, it can grow.
Replies from: hcutter↑ comment by hcutter · 2017-02-24T13:28:59.870Z · LW(p) · GW(p)
Thank you for clarifying. I wasn't aware of those, and to be honest they seem a bit difficult to find information about via Less Wrong as a new reader. Meetups are publicized in the sidebar, but nothing about these dojos. Not even under the About section's extensive list of links. Which surprises me, if the creation of these dojos was a goal of Eliezer's from his very first blog post here.
If appeal to those who already care about rationality, followed by word of mouth advertising, is the approach that the dojos have decided to take rather than a more general appeal to the populace as part of raising the sanity waterline, then I concede the point.
comment by Annoyance · 2009-03-13T14:57:37.101Z · LW(p) · GW(p)
It's easy to define success in martial arts. Defining 'rationality' is harder. Have you done so yet, Eliezer?
Even in martial arts, many of the schools of thoughts are essentially religions or cults, completely unconcerned with fighting proficiency and deeply concerned with mastering the arcane details of a sacred style passed on from teacher to student.
Such styles often come with an unrealistic conviction that the style is devastatingly effective, but there is little concern with testing that.
See also: http://www.toxicjunction.com/get.asp?i=V2741
I've read a great many comments and articles by people talking about how karate black belts are being seriously beaten by people with real-world fighting experience - pimps, muggers, etc. Becoming skilled in an esoteric discipline is useful only if that discipline is useful.
Do not seek to establish yourself as a sensei. Do not seek to become a "master of the art". Instead, try to get better at fighting - or, in this case, thinking correctly - even if you don't get to wear a hood and chant about 'mysteries'.
Replies from: Eliezer_Yudkowsky, Vladimir_Golovin↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T17:13:23.990Z · LW(p) · GW(p)
Defining 'rationality' is harder. Have you done so yet, Eliezer?
Already defined "rationality" in passing in the second sentence of the article, just in case someone came in who wasn't familiar with the prior corpus.
You, of course, are familiar with the corpus and the amount of work I've already put into defining rationality; and so I have made free to vote down this comment, because of that little troll. I remind everyone that anything with a hint of trollishness is a fair target for downvoting, even if you happen to disagree with it.
Replies from: Lee_A_Arnold↑ comment by Lee_A_Arnold · 2009-03-13T22:57:05.820Z · LW(p) · GW(p)
Eliezer, what do you say about someone who believed the world is entirely rational and then came to theism from a completely rational viewpoint, such as Kurt Gödel did?
Replies from: Eliezer_Yudkowsky, Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T23:00:50.276Z · LW(p) · GW(p)
I'd say, "take it to the Richard Dawkins forum or an atheism IRC channel or something, LW is for advanced rationality, not the basics".
Replies from: Lee_A_Arnold↑ comment by Lee_A_Arnold · 2009-03-13T23:20:30.732Z · LW(p) · GW(p)
Surely Gödel came to it through a very advanced rationality. But I'm trying to understand your own view. Your idea is that Bayesian theory can be applied throughout all conceptual organization?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T23:32:10.660Z · LW(p) · GW(p)
My view is that you should ask your questions of some different atheist on a different forum. I'm sure there will be plenty willing to debate you, but not here.
Replies from: Lee_A_Arnold↑ comment by Lee_A_Arnold · 2009-03-14T00:56:30.654Z · LW(p) · GW(p)
I'm not a theist, and so you have made two mistakes. I'm trying to find out why formal languages can't follow the semantics of concepts through categorial hierarchies of conceptual organization. (Because if they had been able to do so, then there would be no need to train in the Art of Rationality -- and we could easily have artificial intelligence.) The reason I asked about Gödel is because it's a very good way to find out how much people have thought about this. I asked about Bayes because you appear to believe that conditional probability can be used to construct algorithms for semantics -- sorry if I've got that wrong.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T22:58:54.726Z · LW(p) · GW(p)
"Fat chance."
↑ comment by Vladimir_Golovin · 2009-03-13T15:13:25.079Z · LW(p) · GW(p)
karate black belts are being seriously beaten by people with real-world fighting experience - pimps, muggers, etc
Yes, I heard such stories as well (edit: and recently read an article discussing real-world performance of Chinese and Japanese soldiers in melee/H2H combat). This is one of the reasons why I think that performance in the real world is a better way to measure success at rationality than any synthetic metric.
comment by blacktrance · 2014-02-28T23:04:05.504Z · LW(p) · GW(p)
Why aren't rationalists more formidable? Because it takes more than rationality to be formidable. There's also intelligence, dedication, charisma, and other factors, which rationality can do little to improve. Also, formidability is subjective, and I suspect that more intelligent people are less likely to find others formidable. As for why there isn't an art of rationality, I think it's because people can be divided into two groups: those who don't think rationality is particularly important and don't see the benefits of becoming more rational, and those who see rationality as important but are already rational for the most part, and for them, additional rationality training isn't going to result in a significant improvement.
comment by [deleted] · 2012-01-16T20:08:20.550Z · LW(p) · GW(p)
[Third, because current "rationalists" have trouble working in groups: of this I shall speak more.]
Yes, this is the most important part. Rationalists apply rationality along a hierarchical level. And, they must optimize their goal structure according to a socially-irrational world. (The entire world, where the thinking of large numbers of other people determines the outcome. In perhaps narrow engineering worlds that are solely dependent on the outcomes of the innovator, and there is ALREADY an existing rational and competitive market, the rationalist slips in, and occupies a dominant position. But what happens when he has to beg for a business permit? What happens when he has to get licensed, or prevent his competition from stealing his intellectual property? What happens when he has to defend against the irrational world? Unless he's recognized the threat, and identified a complete goal structure for dealing with the irrational, he must accomplish this task IN ADDITION TO the massive work and time he's spent on his rational goal structure, and its implementation.)
Also: life hands you bad genes. It hands you cancer. There is a lot of "noise." In a community of rationalists, there are enough rationalists who have prioritized their top goal to be the same as your top goal, and you naturally start working with them. ...But if you're poor, you have to spend most of your time at your day job, so your first prioritization has, once again, been to deal with the irrational masses, and part them from their Federal Reserve Notes.
And then there's the problem of personalities amongst rationalists. Many of them aren't going to get along with each other.
This is why libertarianism is so important: if you don't first recognize that it's irrational to attack your neighbor and confiscate his wealth, he probably isn't going to like you very much. ...And lesswrong has 50 people who identify themselves as communists, with a majority identifying themselves as "liberals." (My guess is that they're not the "Hayek" kind of liberal when they say that. LOL)
What I'm saying amounts to: "To study rationality, you must first master the irrational." Oooh, deep thought du jour.
A round table discussion amongst Libertarian Party ballot access activists/petitioners STARTS with this knowledge, but generally lacks the formalized training in rationality that is available here at less wrong. It's very interesting to see how a group of people who have rationally prioritized their lives can optimize their existences based on the introduction of new information.
As far as being "motivated enough to pick up your phone and call your broker," ...have you joined Casey Research yet? You should allocate your stock-picking decisions to him, because he takes into account the irrationality of the masses --which is your most difficult, time-wasting computation. His mining stock picks have returned something like 250% over the last few years. Compounding your initial investment with that kind of return would have made you very wealthy, had you simply begun allocating your excesses to his picks, assuming a salary with an excess of $15,000 to invest per year. However, given unlimited investments, you're forced to invest in a noise- and irrationality-filled field, especially if you didn't have time to apply your rationality to finding (fellow libertarian, fellow rationalist, natural ally) Doug Casey.
If you haven't even reached out to the most highly-successful other rationalists, and decided who is best to delegate to, it sounds to me like you're irrationally estimating your bandwidth (cranium space) to be greater than what it is. This isn't a slur, you're clearly a smart guy. ...But you're also human, and more limited than what you might believe yourself to be after some successes.
I think this post of yours asks the right questions. I will address it point-by-point later on (since some of my points here seem trivial, out of context to the original statement. I've always hated "general replies" that miss the heart of the most important questions and address the ones the author thought most trivial). Until then, the gist of what I argue for is what lesswrong is already trying to do, but with MUCH more feedback (or better interactivity programming) to eliminate conflicting personalities: typing is a waste of time, and it's hard to quickly see/know who your allies are here. There isn't a +10 or -10 ranking system that is as good as one single phone conversation. You exclude many of Kevin Kelly's, Friedrich Hayek's, and Monica Anderson's primary points, regarding the logic of decentralization and emergent allocation of resources, with this limited blog format.
(And you ban newbies, who may carry disproportionate value, by "lack of Karma." If you want adoption, you may need to eliminate the barrier to entry to your market.)
For a phone conversation, people are welcome to call me. Jake Witmer: 312-730-4037. That way, you'll be able to figure out more quickly if I'm an idiot, if I'm a useful idiot, or if I'm a potential strong ally in a certain area, or all areas. And, I'll be able to quickly find out the same about you. Naturally, if you're calling from here, you're out of the idiot category (unless you're an infiltrator whose purposes are contrary to the site's). The last "paranoia part" is not something I think is likely; I just mentioned it because you had 50 communists on here, and over 300 "liberals." One doesn't have to be a knowing infiltrator, to be an infiltrator.
Also: Let's say you've decided to steal for a living, so that you could be in a socially dominant position, with a long term goal of making society more rational. So you joined the NWO, and became a multi-billionaire by trading "carbon credits." This is essentially theft, but it's put you in a position to be a strong rational actor. Well, it would be rational for all the rationalists here, without your additional knowledge, to attack you. They didn't know your long-term goals, or that you profiled yourself as "unable to compete, mathematically," and "not worth competing for less than billions of FRN influence." ...So, natural allies can seem like antagonists, and natural antagonists can seem like allies.
It's a lot to sort out, before you set up a laboratory work environment with someone. Also, since you're not continually filtering people here via feedback, you're not isolating out who is going to wind up working with you on a daily basis. Maybe that's what you need. A hotline full of people trained to profile high-level, highly-motivated rationalists. Perhaps you could even have Peter Voss's AGI do it, based on Watson-like heuristics. (I.e.: if they stay on the line more than 20 minutes and direct the conversation to these repeated keywords, we might want to call them back.)
Replies from: Vaniver↑ comment by Vaniver · 2012-01-16T20:56:25.171Z · LW(p) · GW(p)
Welcome to LW!
The post you're responding to is two years old; the post that Eliezer was going to write is here. You could find it via the Article Navigation link at the bottom of the page: just go to the next post by this author a few times, and you'll come across it.
You might also find the post Politics is the Mind-Killer interesting. Many of us are libertarians, but libertarianism is mostly off-topic for this site.