Designing Rationalist Projects
post by calcsam · 2011-05-12T03:38:21.206Z · LW · GW · Legacy
Related to: Lessons from Latter-day Saints, Building Rationalist Communities overview, Holy Books (Or Rationalist Sequences) Don't Implement Themselves
My thesis:
It doesn’t matter what ideas are conveyed on Less Wrong, or in LW meetings -- what matters is the subset that group members resolve to act on. Discussing these resolutions, and people's experience carrying them out, helps create an expectation that people will level up their skills.
Intelligent discussion of ideas is always refreshing. But translating that into action is more difficult.
Our learned reflexes are deep. They need to be overridden. How? Practice.
One woman I taught in India -- we’ll call her Girija -- was 35 years old and extremely intelligent, and she really wanted to change her life, but she had incredibly low self-confidence. Every time we met Girija, we’d have a really sharp discussion, followed by her pouring her heart out to us. It was the same every time, and though we enjoyed the visits, and the food she would feed us, she never seemed to be getting anywhere.
If she really wanted to fundamentally change her life, our weekly meetings weren’t enough. (Similarly, weekly meetups are a good start, but if you really want to be learning rationality you should be practicing every day.)
We felt that if Girija spent some time every day with her 9 year old daughter and live-in boyfriend, reading the scriptures together, they would be happier. We explained this to her frequently, and she said she would start -- but she never did it.
One week, through cleverly calling Girija and chatting for 10 minutes every day, we got her to do it. After the week was over, we asked her how it went.
“You know, it was really good,” she said. “Sandeep and I have been getting along a lot better this week because we did that.”
It was like a light had turned on in her head. Because we followed up, she did it, and was far more motivated to do more things afterwards.[1]
Let me give two simple examples of goal, project, and follow-up.[2]
- GOAL: To become better at noticing logical fallacies as they are being uttered
- PROJECT: A certain Less Wrong group could watch a designated hour of C-SPAN -- or a soap opera, or a TV show -- and try to note down all the fallacies.
- FOLLOW-UP: Discuss this on a designated thread. Afterwards, compile the arguments and link to the file, so that anyone in the LW community can repeat this on their own and check against your conclusions. Reflect communally at your next LW meeting.
- GOAL: To get into fewer arguments about definitions.
- PROJECT: In everyday conversations, ask "Can you give me a specific example of that?" or "Can you be more concrete?" Set a challenging goal for how often you will do this – this is pretty low-hanging fruit.
- FOLLOW-UP: Write instances in your journal. Share examples communally at your next LW meeting.
I came up with these in about five minutes. Having spent more time in the community than I have, you will all be able to generate more and better possibilities.
Some points about Projects:
- Here are some ideas that can easily be made into Projects. Thanks to the commenters on the last post.
- Projects don't have to be group-based, but groups motivate doing stuff.
- Projects should be shorter than the posts linked above. The Goal/Project/Follow-Up kernels above are 85 and 57 words, respectively. Brevity is key to implementation.
- There is currently no central database of Rationality Projects or people's experiences trying to implement them. (Correct me if I'm wrong here.)
- Feedback on implementation is essential for improving practices.
Finally, a really 'low-cost' way to make a project and follow up. Right before the conclusion of a Less Wrong group, give everyone a slip of paper and ask them to write down one thing they are going to do differently next week as a result of the discussion. For two minutes (total) at the beginning of the next meeting, let people tell what they did.
Some notes and warnings:
Doing this in a fraternalistic manner, not a paternalistic manner, will be a key to success.[3] Community agreement that We Should Do This is important before launching a Project.
Beware of the following tradeoff:
- implementing Projects will alienate some people. Even if projects are determined by consensus, there will be some people who don’t want to do any Project, and they will feel marginalized and excluded.
- not implementing Projects means people will improve their Rationality skills at a far slower pace.[4] You will thus run afoul of Bhagwat’s Law of Commitment: “The degree to which people identify with your group is directly proportional to the amount of stuff you tell them to do that works.” But ultimately, commitment drives growth: more leadership to organize things, more people bringing friends, and so on.
I will discuss this more later, along with possible solutions. Latter-day Saints, with a large emphasis on doing things, have high levels of commitment; however, there are definitely people who would come to church more if they were expected to do less.
Please post any ideas you have for Projects in the comments.
[1] Even subtracting the religious element, common goals reduce conflict.
[2] Here are some keys to following up that I learned. In two years, I probably applied this with about 600 people:
- Following up is mere nagging (and equally ineffective) unless the person/group actually wanted to do the task in the first place.
- Congratulating people when they did do something was far more important than expressing disappointment when they didn’t do it – the 80/20 rule applies.
- I often felt afraid to ask someone if they had done what they promised to do, because they probably hadn’t, and I didn’t know what I should say then.
- But awkwardness is contagious; if you act awkward when talking to someone, the other person will feel awkward too. Be genuinely excited, and they will also reflect this.
- It’s all about how you ask the question. “How did you like reading X?” is far better than “Did you read X?”. Use humor and make the task seem easy to do.
- Don’t be self-righteous; actively deprecate yourself if necessary.
- Each person has different ways they like – and don’t like – being followed-up with.
[3] Coming from my experience as a Latter-day Saint missionary, my personal examples are all fairly paternalistic. With tweaks, they can all be made fraternalistic. The sentiment has been expressed that “I don’t like people telling me what to do”; this will avoid that pitfall.
[4] I say 'far slower' based on my missionary experience. When people were dedicated to specific projects, they seemed to improve a lot faster.
81 comments, sorted by top scores.
comment by Zvi · 2011-05-13T16:51:53.074Z · LW(p) · GW(p)
Thank you, calcsam, for this sequence, and apologies in advance to everyone for this being a bit of a rant and likely having been said before. I fear that the very practical suggestions are going to be lost because people's brains are being overridden by a combination of:
- He's using examples and techniques from LDS, who are evil cultish religious people using these techniques for evil cultish reasons!
- These techniques are designed to get someone to be inclined to change, or even worse to identify as members of a community and perhaps even adopt beliefs more often or more effectively than they would have by pure logic and reading blog posts. Dark Arts!
This big danger that Less Wrong is going to turn into a cult is a phantom. It has always been a phantom. Starting a cult whose core teaching is essentially "Think for yourself, schmuck!" together with techniques for doing so effectively may be the worst idea for a cult in world history.
If there is a cult here, not that I think there is, it is the cult of pure reason that thinks it is a sin to use any technique that could possibly reinforce a false belief or a behavior we might dislike, the cult of people crying out "Cult!" I'm sick of it because I want there to be more of us, I want us to better embody the ideals of rationality, and to use them to accomplish more and to be effective.
This is exactly the form that this information needs to be in, and it's information that is available because of what this man does for a living. Rather than complaining that all references to religion should be redacted and replaced, we should thank him for turning his insight over to the side of truth, justice and other neat stuff like that.
↑ comment by SilasBarta · 2011-05-13T17:17:19.101Z · LW(p) · GW(p)
Starting a cult whose core teaching is essentially "Think for yourself, schmuck!" together with techniques for doing so effectively may be the worst idea for a cult in world history.
But still good enough if we're not careful.
↑ comment by Nornagest · 2011-05-19T23:35:44.943Z · LW(p) · GW(p)
Starting a cult whose core teaching is essentially "Think for yourself, schmuck!" together with techniques for doing so effectively may be the worst idea for a cult in world history.
SilasBarta has already pointed out the obvious counterexample; a variety of other vaguely cultish institutions, such as Anton LaVey's Church of Satan, also share goals that are, if not identical, then certainly within spitting distance. More importantly, though, I don't think "think for yourself, schmuck" is actually a core teaching of LW; LW-rationality is aimed at achieving success (however defined) rather than maintaining some ideal of independent thought, and it'll happily discourage any ideas, however independent, that its group consensus agrees are maladaptive. This isn't even a bad thing; independence for its own sake is best left to the Discordians.
I don't think LW is a cult or in serious danger of becoming one in the near term. But its core ideals alone are not sufficient to ensure that it doesn't have the abstract potential to become one.
↑ comment by zntneo · 2011-05-19T23:27:42.370Z · LW(p) · GW(p)
I wonder if having someone who always disagreed with us would help prevent the cultishness. I know there is evidence that doing so improves decisions, and I think I remember there being evidence that it helps stop groupthink. So maybe we could implement what calcsam said, but add that someone be designated as the person who must disagree with us (this could be a different person each time or the same person).
↑ comment by hamnox · 2011-06-28T18:20:29.238Z · LW(p) · GW(p)
Like the subverter in a paranoid debate? I think that would actually be really useful, or at least a lot of fun (which has a use in and of itself.)
I would stipulate that it NOT be just one person, though. There ought to be multiple people, trading off to diffuse attention, or whoever is designated could easily become a strawman effigy to be mocked and denounced.
The story in my head goes: Every once in a while, an ace of spades (if we can get it custom, red or gold on black would be an epic color scheme) will be discreetly slipped to a randomly selected acolyte or two before the meeting has begun. These people have license to be as contrary and anti-consensus as they can get away with without being found out. It will be given away at the end of the meeting, unless they'd like the actions and statements they made that day to stand as-is...
↑ comment by beoShaffer · 2011-06-28T18:44:03.558Z · LW(p) · GW(p)
I like this suggestion, but might tweak it a bit to say that everybody draws from a deck of cards (or some similar method) instead of trying to slip cards just to a specific person. It seems easier and doesn't create the problem of the person doing the slipping knowing who the subverter is. Also, it is easy to repurpose if we need other randomly assigned positions.
↑ comment by juliawise · 2011-08-28T22:12:50.935Z · LW(p) · GW(p)
There's someone at the meetup I attend who draws that card every week.
Is the purpose of this exercise for the others to guess who drew the contrary card? If so, what is this good for?
↑ comment by wnoise · 2011-08-29T06:16:36.313Z · LW(p) · GW(p)
To make sure it is a different someone. It's very easy for us to mentally categorize someone as "Oh, that's just old crazy uncle Eddy. No reason to actually consider his arguments seriously". And it's also useful for people to gain practice at dissenting.
comment by Bongo · 2011-05-12T10:33:52.327Z · LW(p) · GW(p)
not implementing Projects means people will improve their Rationality skills at a far slower pace.[4] You will thus run afoul of Bhagwat’s Law of Commitment: “The degree to which people identify with your group is directly proportional to the amount of stuff you tell them to do that works.”
This seems to equate "improving Rationality skills" with "identifying with the group". I find this frightening, and a step towards just using "rationality" as something in the name of which to grub power, influence and followers and as a flag to rally a generic community around. Maybe that's the function of religious teachings for religious communities, but I hope not for LW.
↑ comment by Costanza · 2011-05-12T13:07:23.245Z · LW(p) · GW(p)
I would have thought so too, but a lot of people here are obviously loving this stuff.
↑ comment by Bongo · 2011-05-12T13:42:30.094Z · LW(p) · GW(p)
And that's kind of frightening too. I don't think it's too much of an exaggeration to say that this stuff is basically a cult roadmap.
↑ comment by gscshoyru · 2011-05-12T15:42:36.322Z · LW(p) · GW(p)
I think we should be a little careful of using the word "cult" as a mental stop sign, since that does seem to be what's happening here. We need to be a bit more careful about labeling something with all the bad connotations of a cult just because it has some of the properties of a cult -- especially if it only seems to have the good properties. But... that doesn't mean that this good cult property won't lead to the bad cult property or properties that we don't want. You should just be more explicit as to what and how, because I'm wavering back and forth on this article being a really, really good idea (the benefits of this plan are obvious!), and a really, really scary and bad idea (if I do it, it'll make me become part of a groupthinky monster!).
The problem I have is that both sides in my own head seem to be influenced by their own clear cognitive biases -- we have the cult attractor on one hand and the accidental negative connotations and stopsigny nature of the word "cult" on the other. So if you could semi-explicitly show why adopting the idea this article puts forth would lead to some specific serious negative consequences, that would clear up my own indecision and confusion.
↑ comment by Bongo · 2011-05-12T16:14:13.249Z · LW(p) · GW(p)
I'm afraid my comments were mostly driven by an inarticulate fear of cults and of association with a group as cultish as Mormons. But one specific thing I already said I'm afraid of is that of LW becoming a "rational" community instead of a rational community, differing from other communities only by the flag it rallies around.
↑ comment by gscshoyru · 2011-05-12T17:42:24.723Z · LW(p) · GW(p)
You know what... I was missing the "look for a third option" bit. There are more options than the two obvious ones -- doing this, and not doing this.
I've been having trouble making myself do the rationalisty projects that I came up with for myself, and this article suggested a way to use group pressures to make me do these things. Since I really wanted a way to make myself do these projects, the article seemed like a really, really good idea. But instead of making the whole community do this, I can actually just ask some fellow rationalists at my meetup to do this, just to me. That way I can use group pressures to help impose my own rationally determined second order desires on myself. The only thing I think I lose this way is the motivation via a sense of community, where everyone else is doing it too...
Of course, this still doesn't resolve the problem of whether or not the community at large should adopt the ideas put forth in this article. I still can't seem to think rationally about it. But at least this is a way for me to get what I want without having to worry about the negative side effects of the whole community adopting this policy.
↑ comment by jimrandomh · 2011-05-12T17:09:51.193Z · LW(p) · GW(p)
But one specific thing I already said I'm afraid of is that of LW becoming a "rational" community instead of a rational community, differing from other communities only by the flag it rallies around.
If you took a typical community and replaced its flag with one that said "be rational", would you expect the effect to be positive, negative, or neutral?
↑ comment by Costanza · 2011-05-12T17:25:03.538Z · LW(p) · GW(p)
You might think that a belief system which praised "reason" and "rationality" and "individualism" would have gained some kind of special immunity, somehow...?
Well, it didn't.
It worked around as well as putting a sign saying "Cold" on a refrigerator that wasn't plugged in.
Rationality flags don't seem to help that much.
↑ comment by fubarobfusco · 2011-05-12T16:23:16.905Z · LW(p) · GW(p)
Conjecture: Sufficiently dedicated groups that do not take measures against "bad cult properties" will fall down the cult attractor. So if you want a group to not fall down the attractor, you have to think about bad cult properties and how to avoid them.
Various folks have come up with lists of just what "bad cult properties" are; one of my favorites is Isaac Bonewits' "ABCDEF". Bonewits' motivation appears to have been to help people be more comfortable involving themselves in unusual groups (he was a neopagan leader) by spelling out what sorts of group behavior were actually worth being worried about.
I won't repeat Bonewits' list here. I think it's worth noting, though, that several of the properties he outlines could be described as anti-epistemology in practice.
↑ comment by AdeleneDawner · 2011-05-12T16:41:23.634Z · LW(p) · GW(p)
Having read the list, LW-as-it-was-last-week-and-presumably-is-now seems to be unsurprisingly good at not being a cult. It does occur to me that we might want to take a close look at how the incentives offered by the group to its members will change if we switch to a more recruitment-oriented mode, though.
↑ comment by wedrifid · 2011-05-12T18:38:15.204Z · LW(p) · GW(p)
Conjecture: Sufficiently dedicated groups that do not take measures against "bad cult properties" will fall down the cult attractor. So if you want a group to not fall down the attractor, you have to think about bad cult properties and how to avoid them.
comment by [deleted] · 2011-05-12T19:41:29.279Z · LW(p) · GW(p)
(I'm not sure where to ask this, so I'll just put it here.)
Do you have any experience with doing this kind of thing online-only? I currently don't have any rationalist community around and I'm not even sure if I want one, but the benefits seem tremendous and I'm interested in at least trying.
↑ comment by handoflixue · 2011-05-13T01:33:06.895Z · LW(p) · GW(p)
I'll second the interest in some form of online community / meet-up, and feel a bit silly for not thinking of the idea previously :)
↑ comment by [deleted] · 2011-05-16T22:45:30.624Z · LW(p) · GW(p)
(Not specifically a reply to you, but it feels weird replying to myself. Just some brainstorming.)
Well, there's a Virtual Meetup, but that's video-based. I find that very intimidating, so I'd be interested in something less real-time.
I liked the idea of regular checkups (related talk), so maybe something mail-based like that might work. Incidentally, I recently bought a Zeo and it comes with some basic coaching. They regularly send you mail, helping you come up with some meaningful goals, asking you about your progress and so on. I really enjoyed that and it helped me track my sleep consistently so far.
I'll think about that more and maybe start something myself.
↑ comment by handoflixue · 2011-05-18T23:27:49.029Z · LW(p) · GW(p)
Thinking out loud: If the issue is simply video, one could pretty easily run an IRC meetup. It has the advantage of still being somewhat "real time" and thus something you could block out an hour or two for a weekly "meetup."
If you want to avoid real-time entirely, then a forum or mailing list ought to suffice. I'd point out that LessWrong is already effectively a forum, so I think you'd get a lot less out of this.
All of these, including the video conference, probably also run in to scaling issues unless you have a very good organizer and a crowd willing to go along with it. I'd expect small groups of 3-6, splintering as needed, would probably be best?
I suppose mostly it depends on what you're looking to get out of it, and why the main LessWrong site doesn't suffice.
↑ comment by erratio · 2011-05-19T06:33:58.056Z · LW(p) · GW(p)
Well, there's a Virtual Meetup, but that's video-based
So far most of them have been voice-based. I'll also add that I'm the kind of person who's typically intimidated by the idea of discussing things in realtime (I feel like I don't generate points or counterpoints fast enough most of the time), but I've found the virtual meetups to be worthwhile and non-intimidating.
comment by Swimmer963 (Miranda Dixon-Luinenburg) · 2011-05-12T12:48:39.659Z · LW(p) · GW(p)
Finally, a really 'low-cost' way to make a project and follow up. Right before the conclusion of a Less Wrong group, give everyone a slip of paper and ask them to write down one thing they are going to do differently next week as a result of the discussion. For two minutes (total) at the beginning of the next meeting, let people tell what they did.
This is a really good idea. I've enjoyed your series of posts and I think you have a lot of really good ideas.
↑ comment by AngryParsley · 2011-05-12T15:15:36.834Z · LW(p) · GW(p)
That tactic combines commitment and consistency with social proof. After 5 people have told the group what honorable and high-status things they're going to do, you'd have a hard time saying, "Well I didn't learn anything useful tonight, but it was fun to catch up with some of you guys." even if it were true.
↑ comment by fubarobfusco · 2011-05-12T22:04:00.665Z · LW(p) · GW(p)
This suggests that it would inspire making things up to sound good, regardless of whether they are true. I don't think that's a hugely great result.
↑ comment by AngryParsley · 2011-05-12T22:30:41.636Z · LW(p) · GW(p)
That's the point I was trying to make. I'm sorry if it came across as endorsing the tactic. "Commitment and consistency" and "social proof" are two of the six "weapons of influence" from Cialdini's Influence: The Psychology of Persuasion.
↑ comment by handoflixue · 2011-05-12T23:31:46.815Z · LW(p) · GW(p)
Hmmmm, that's a good point. I like the idea of hanging out casually and growing at my own pace. I like the idea of learning skills that help me accelerate that pace. I definitely dislike any sort of social pressure to match the group's pacing, though.
comment by Jonathan_Graehl · 2011-05-12T18:07:53.589Z · LW(p) · GW(p)
Reflect communally at your next LW meeting
Share examples communally at your next LW meeting.
For two minutes (total) at the beginning of the next meeting, let people tell what they did.
At first this reminded me of one of the more obnoxious LDS commitment techniques: encouraging everyone to "bear [sic] their testimony". In short, whenever there is a gathering of Mormons, they're periodically (in the context of a monthly meeting) pressured to ALL make some public commitment to belief in the church or value of the community. This pressure is explicitly applied as part of the teaching given to teens. I always declined, because I wanted my testimony to be real, not a perjury.
However, two minutes total sounds different than the hour-plus sessions filled with pregnant silence and expectant looks. As long as people don't feel like nearly everyone has performed the ritual, and they, awkwardly, haven't, fine.
↑ comment by Alex Flint (alexflint) · 2011-05-15T12:44:27.189Z · LW(p) · GW(p)
You seem to be arguing that using a community gathering to motivate people is inherently bad because in the past it has been used by wicked people for wicked deeds. This is a very dangerous stance to take, as it can potentially prohibit any technique that has ever been used for evil deeds. Some of the early Scientology meetings were held on boats -- does that mean we should never ever allow ourselves to hold a LW meetup on a boat?
↑ comment by Jonathan_Graehl · 2011-05-16T04:11:55.978Z · LW(p) · GW(p)
Maybe it seems that way, but I'm not.
I'm saying that if the expectation is close enough toward "everybody affirms the effectiveness of the technique they were told to practice last week" as opposed to "a few people who are most excited can share what worked/didn't about it", you'll distort people's thinking.
comment by jasonmcdowell · 2011-05-12T07:14:33.353Z · LW(p) · GW(p)
This is an excellent plan. Excellent writing, organization, thought. This is a rally-point for implementation.
It makes me uneasy when I see competent missionaries. I don't know if I have the energy to compete against them.
comment by handoflixue · 2011-05-12T23:26:16.537Z · LW(p) · GW(p)
The idea of brevity, giving weekly assignments, and discussing them at the next meeting makes me think of "Agile software development" practices in general. The goals of rational self-improvement and agile software development seem to align fairly neatly, too.
It has the added advantage that it scales very well: You can use these techniques to manage a group, or just yourself. The ability to "go it solo" if needed seems important to this crowd.
I'm going to set a goal of having a post on this by May 22nd, to try and motivate myself to think about it more and see if I can't come up with some applied thoughts :)
comment by Kevin · 2011-05-12T11:42:37.121Z · LW(p) · GW(p)
I really appreciate your advice about doing things. I like doing things almost as much as I like not doing things. Doing is important and we as a community should do more things. But......... ideas! It turns out that one of the weird things about this universe is that ideas might actually be more important than actions. That the very fate of the light-cone -- whether or not orthodox Mormons get their 1+ planets (and whether or not it is "simulated" or "real")-- depends on what may turn out to be slight differences in the things humanity chooses to do, which might boil down to surprisingly small insights from individuals.
Ideas.
↑ comment by MartinB · 2011-05-12T12:39:13.686Z · LW(p) · GW(p)
Ideas + Doing. Each on its own is not particularly useful.
↑ comment by Costanza · 2011-05-12T13:03:09.370Z · LW(p) · GW(p)
That's not exactly how I'd put it. If the history of the world for the past hundred years or so teaches anything, it's that energetic, enthusiastic, and active people can be dedicated and loyal to very, very bad ideas. This has had rather unpleasant results on more than one occasion. Getting the ideas right is important.
↑ comment by MartinB · 2011-05-12T13:23:21.968Z · LW(p) · GW(p)
And then there are bright people with correct ideas who never bother to do anything with or about them, or even write them up. It is not enough to be right and stop there.
↑ comment by Costanza · 2011-05-12T13:42:52.698Z · LW(p) · GW(p)
Agreed! With that said, I submit that the ideas of the Mormon church are not correct. They are not remotely right. Better they should stop before proceeding to the "doing" phase.
↑ comment by MartinB · 2011-05-12T17:25:47.788Z · LW(p) · GW(p)
That is not completely correct. There is no absolute wrong in what the Mormons do. There is also no way to first become absolutely right and then start acting. There is a continuum of wrongness. Sometimes you have to act before being correct, as in some cases where you act against an evil.
With a closer look you might find things the Mormons do that are better than the actions of common society, even if they do them for mistaken reasons. Not using drugs comes to mind; some religious groups take that seriously. And of course the idea of being awesome to your kids and family, in case that actually applies to Mormons to a higher degree.
If they actually get to 'STOP' what they do at the moment, HOW will that take place? There are many ways to get the break away from a religion wrong.
↑ comment by Desrtopa · 2011-05-13T02:00:49.759Z · LW(p) · GW(p)
That is not completely correct. There is no absolute wrong in what the Mormons do.
I would argue otherwise. This may be the case morally speaking, but if you're applying standards by which this is true of everyone, then the claim is fairly vacuous. If you're speaking evidentially, then I would argue that yes, they're processing data in a way that is absolutely wrong.
And of course, there are plenty of wholesome, happy Mormon families. But I've known enough bitter ex Mormons with horror stories that I must treat the idea that Mormonism improves people's family lives in general with extreme skepticism.
If a "closer look" tells us that some norms lead to happier or more productive lives, and some have negative repercussions, isn't that closer look better taken before establishing the norms?
↑ comment by MartinB · 2011-05-13T09:26:44.966Z · LW(p) · GW(p)
The argument also works for Christian families and other religious groups. I am wary of labeling big parts of the population as inherently evil.
While I would enjoy it if religion just disappeared, there has to be some thinking about what it will be replaced by. It can easily be made worse. Better the devil you know, and such.
↑ comment by JamesAndrix · 2011-05-14T02:30:59.793Z · LW(p) · GW(p)
There is a definition of terms confusion here between "inherently evil" and "processing data absolutely wrong".
I also get the impression that much of Europe is an extremely secular society that does OK.
There is confusion for individuals transitioning and perhaps specific questions that need to be dealt with by societies that are transitioning. But in general there is already a good tested answer for what religion can be replaced by. Getting that information to the people who may transition is trickier.
comment by oscardelben · 2011-05-12T09:48:14.501Z · LW(p) · GW(p)
Good advice; I'm actually looking to start some similar projects. As you said, feedback is very important, but for some of us it's difficult to find rationalists in our area to share these experiments with. I would like to see some sort of online group where we can share and discuss practical ideas, or get advice from time to time. A forum would probably be enough, and I can create one if there's enough interest.
comment by HonestAbe · 2011-05-15T17:06:23.327Z · LW(p) · GW(p)
Today I will present a coherent and cogent case for Eliezer being a crook and a con-artist. This is not for the purpose of defaming him but to show that he is wasting your money and your time. I realize that SIAI has been evaluated by an ignoramus already; I am merely filling in the gaps.
I will present facts and the proper citations in text. Let's begin:
NOTE: all sources are direct quotes from Eliezer's mouth either video or text.
Facts Eliezer (hereafter referred to as DMF) claims of himself:
- IQ: 143 (no mention of the test administered; if it was Cattell then the score can be properly converted to 126)
- Highest percentile score: 9.9998 (no mention of the test that he saw the score on)
- DMF learned calculus at age 13.
Source: http://www.youtube.com/watch?v=9eWvZLYcous
Math Ability: "I was a spoiled math prodigy as a child..."
"[Marcello math work] ...That’s not right" and maybe half the time it will actually be wrong. And when I’m feeling inadequate I remind myself that having mysteriously good taste in final results is an empirically verifiable talent, at least when it comes to math."
Source: http://johncarlosbaez.wordpress.com/2011/03/07/this-weeks-finds-week-311/
Standard workday:
- When writing: 2-3 hours writing, then a couple hours off.
- When doing FAI work: 2-3 hours work, then a break, then 2-3 hours, with a day off before repeating. (During down time math may be studied; it did not sound like that happened very much.)
- Blogging: 1 post per day, sometimes 2; the posts do not seem to exceed 12 pages from what I have seen.
Source: http://www.youtube.com/user/michaelgrahamrichard#p/u/26/9kI1IxOrJAg
Admission by DMF: DMF admits to a weakness of will. Source: http://www.youtube.com/user/michaelgrahamrichard#p/u/26/9kI1IxOrJAg
Publications Officially Listed: "In 2001, he published the first technical analysis of motivationally stable goal systems, with his book-length Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures. In 2002, he wrote "Levels of Organization in General Intelligence," a paper on the evolutionary psychology of human general intelligence, published in the edited volume Artificial General Intelligence (Springer, 2006). He has two papers in the edited volume Global Catastrophic Risks (Oxford, 2008), "Cognitive Biases Potentially Affecting Judgment of Global Risks" and "AI as a Positive and Negative Factor in Global Risk." Source: http://singinst.org/aboutus/team
Claims About the FAI Problem: "My current sense of the problems of self-modifying decision theory is that it won’t end up being Deep Math, nothing like the proof of Fermat’s Last Theorem—that 95% of the progress-stopping difficulty will be in figuring out which theorem is true and worth proving, not the proof." Source: http://johncarlosbaez.wordpress.com/2011/03/07/this-weeks-finds-week-311/
AI-related projects started:
- Flare. Source: http://flarelang.sourceforge.net/
- Abandoned Flare: "JB, ditched Flare years ago." (2008) Source: http://lesswrong.com/lw/tf/dreams_of_ai_design/msj
- "A legacy of pre-2003 Eliezer, of no particular importance one way or another." Source: http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/121t
DMF Discounted LOGI: "LOGI's out the window, of course, as anyone who's read the arc of LW could very easily guess." Source: http://lesswrong.com/lw/1hn/call_for_new_siai_visiting_fellows_on_a_rolling/1av0
Stated Job Description and Plan: "Eliezer Yudkowsky: My job title is Research Fellow, but I often end up doing things other than research. Right now I’m working on a book on human rationality (current pace is around 10,000-13,000 words/week for a very rough first draft, I’m around 150,000 words in and halfway done with the rough draft if I’m lucky). When that’s done I should probably block out a year to study math and then go back to Artificial Intelligence theory, hopefully ever after (until the AI theory is done, then solid AI development until the AI is finished, et cetera)." Source: http://hplusmagazine.com/2010/07/21/simplified-humanism-positive-futurism-how-prevent-universe-being-turned-paper-clips/
How Is He a Crook?
DMF claims that he mastered calculus at 13 and is a math prodigy; what evidence is there for this claim?
Papers: The only paper with any degree of math, albeit simple math, is "An Intuitive Explanation of Bayes' Theorem". Source: http://yudkowsky.net/rational/bayes
What about his quantum physics posts? Source: http://lesswrong.com/lw/r5/the_quantum_physics_sequence/
Never once does DMF solve the wave equation, nor does DMF solve a single derivative or integral equation. The following list contains most of the posts with any math in them:
- http://lesswrong.com/lw/pe/joint_configurations/
- http://lesswrong.com/lw/q0/entangled_photons/
- http://lesswrong.com/lw/q2/spooky_action_at_a_distance_the_nocommunication/
- http://lesswrong.com/lw/q4/decoherence_is_falsifiable_and_testable/
The other posts contain amusing graphs, many hand-drawn, and pseudo-math:
- http://lesswrong.com/lw/pl/no_individual_particles/
- http://lesswrong.com/lw/pk/feynman_paths/
- http://lesswrong.com/lw/pj/the_quantum_arena/
- http://lesswrong.com/lw/pi/classical_configuration_spaces/
- http://lesswrong.com/lw/pp/decoherence/
- http://lesswrong.com/lw/pq/the_socalled_heisenberg_uncertainty_principle/ (amusing pseudo-math)
- http://lesswrong.com/lw/pu/on_being_decoherent/
- http://lesswrong.com/lw/pz/decoherence_as_projection/
If DMF mastered calculus at 13, then why is there no evidence in any of these posts? If DMF is a math prodigy who is good at explaining math, why is there no explanation of the wave equation? He does mention it in his timeless physics post, but it appears that he took his description from Wikipedia, since there are some striking similarities. It is one thing to talk with math jargon such as derivatives and gradients; it is another thing entirely to be able to actually use those ideas to solve an equation or model a system. DMF has shown no evidence that he can do such things.
↑ comment by Rain · 2011-05-15T17:14:18.757Z · LW(p) · GW(p)
Since this is your first post here, I'll temper my response and suggest you take the time to rebuild this comment into something coherent, using the proper link structure of LessWrong and rules of English grammar. You can click 'Help' in the lower right of the comment box for syntax.
It'd also be nice if you could put it in the right place, such as Discussion, instead of as an apparently random reply to an unrelated article.
However, before doing so, I'd further suggest you ensure that you understand what claims you're making and how they are supported or not by available evidence. There are several older articles on the topic of evidence which can be found using search functions.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-15T20:44:21.828Z · LW(p) · GW(p)
I work 4-5 hours at a stretch when writing.
By the way, I think we can all recognize this as the leading criticism of my ideas, to which all newcomers, requesting to know what my critics have said in response to me, should be directed.
↑ comment by CuSithBell · 2011-05-15T21:25:32.818Z · LW(p) · GW(p)
This post probably is evidence that HonestAbe isn't secretly Eliezer - unless that's what he wants us to think!
↑ comment by gwern · 2011-05-15T21:13:13.099Z · LW(p) · GW(p)
To single out just one part... I don't understand the point of
Standard workday:
- When writing: 2-3 hours writing, then a couple hours off.
- When doing FAI work: 2-3 hours work, then a break, then 2-3 hours, with a day off before repeating. (During down time math may be studied; it did not sound like that happened very much.)
- Blogging: 1 post per day, sometimes 2; the posts do not seem to exceed 12 pages from what I have seen.
Source: http://www.youtube.com/user/michaelgrahamrichard#p/u/26/9kI1IxOrJAg
4-6 hours is perfectly normal for authors. This is true whether you look at great scientists like Charles Darwin, or merely ordinary contemporary science/engineering faculty. See the quotes from Ericsson 1993, in 'The Role of Deliberate Practice', in http://www.gwern.net/About#fn23
↑ comment by David_Gerard · 2011-05-15T22:33:59.745Z · LW(p) · GW(p)
cough DFTT.
↑ comment by gwern · 2011-05-15T22:38:56.599Z · LW(p) · GW(p)
That comment was also an excuse to link to and discuss some interesting snippets I found in my reading, somewhere more permanent than #lesswrong. (Criticizing the troll was just part of it.)
↑ comment by David_Gerard · 2011-05-16T09:00:43.195Z · LW(p) · GW(p)
Fair enough!
(Is -30 a record low score?)
↑ comment by Barry_Cotter · 2011-05-16T11:33:30.838Z · LW(p) · GW(p)
The Popper troll's post got to -36 before Eliezer removed it in some way that left it available if you knew the URL, but not in recent posts.
↑ comment by Normal_Anomaly · 2011-05-16T11:49:26.777Z · LW(p) · GW(p)
Trying not to feed the troll by replying to him directly, but I'm too curious not to ask: why does ve refer to EY as "DMF"?
↑ comment by David_Gerard · 2011-05-16T14:25:27.877Z · LW(p) · GW(p)
http://lesswrong.com/lw/5o1/designing_rationalist_projects/46em suggests its meaning.
↑ comment by ata · 2011-05-15T17:28:50.190Z · LW(p) · GW(p)
Why "DMF"?
↑ comment by Richard_Kennaway · 2011-05-15T20:25:19.773Z · LW(p) · GW(p)
DMF.
I'd support booting "HonestAbe" off the site.
↑ comment by katydee · 2011-05-16T03:23:10.715Z · LW(p) · GW(p)
This critique is so poor that I think there's a nonzero chance that you're a plant.
↑ comment by benelliott · 2011-05-16T13:00:59.811Z · LW(p) · GW(p)
I think an actual pro-SIAI plant would make arguments which were quite a bit better than HonestAbe's -- this is too obviously stupid to work as a strawman.
↑ comment by wedrifid · 2011-05-16T10:15:37.130Z · LW(p) · GW(p)
This critique is so poor that I think there's a nonzero chance that you're a plant.
Zero is such a non-probability that I think there is a nonzero chance that you are a plant!
↑ comment by katydee · 2011-05-16T10:18:58.483Z · LW(p) · GW(p)
Certainly true. I still haven't found a sufficiently accurate way of describing this sort of situation; "a low chance" would imply that the quality of the critique updated me away from believing the author was a plant, whereas "a significant chance" has too much weight. "Nonzero" works in common parlance but is pseudo-meaningless, since there's a nonzero chance of practically anything.
What would you recommend in this case?
↑ comment by Oscar_Cunningham · 2011-05-16T10:46:11.192Z · LW(p) · GW(p)
Non-negligible?
↑ comment by Cyan · 2011-05-15T18:56:35.074Z · LW(p) · GW(p)
You left The Cartoon Guide to Löb's Theorem out of your assessment.
↑ comment by RHollerith (rhollerith_dot_com) · 2011-05-15T19:15:22.835Z · LW(p) · GW(p)
He also missed the opportunity to point out that organizational resources have been used to produce escapist fantasy literature. :)
↑ comment by JohnH · 2011-05-15T17:49:23.414Z · LW(p) · GW(p)
Voting you down, even though I sort of agree with some of what you said. This is the wrong place to put this, as Rain said, and you should have taken the time to figure out how to present it in an easily readable fashion. Perhaps you could have included some more explanation and reasoning. For instance, how is his work schedule that different from what many college professors employed in comparable fields of research follow?
edit: I should also point out that I visit Less Wrong with the explicit purpose of wasting time because it is an interesting waste of time.