With respect to power dynamics points one and two, there is another person known to the community who is perhaps more qualified and already running something similar in several respects - Geoff Anders of Leverage Research. So I don't think this is the only group attempting something of this sort, though I still find it novel and interesting.
(disclaimer: I was at the test weekend for this house and am likely to participate)
Something like this also happened with Event Horizon, though the metamorphosis is not yet complete...
Once every few days or so.
Broadly agreed - this is one of the main reasons I consider internal transparency to be so important in building effective organizations. In some cases, secrets must exist - but when they do, their existence should itself be common knowledge unless even that must be secret.
In other words, it is usually best to tell your teammates the true reason for something, and failing that you should ideally be able to tell them that you can't tell them. Giving fake reasons is poisonous.
In some cases it can be - and I will discuss this further in a later post. However, there are many situations where the problems you're encountering are cleanly solved by existing paradigms, and looking at things from first principles leads only to reinventing the wheel. For instance, the appropriate paradigm for running a McDonald's franchise is extremely well understood, and there is little need (or room) for innovation in such a context.
This is one of the worst comments I've seen on LessWrong and I think the fact that this is being upvoted is disgraceful. (Note: this reply refers to a comment that has since been deleted.)
This post seems better suited for the Discussion section.
So, maybe this is just my view of things, but I think a big part of this conversation is whether you're outside looking in or inside looking out.
I'm on the inside and I think we should get rid of these things for the sake of both insiders and outsiders.
Is that true? I mostly don't notice people scoring cheap points by criticizing religion; I mostly notice them ignoring religion.
See for instance Raising the Sanity Waterline, a post which raises very important points but is so unnecessarily mean-spirited towards religion that I can't particularly show it to many people. As Eliezer writes elsewhere:
Why would anyone pick such a distracting example to illustrate nonmonotonic reasoning? Probably because the author just couldn't resist getting in a good, solid dig at those hated Greens.
In terms of weird fixations, there are quite a few strange things that the LW community seems to have as part of its identity - polyamory and cryonics are perhaps the best examples of things that seem to have little to do with rationality but are widely accepted as norms here.
If you think rationality leads you to poly or to cryo, I'm fine with that, but I'm not fine with it becoming such a point of fixation or an element of group identity.
For that matter, I think atheism falls into the same category. Religion is basically politics, and politics is the mind-killer, but people here love to score cheap points by criticizing religion. The fact that things like the "secular solstice" have become part of rationalist community norms and identity is indicative of serious errors IMO.
For me, one of the most appealing things about EA (as opposed to rationalist) identity is that it's not wrapped up in all this unnecessary weird stuff.
I think LessWrong has a lot of annoying cultural problems and weird fixations, but despite those problems I think there really is something to be gained from having a central place for discussion.
The current "shadow of LessWrong + SSC comments + personal blogs + EA forum + Facebook + IRC (+ Tumblr?)" equilibrium seems to have in practice led to much less mutual knowledge of cool articles/content being written, and perhaps to less cool articles/content as well.
I'd really like to see a revitalization of LessWrong (ideally with a less nitpicky culture and a lack of weird fixations) or the establishment of another central hub site, but even failing that I think people going back to LW would probably be good on net.
My impression significantly differs, though I'm far from confident. I'd be interested in seeing an expanded version of this point because it seems potentially very valuable to me.
I agree, that comment was written during a somewhat silly period of my life. :)
This post seems more suited for the Discussion section (insofar as it is suitable for LW at all).
It went very well - too well, in fact! Writing a LessWrong post did not feel alive to me, so I didn't do it.
Great post! I'd love to see this in the Main section.
Would you avoid making yourself better at thinking because you might start winning arguments by bamboozling your opponent?
I do avoid making myself better at arguing for this reason. Thinking is another story.
Affordances; men with hammers and all that.
I'm considered pretty good in this respect. I think the #1 thing that helps is just paying attention to things a lot and having a high degree of situational awareness, which causes you to observe more interesting things and thus have more good stories to share. Reading quickly also helps.
When it comes to actually telling the stories, the most important thing is probably to pay attention to people's faces and see what sorts of reactions they're having. If people seem bored, pick up the pace (or simply withdraw). If they seem overexcited, calm it down.
One good environment to practice the skill of telling stories is tabletop role-playing games, especially as the DM/storyteller/whatever. In general, I think standards in this field are usually fairly low and you get a good amount of time to practice telling (very unusual) stories in any given session.
While it's important to avoid encouraging political debates on LessWrong, exercising virtues such as moderation and tolerance when such issues do come up is even more important.
I agree. That's why I looked at advancedatheist's comment history before replying. If this were the only such comment, I would not have called it out-- but this user has a history of posting similar comments.
Now, advancedatheist has also posted comments that advocate neoreactionary positions in ways that I consider totally appropriate for LessWrong-- this one, for example. But IMO there's a clear difference in tone and tenor between that and this.
Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.
The purpose of LessWrong is to discuss and learn rationality, so I think politics are almost never appropriate here. But even if we think that civilized discussion of political matters is appropriate, the post I was critiquing was not, IMO, up to our standards of civility and polite discussion.
For discussion of political matters? A bit late for that, I think. This train has left the station.
Has it? Insofar as it has, that's been thanks to our own failure to tend to basic principles. I think that in order to reach as many people as possible, it's critical that LW avoid politics and the biases that can result.
I do agree that having civilized discussions even while disagreeing about politics is important. But there are other venues for that, like Slate Star Codex, and if we indeed need more of this I think it's better to move it off-site.
Yes, ha ha. This is a serious matter, though. I believe that it really truly doesn't matter whether someone's political points are good or not. LessWrong should not allow itself to be a venue for this sort of behavior, especially when it's accompanied by this sort of tone.
In order for the LessWrong community to flourish, I think it is critical that it be divorced from bickering over political matters. So when it comes to posts like this one, I really truly don't care whether their arguments are valid or not-- either way, they shouldn't be on LessWrong.
I don't particularly care about whether the points are valid. This kind of discussion isn't what LessWrong is for, especially when it's being posted with this sort of tone.
I disagree with the general concept that LW is an appropriate place to post bizarre, mindkilled political rants.
Please stop making comments like this.
The example that springs to mind most readily is that a few days ago, someone asked me if we had a video cable I hadn't heard of in the office. I didn't recognize the name but knew I'd recognize it by sight, so I searched for the name of the cable online, found a picture of it, and directed the person to the right location.
It doesn't take significantly longer for me (I just did a side-by-side comparison and couldn't tell the difference), though I have Google Fiber at home and pretty fast Internet at work. I also didn't notice the ads until you pointed that out and don't consider them particularly annoying.
That said, if these are persistent stumbling blocks then by all means don't use this service. If Goodsearch took 2-3 seconds more than Bing/Google for me I would certainly not use it.
I have some degree of discipline and a pretty good degree of self-awareness, but in the past-- even the recent past-- I've definitely found myself doing shiny but unfulfilling activities for extended periods. It's possible that I've gained a bunch of skill or willpower without noticing it and that this event caused me to shift into a mode that I didn't know how to access before, but this didn't feel like using discipline to me.
I've experimented with different alarms. For some reason the one that seems to work best is very loud and harsh-- not because it wakes me up, but because my subconscious hates it and consistently wakes me up a few minutes before it goes off. I'm not sure what exactly causes this effect but I've found it extremely useful.
Hmm, depends on what you mean by useful. I think lucid dreaming is:
a) very fun
b) useful for becoming more rational, but only in a somewhat limited way-- it can be very good for training the skill of noticing confusion but doesn't seem to have a huge amount of potential beyond this.
This is a line of development that-- while clearly useful-- seems somewhat hacky and unpromising to me. While I agree that it is likely to yield useful benefits in the short run, fixing one's internal structure to reliably produce correct external actions without these sorts of hacks strikes me as more promising in terms of long-term growth and skills.
About a year ago, I thought that lucid dreaming was a great path to rationality. While lucid dreaming is a great way to train the skill of noticing confusion, I no longer recommend it to people asking me for advice on rationality practice, because I think you hit the skill ceiling relatively fast and it doesn't particularly lend itself to further development.
I'm worried that this strategy falls prey to the same flaw-- while it's quite effective in the short run, I think that people using these methods will ultimately have to learn the internal solutions anyway if they wish to progress to more advanced domains. Therefore, it makes more sense to me to just start with the internal solutions.
(Of course, if you need rapid skill growth in the short term, this might well be a useful strategy to adopt-- just be aware of the downsides.)
Extremely good post. I'd love to see more content like this on LessWrong.
"Cake or Death" was part of an Eddie Izzard joke from 1998-- I think it has achieved some kind of widespread memetic success, though, since I've seen it in quite a few places since.
This approach 80/20s the point I want to convey. Writing up a bunch of examples is more work than the entire rest of the post combined and IMO adds substantially less utility, so I'm not doing it here. I'll probably do so when/if I write this up for Main.
If you can reliably emulate a wiser person, why not just be the wiser person?
Yes, I consider this an empirical claim. I have a fair amount of anecdata from people I've shared this with in person about this being a useful approach.
That said, I agree that some may not find this effective or will find it harmful; this is why I wrote "in many cases" rather than "in almost all cases" or "you will find" or similar.
If you do not find this technique effective, I suggest that you don't practice it. I and a few friends found it useful and interesting enough to be worth disseminating.
Wouldn't it be easier to just write down and then assess/ameliorate the biggest weaknesses in your plans?
In theory, yes; in practice, this seems to work less effectively than we'd like to think.
You even mention an example and then still fail to actually give it. That annoyed me because it would have been nice to see this abstract idea grounded.
In general I think LessWrong cares far too much about this sort of detail. I posted this in Discussion rather than Main precisely because I didn't want to write up a bunch of examples to express a straightforward principle.
Is this intended to be an empirical claim?
I'm confused as to what thought process generated this comment. Can you explain?
I agree that "it's arrogant for you to write a book" is probably not helpful, though "you can't finish a project this long" may or may not be helpful depending on whether you generate that thanks to reference class forecasting (even insulting, biased reference class forecasting) or thanks to negative self-image issues.
In general, I do not advocate this (or any other) technique if it causes damage to your self-concept, intrusive thoughts, etc.
I actually find it more effective than the pre-mortem (and the closely related pre-hindsight technique I learned at CFAR). While those techniques are certainly effective, I think it's easy to be too charitable to oneself, even in failure. This version has explicit safeguards against that possibility.
As for the name, certainly it is not fully accurate. That said, being memorable and salient is quite important. The primary failure mode of this sort of technique is not remembering it in the heat of the moment, so I selected a name optimized for being shocking and memorable rather than fully accurate.
Unfortunately, I don't know of an easy way to do this, aside from writing the post in the LW editor-- that said, I think it looks good now. Thank you for editing!
I would appreciate it if you reformatted this post to match standard LW font/text size conventions.
Very funny-- my own choice for a fun-but-useless martial art was the épée!
I used the term antiskill for parallelism with the concept of an antipattern. While I agree it is somewhat imprecise, I think the parallelism, ease of use in speech, and general aesthetic virtue of the term are enough for it to be better than "disadvantageous skill"-- though I had considered that earlier and certainly think it's a potentially valid choice.
The martial arts example I provided in this comment may prove to your liking.
Could you give an example where it helped you decide against learning a skill?
For a while I was interested in learning martial arts for self-defense. Then I realized that a version of me that had advanced martial arts knowledge would be more inclined to fight people, while a version of me that did not have advanced martial arts knowledge would be more inclined to avoid conflict.
Given that fighting someone-- even with advanced/superior skill-- is likely much more dangerous than avoiding conflict, and that there is a risk of injury in martial arts training, I concluded that self-defense martial arts are largely an antiskill and instead pursued martial arts that are useless for self-defense but much more fun.
Could you give actionable feedback?
I would run this post (and future posts) past an editor/proofreader who is fluent in English before posting.
While I like the concept, I think this post needs substantial editing and revision prior to posting on LessWrong.
I don't think so. It's interesting that the LW article makes the top 10 on Google for "win-more card". There is one article about the concept: http://magic.tcgplayer.com/db/article.asp?ID=8867
Hmm, I don't see the LW article on the first page at all. Perhaps this is due to different search customization?
In any case I see several other articles on this topic, as well as many forum discussions about it, people asking whether specific cards are or aren't win-more, etc.