Meetup : Vancouver, Canada: Backyard BBQ 2014-09-05T22:40:23.012Z
Open Thread, May 12 - 18, 2014 2014-05-12T08:16:58.489Z
Open Thread for February 18-24 2014 2014-02-19T12:57:24.600Z
Meetup Report Thread: February 2014 2014-02-17T03:19:51.622Z
Meetup : Vancouver Biweekly Sequences Discussion: Politics Is the Mind-killer 2014-01-10T10:21:25.364Z
Question on Medical School and Wage Potential for Earning to Give 2013-09-27T05:40:36.234Z
Meetup : Vancouver Open Discussion 2013-08-01T09:43:06.142Z
Meetup : Meetup: Vancouver 2013-07-27T04:51:40.244Z
Grad School? 2012-02-27T11:49:23.079Z
Which College Major? 2012-02-06T01:45:11.719Z


Comment by eggman on Polling Thread · 2014-08-25T20:14:33.016Z · LW · GW

I'm only 22, and I don't have much life experience. So, I don't know how pleasing the rewards of such hardships would be, nor do I have a model of how much pain would go into them. However, reading through the scenarios seemed awful, so I rated my willingness to go through with them very low relative to the median response.

I'd be more interested in the same poll restricted to people over the age of forty, asking whether the rewards of hardship were so great that they'd be willing to go through the pain again.

Comment by eggman on Open thread, August 4 - 10, 2014 · 2014-08-08T06:25:09.045Z · LW · GW

Apparently, in the days leading up to the Effective Altruism Summit, there was a conference on artificial intelligence keeping the research associates out of town. My source is my friend, who is interning at the MIRI right now. So, anyway, they might have been even busier than you thought. I hope this has cleared up now.

Comment by eggman on A map of Bay Area memespace · 2014-07-22T08:53:41.273Z · LW · GW

The whole subculture that is the new 'rationality movement' has some nodes, i.e., memes, and subcultures, which are not included in this map of the Bay Area memespace. I'm sitting here at home with my friend Kytael, and we're brainstorming the following:

  • What nodes are part of the rationalist movement that aren't typical of the Bay Area memespace.
  • What nodes aren't part of the rationalist movement that are still part of the Bay Area memespace.
  • What nodes we as a community might want to add to the rationalist memespace.
  • What nodes might enter the rationalist memespace that some parts of the community might consider undesirable.

Nodes Unique to the Rationalist Community

  • Neoreaction
  • Men's Rights Activists/Pick-Up Artists
  • Secular Solstices, Spiritual Naturalism
  • Self-Reflection
  • Hansonian Contrarianism
  • Generalization of Science and Economics to Everyday Life
  • Nerd/Geek Culture

Nodes From the Bay Area Separate From the Rationalist Community

  • Whole Earth community
  • New Age Culture
  • Back-to-the-land movement
  • Kink Culture

Controversial Nodes Within the Rationalist Community

  • Neoreaction
  • Men's Rights Activism, Pick-Up Artists
  • Social Justice

Emerging Subcultures and Memes in the Rationalist Community

  • Post-rationality/Post-rationalism
  • Partnered Dancing
  • (Whatever Is Trending On) Slate Star Codex
  • Applied Rationality=???
  • Psychotropic/Nootropic Use
  • Bitcoin/Cryptocurrency Enthusiasm

New Memes and Groups The Rationalist Community May Want to Explore More

  • Open Borders
  • ...

This list isn't exhaustive, and it could be controversial, so please question, or criticize it below. I will reflexively update this list by editing this comment in response to replies. This was more of a brainstorming exercise than anything, but one I thought other Less Wrong users might consider interesting. If a great discussion results, myself, or someone else, could turn this into a fuller post in its own right.

Comment by eggman on Donating to MIRI vs. FHI vs. CEA vs. CFAR · 2014-07-15T08:47:39.150Z · LW · GW

Is there an update on this issue? Representatives from nearly all the relevant organizations have stepped in, but what's been reported has done little to resolve my confusion, and I remain as divided on it as Mr. Hallquist originally was. Dr. MacAskill, Mr. Ó hÉigeartaigh, and Ms. Salamon have all provided explanations for why they believe the organizations they're attached to are the most deserving of funding. The problem is that this has done little to assuage my uncertainty about which organization is in the most need of funds, and which will have the greatest impact from a donation in the present, relative to each of the others.

Thinking about it as I write this comment, it strikes me as an unfortunate state of affairs when organizations that totally want to cooperate towards the same ends are put in the awkward position of making competing(?) appeals to the same base of philanthropists. This might have been mentioned elsewhere in the comments, but donations to which organization do any of you believe would lead to the biggest return on investment in terms of attracting more donors, and talent, towards existential risk reduction as a whole? Which organization will increase the base of effective altruists, and like-minded individuals, who would support this cause?

Comment by eggman on Donating to MIRI vs. FHI vs. CEA vs. CFAR · 2014-07-15T08:37:00.568Z · LW · GW

If anything, I could use more information from the CEA, the FHI, and the GPP. Within effective altruism, there's a bit of a standard of expecting some transparency from the purportedly effective organizations that are supported. In terms of financial support, this would mean the open publishing of budgets. Based upon Mr. Ó hÉigeartaigh's report above, the FHI itself might be too strapped for time, among all its other core activities, to provide this sort of insight.

Comment by eggman on Bragging Thread, July 2014 · 2014-07-14T15:58:15.531Z · LW · GW

I recently started my career as an effective altruist earning to give by making my first big splash: a $1000 USD unrestricted donation to GiveWell last month.

Comment by eggman on Bragging Thread, July 2014 · 2014-07-14T15:56:38.502Z · LW · GW

Uh, I've trawled through Wikipedia for the causes, and symptoms, of mental illnesses, and, according to my doctors (general practitioner, and psychiatrist), I've been good at identifying what I'm experiencing before I've gone to see them about it. The default case is that patients just go to the doctor, report their symptoms, answer questions about their lifestyle lately, and the doctors take care of diagnoses, and/or assigning treatment. I choose to believe that I have such clarity about my own mental processes because my doctors tell me how impressed they are when I come to them seeming to already know what I'm experiencing. I don't know why this is, but my lazy hypothesis is chalking it up to me being smart (people I know tell me this more than I would expect), and that I've become more self-reflective after having attended a CFAR workshop.

Of course, both my doctors, and I, could be prone to confirmation bias, which would be a scary result. Anyway, I've had a similar experience of observing my own behavior, realizing it's abnormal, and being proactive about seeking medical attention. Still, for everyone, diagnosing yourself by trawling Wikipedia, or WebMD, seems a classic example of an exercise prone to confirmation bias (e.g., experiencing something like medical student's disease). This post is a signal that I've qualified my concerns through past experience, and that I encourage you to both seek out a psychiatrist, as I don't expect that to result in a false negative diagnosis, and also to still be careful as you think about this stuff.

Comment by eggman on This is why we can't have social science · 2014-07-14T15:42:01.579Z · LW · GW

Scientists, as a community of humans, should expect their research to return false positives sometimes, because that is what is going to happen, and they should publish those results. Scientists should also expect experiments to demonstrate that some of their hypotheses are just plain wrong. It seems to me replication is not very useful only if the replications of the experiment are likely prone to all the same crap that currently makes original experiments in social psychology not all that reliable. I don't have experience, or practical knowledge of the field, though, so I wouldn't know.

Comment by eggman on Confused as to usefulness of 'consciousness' as a concept · 2014-07-12T23:13:23.486Z · LW · GW

Insofar as it's appropriate to post only about a problem well-defined rather than having the complete solution to the problem, I consider this post to be of sufficient quality to deserve being posted in Main.

Comment by eggman on Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild · 2014-07-09T10:11:18.099Z · LW · GW

You're welcome.

Comment by eggman on Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild · 2014-07-09T08:54:14.555Z · LW · GW

I figured I would do my due diligence for the sake of the community, or whatever, so I downvoted this post. Note that I'm a newer user of Less Wrong who isn't very familiar with Mr. Newsome's history of shenanigans on this website. So, I didn't have an automatic reaction to cringe, or something, when I encountered this piece. I downvoted this post based upon its own, singular lack of merit.

Mr. Newsome, here is some criticism I hope you appreciate.

Nothing about this first chapter is enticing me to care about 'post-rationality', whatever that is. Eliezer Yudkowsky took a premise everyone was familiar with, and turned it on its head during the first chapter. He used a narrative format that was familiar, and he actually wrote well. While the first chapter of Harry Potter and the Methods of Rationality didn't immediately begin with an introduction of what the "methods of rationality" as applied to magic would be, per se, there was enough of that in the first chapter to keep others reading.

In hindsight, Mr. Yudkowsky couldn't have expected his fan fiction to become so popular, or so widely read. The fact that it has might be biasing me into thinking that his first crack at writing the fan fiction was better than it really is.

Anyway, it seems you're trying to do too much with this piece. Harry Potter and the Methods of Rationality is the premise everyone here is familiar with, but you've done more than just turn it on its head. You've turned the very idea of having a deep familiarity with the tropes of Less Wrong on its head. The first paragraph is just a blast of memes; I'm familiar with all of them, but I don't understand what all of them mean. The first part is incoherent, and signals that you have the knowledge to mock (in jest) the Less Wrong community. That in itself isn't clever, and the rest of the piece isn't clever enough as a parody to keep us, the readers, engaged.

I perceive the second part of this chapter to be a bit funny, but it doesn't build upon anything to get me to care. I don't believe it will be sustainable to have Potter-Yudkowsky be aware that he is in a meta-fan-fiction. If the protagonist confronts you, the author, as the controller of the world he is simulated within, he can at best only engage with a caricature of yourself as you've written it. It's difficult for me to think of how you would handle that without it becoming boring, unless you're very talented, and creative. If Potter-Yudkowsky realizes he can use his awareness to gain superpowers, that quickly destroys the suspension of disbelief in the fantasy world the reader immerses themselves in, which would also be boring. Finally, based upon how this chapter has played out, it would be difficult to maintain great continuity into the next chapter, which I would personally find frustrating, and challenging, as a reader.

This reads as the first part of some absurdist fiction. Still, it contains little foresight. The fact that you were drunk when this chapter was written, and posted, leads me to suspect that you wanted to post something which would be entertaining to yourself, but which wasn't crafted with much thought to how it would be received by whatever readership you were hoping for.

In short, this doesn't strike me as a direct parody of Harry Potter and the Methods of Rationality, but a parody of the rationalist community itself(?). That's such an odd thing to do that I find it off-putting, and I consider it this piece's undoing.

If you think I'm being unfair, note that HPMOR isn't posted here, just referenced. If you actually want to work on writing as you've claimed, rather than trolling, maybe another site is the better place.

It seems to me you're aware of your own writing, compared to the body of fiction you're already familiar with, such that you know how to write in the typical style, or cadence, of long-form narratives. That is, you can, or could, write good fiction. I don't even know that you need to work on your style. Maybe what you need to hone is the broader strokes of planning a piece with a consistent theme, or structure, that would be appealing to the readership you're hoping for. Obviously, the rationalist community is the readership you're aiming for. Presumably, you have the knowledge to produce funny content that would be better appreciated. Starting on another site might be the way to go.

Comment by eggman on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-09T22:28:33.174Z · LW · GW

Upvoted. My thoughts:

  • For full disclosure, I don't consider myself very successful in real life either, and my ambitions are also much higher than where I am now. This is a phenomenon that my friends from the Vancouver rationalist meetup have remarked upon. My hypothesis is that Less Wrong selects for a portion of people who are looking to jump-start their productivity to a new level of lifestyle, but mostly selects for intelligent but complacent nerds who want to learn to think about arguments better, and who like reading blogs. Such behavioral tendencies don't lend themselves to getting out of the armchair more often.

  • Mr. Bur, I don't know if you're addressing me specifically, or the users reading this thread generally, but, like Mr. Kennaway, I agree wholeheartedly. I personally don't feel extremely qualified to rewrite the core of Less Wrong canon, or whatever. I want to write about the stuff I know, and it will probably be a couple of months before I start attempting to generate high-quality posts, as in the interim I will need to better study the topics which I care about, and which I perceive not to have been thoroughly covered by a better post on Less Wrong before. I believe the best posts in Discussion in recent months have been based on specific topics, like Brienne Strohl's exploration of memory techniques, or the posts discussing the complicated issues of human health, and nutrition. By fortuitous coincidence, Robin Hanson has recently captured well what I believe you're getting at.

  • My prior comment got a fair number of upvotes for the hypothesis about why there was an exodus from Less Wrong of the first generation of the most prominent contributors to Less Wrong. However, going forward, my impression of how remaining users of Less Wrong frame the purpose of using it is a combination of Mr. Bur's comment above, and this one.

Note: edited for content, and grammar.

Comment by eggman on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-06T20:01:16.910Z · LW · GW


I became part of much of the meatspace rationalist community before I started frequently using Less Wrong, so I integrate my personal experience into how I comment on here. That's not to say that I use my personal anecdotes as evidence when giving advice to other users of this site; I know that would be stupid. However, if you check my user history on Less Wrong, you'll notice that I primarily use Less Wrong as a source of advice for myself (and my friends, too, who don't bother to post here, but I believe should). Anyway, Less Wrong has been surprisingly helpful, and insightful.

This has all been since 2012-13, mostly, well after when it seems most of you consider Less Wrong to have started declining. So, I'm more optimistic about Less Wrong's future, but my subjective frame of reference is having had good experiences with it after it hit its historical peak of awesomeness. So, maybe the rest of the users here concerned (rightfully so, in my opinion) about the decline of discussion on Less Wrong have hopped on a hedonic treadmill that I haven't hopped on yet.

The good news is that I feel excited, and invigorated, to boost Less Wrong Discussion in my spare time. I like these meta-posts focused on solving the Less Wrong decline/identity-crisis/whatever-this-problem-is, and I want to help. In the next week, I'll curate another meta-post summarizing, and linking to, all the best posts in Discussion in the last year. Please reply to me if this idea seems bad, or unnecessary, to stop me from wasting my time writing it up, if you believe that's the case.

Comment by eggman on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-06T19:45:01.633Z · LW · GW

My friend kytael (not his real name, but his Less Wrong handle) has been on Less Wrong since 2010, has been a volunteer for the CFAR, and lived in the Bay Area for several months as part of the meatspace rationalist community there. For a couple of years, I was only a lurker on Less Wrong, and occasionally read some posts. I didn't bother to read the Sequences, but I had already studied cognitive science, and I attended lots of meetups where the Sequences were discussed, so I understand much of the canon material of Less Wrong rationality, even if I wouldn't use the same words to describe the concepts. It's only in the last year and a bit that I got more involved in my local meetup, which motivated me to get involved in the site. I find myself agreeing with lots of the older Sequence posts, and the highest-quality posters (lukeprog, Yvain, gwern, etc.) from a few years ago, but I too am deeply concerned about the decline of vitality on Less Wrong, as I have only started to get excited about its online aspects.

Anyway, when I too asked kytael:

What should the purpose of this site be? Is it supposed to be building a movement or filtering down the best knowledge?

(I asked him more or less the same question)

He replied: "I think the best way to view Less Wrong is as an archive."

Since he was tapped into the Bay Area rationalist community, but was also a user of Less Wrong from outside of it, he was in an especially good position to offer hypotheses, based on his observations, as to why use of this website has declined.

First of all, the most prominent figures of Less Wrong have spread their discussions across more websites than this one; much of the discussion from popular users who used to spend more time on Less Wrong now happens elsewhere. Scott's/Yvain's Slate Star Codex is probably the best example of this, another being the Rationalist Masterlist. Following a plethora of blogs is much more difficult than just going through this one site, so for newer users of Less Wrong, or those of us who haven't had the opportunity to know users of this site more personally, following all this discussion is difficult.

Second of all, the most popular, and active, users of Less Wrong have integrated more publicly, and now use social media. Ever since the inception of the CFAR workshops, users of Less Wrong have flocked to the Bay Area in throngs. They all became fast friends, because the atmosphere of CFAR workshops tends to do that (re: anecdata from my attendance there, and that of my friends). So, everyone connects via the private CFAR mailing lists, or Facebook, or Twitter, or they start businesses together, or form group homes in the Bay Area. Suddenly, once these people can integrate their favorite online community, and subculture, with the rest of their personal lives, there isn't a need to communicate with others only via the awkward blog/forum site.

Finally, Eliezer Yudkowsky, and others, started Less Wrong having already reached the conclusion that the best, 'most rational' thing for them to do was to reduce existential risk. Eliezer Yudkowsky wrote the Sequences as an exercise for himself, to re-invent clear thinking to the point where he would be strong enough to start tackling existential risk reduction, because he wasn't yet prepared for it in 2009. Secondarily, he hoped the Sequences would serve as a way for others to catch up to his speed, and approach his level of epistemology, or whatever. The instrumental goal of this was obviously to get more people to become awesome enough to tackle existential risk alongside him. That was five years ago. As a community goal, Less Wrong was founded as dedicated to 'refining the art [and (cognitive) science] of human rationality'. However, the personal goal of its founders, from what was the SIAI, and is now the MIRI, was to provide a platform, a springboard, for getting people to care about existential risk reduction. Now, as the MIRI enters its phase of greatest growth, the vision of a practical 'rationality dojo' finally exists in the CFAR, and with increased mutual collaboration with the Future of Humanity Institute, the effective altruism community, and global catastrophic risk think tanks, those who were the heroes of Less Wrong use the website less, as they've gotten busier, and their priorities have shifted.

They wanted to start a community around rationality, to improve their own lives, and those of others. Now they have it. So, those of us remaining can join these other communities, or try something new. The tools for those who want this website to flourish again remain here in the old posts: Eliezer, Luke, and Scott, among others, laid the groundwork for us to level up as they have. So, aside from everything else, there could be a second generation, a revival of Less Wrong, where new topics, ones that aren't mind-killing, can be explored. If those among us who care do the hard work to become the new paragon users of Less Wrong, we can reverse its Eternal September.

After this primary exodus from Less Wrong, others occurred as well. I personally know one user who had some of the most upvoted, and some featured, posts on Less Wrong until he stopped using this website, and deleted his account. Now, he interacts with other rationalists via Twitter, and is more involved with the online Neoreaction community. It seems like a lot of Less Wrong users have joined that community. My friend mentioned that he's read the Sequences, and feels like what he is thinking about is beyond the level of thinking occurring on Less Wrong, so he no longer found the site useful. Another example of a different community is MetaMed: Michael Vassar is probably quite busy with that, and brought a lot of users of Less Wrong with him in that business. They probably prioritize their long hours there, and their personal lives, over taking time to write blog posts here.

Personally, my friends from the local Less Wrong meetup, and I, are starting our own outside projects, which also involve students from the local university, and the local transhumanist, and skeptic, communities as well. Send me a private message if you're interested in what's up with us.

Comment by eggman on Brainstorming for post topics · 2014-06-01T07:25:50.662Z · LW · GW

In addition to my upvote, this comment is confirmation I, for one, would be interested in this.

Comment by eggman on Open Thread, May 26 - June 1, 2014 · 2014-05-30T10:58:04.460Z · LW · GW

A Proposal to Organize a Pharmacology of Nootropics Lecture, and Presentation

Comment by eggman on Strawman Yourself · 2014-05-21T03:49:55.403Z · LW · GW

I'd suggest just being slightly more suspicious of insulting arguments that make claims about your character sucking (immutably) than ones about the way you've laid out the plan.

It seems katydee may have made a mistake in choice of language here by conflating "yourself" with "your plans". To nitpick, it might be better to consistently refer to the thing to be strawmanned as "your plan(s)", and not use "you" at all. If one wants to generate an argument to point out flaws in one's own plans, strawmanning yourself is like launching an ad hominem attack upon oneself. When somebody is looking to improve only one plan targeted at a (very) specific goal, strawmanning the plan rather than one's own character would seem to illuminate the relevant flaws better.

Of course, if somebody wants to prevent mistakes in a big chunk of their life, or in their general template for plans, then might be the time when strawmanning one's own character is more worthwhile.

Comment by eggman on Open Thread, May 12 - 18, 2014 · 2014-05-13T10:20:33.637Z · LW · GW

It doesn't seem like the webmasters, or administrators, of Less Wrong receive these requests as signals. Maybe try sending them a private message directly, unless the culture of Less Wrong already considers that inappropriate, or rude.

Comment by eggman on Open Thread, May 12 - 18, 2014 · 2014-05-12T08:20:09.917Z · LW · GW

Does anyone understand how the mutant-cyborg monster image RationalWiki uses represents Less Wrong? I've never understood that.

Comment by eggman on LessWrong as social catalyst · 2014-05-08T03:33:44.559Z · LW · GW

It's a weird phenomenon, because even those lurkers with accounts who barely contribute might not state how they've not socially benefited from Less Wrong. However, I suspect the majority of people who mostly read Less Wrong, and are too passive to insert themselves deeper into the community, are the sorts of people who are also less likely to find social benefit from it. I mean, in my own experience, and that of my friends, and the others commenting here, they took the initiative to at least, e.g., attend a meatspace Less Wrong meetup. This is more likely to lead to social benefit than Less Wrong spontaneously improving the lives of more passive users who don't make their presence known. If one is unknown, that person won't make the social connections which would bear fruit.

Comment by eggman on LessWrong as social catalyst · 2014-05-08T03:28:18.221Z · LW · GW

Yeah, I'd second that. Someone could make a Google survey form, or comment thread poll, asking which users commenting here would be open to having their success stories published in some capacity, whether here on the blog, or a more widely shared piece of literature.

Comment by eggman on LessWrong as social catalyst · 2014-05-08T03:24:41.103Z · LW · GW

Mr. Hurford, I know you're a prominent writer within the effective altruist community, among other things (e.g., producing software, and open-source web products, through running .impact). As someone who initially encountered effective altruism, and then Less Wrong, do you have a perspective on how, or how much, Less Wrong has amplified the success of effective altruism as a social movement within the last couple of years?

Comment by eggman on LessWrong as social catalyst · 2014-05-08T03:19:06.376Z · LW · GW

I'm curious if there are any other variables that might account for you not achieving what you hoped you might by connecting through Less Wrong. For example, many regular attendees of the Vancouver meetup have wanted to get great jobs, move into a house with their rationalist friends, or move to the Bay Area to be part of the central party. However, they haven't done much of this yet, despite having wanted to with other local rationalists for a couple of years. The fact that most of us are university students, or have only recently launched our careers, throws a wrench into ambitious plans to utterly change our own lives, because the effort my friends might have directed towards that is already taken up by their need to adapt to the regular responsibilities of fully-fledged adulthood. For our part, I figure the planning fallacy, and overconfidence, caused us to significantly overestimate what we would really achieve as members of a burgeoning social subculture, or whatever.

Comment by eggman on May Monthly Bragging Thread · 2014-05-06T21:35:57.093Z · LW · GW

I figured upvotes in the monthly bragging thread would be solely a function of how much demonstrated utility was achieved. However, this is my second-most upvoted comment of all time, with the most upvoted being similar: a terse comment with just enough data to make it seem substantial, but one that is full of warm fuzzies. So, writing 'Yay! I'm winning!' for a mundane goal, like doing minimal exercise, might be at least as powerful as providing a long, and modest, explanation for doing something which signals much more greatness in real life. Below mine, other users have commented that they've:

  • cemented an academic career with a lifestyle they love.
  • gave a technical presentation to hundreds of people.
  • became adequately competent in Python to start a fully-fledged web project.
  • made substantial advancement in launching a career as a statistician.
  • made a regular habit of building skills that are more crucial to success than 'walking around'.

To me, all of the above seems more impressive than my 'walking around a bunch'. My hypothesis is that I signaled my success in a simpler package, so it was easier to process, and pressing the 'upvote' button was an easier, and lazier, investment. If you upvoted me, why? What's going on?

Comment by eggman on May Monthly Bragging Thread · 2014-05-04T19:44:40.176Z · LW · GW

I bought a pedometer to track my steps, so I could achieve my goal of taking 10000 steps every day, and have a motivation to go outside, and do some light exercise. Before I bought the pedometer, I was doing no regular exercise. I've met my goal of 10000 steps every day in the week since I bought it, so I've increased my goal to 12000 steps every day.

Comment by eggman on LessWrong as social catalyst · 2014-04-29T07:27:43.520Z · LW · GW

Location: Vancouver, Canada

I was introduced to Less Wrong by a long-time friend who had been reading the website for about a year before I first visited it. Over time, I've generally become more integrated with the community. Now, a handful of my closest friends are ones I've met through the local meetup. Also, together with related communities, the meetup does a lot to facilitate presentations, skill-sharing, and knowledge-sharing between people.

I know that several of my fellow meetup attendees have also made great friends through the meetup. There has been at least one instance of two of them becoming roommates, and now a few of my friends are trying to put together a 'rationalist house' this summer.

For those not in the know, a 'rationalist house' is a group home based around the intentional meatspace communities that have arisen around this website, so as to create a better living environment where new domestic norms can be tried. There are several in the Bay Area, at least one in Melbourne, probably one in New York(?), etc.

The founder of our meetup, who doesn't visit this website much anymore, but is generally in contact with the meetup otherwise, made connections with a successful financial manager who more-or-less became a mentor for him. Based upon the mentor's advice, this friend of mine is now trying to launch his own software company.

Several of us from the meetup have attended a CFAR workshop, including myself, and my friend who introduced me to Less Wrong has done ongoing volunteer work for them over the last year. As a result, we've become friends, and acquaintances, of much of the rationalist community in San Francisco. Additionally, a few of my friends have been spurred to involvement with other organizations based in the Bay Area (e.g., Y Combinator, the MIRI, Landmark). He also started an ongoing swing dancing community in Berkeley while he lived there, because memes.

Less Wrong introduced my friends and me to the effective altruism community, which infected a few of us with new memes for doing good, spurring at least one of us so far to donate several thousand dollars to organizations and projects like the global prioritization research currently being jointly executed by the Future of Humanity Institute and the Centre for Effective Altruism.

For the sake of their privacy, I'm not posting the names of these individuals, or their contact information, directly here on the public Internet, but if you'd like to get in touch with them to ask further questions, send me a private message, and I can put you in touch with them.

Comment by eggman on Evaluating GiveWell as a startup idea based on Paul Graham's philosophy · 2014-04-21T05:28:36.559Z · LW · GW

I agree with you, so I've edited my comment a bit to account for your nitpick. See above. Thanks for making the point.

Comment by eggman on Be comfortable with hypocrisy · 2014-04-12T18:39:39.362Z · LW · GW

Yes, it's a joke.

Note: edited for grammar.

Comment by eggman on Evaluating GiveWell as a startup idea based on Paul Graham's philosophy · 2014-04-12T17:01:52.944Z · LW · GW

Disclosure: the following point is tangential to Givewell, and is more about start-ups.

It strikes me as paradoxical that users of Less Wrong, and the rationalist community, endorse founding a start-up as great 'rationality training', and view very successful entrepreneurs as paragons of rationality in the practical world, yet Paul Graham notes in his essays that it may often be only in hindsight that entrepreneurs can assess the strategies they implemented as good, such that they 'got lucky' with their success. 'Getting lucky' here maybe[1] implies that the entrepreneurs in question might not be such paragons of practical rationality after all.

Mr. Graham's partial solution to this problem is stating that if you're the right sort of person, you'll have the right sort of hunches. I believe what Mr. Graham is referring to here is what Luke Muehlhauser has identified as, and labeled, "tacit rationality".

If you're an entrepreneurial type looking to start a business, or even an effective altruist looking to start an especially effective non-profit organization, or research foundation, you probably want to know if you're the "right sort of person who has the right sort of hunches". Simply believing so, and betting on that, I believe, is prone to the sorts of biases which are common knowledge around here, so we shouldn't expect the outcome in such a case to be very favorable. So, the options come down to one of the following:

*Figuring out if you already are tacitly rational, like Mark Zuckerberg, or Oprah Winfrey, apparently.

*Transforming yourself from a geek who knows about biases, but does nothing about them, to someone who achieves practical success at an increasing, and predictable, rate, due to their own efforts.

From the conclusion of his post on explicit, and tacit, rationality, here are Mr. Muehlhauser's tips for performing the above tasks:

If someone is consistently winning, and not just because they have tons of wealth or fame, then maybe you should conclude they have pretty good tacit rationality even if their explicit rationality is terrible. The positive effects of tight feedback loops might trump the effects of explicit rationality training. Still, I suspect explicit rationality plus tight feedback loops could lead to the best results of all. If you're reading this post, you're probably spending too much time reading Less Wrong, and too little time hacking your motivation system, learning social skills, and learning how to inject tight feedback loops into everything you can.

[1] Due diligence: the comment below points out well how my original use of language in this sentence was a universal claim, which isn't justified. So, I've retroactively edited this sentence to make my claim only an existential one.

Note: edited for formatting, nuance, and grammar.

Comment by eggman on Polling Thread · 2014-04-11T18:13:34.551Z · LW · GW

It could very well be phony information. My point is that I'm an absurd nerd, because Less Wrong, so I want to ground my beliefs as well as possible, but I feel very ambivalent about the issue of vegetarianism because there is so much noise about diets, and economics, and ethics, and aaahhh...

Comment by eggman on Polling Thread · 2014-04-08T07:06:57.282Z · LW · GW

I gave a full explanation of my reasons for part-time vegetarianism above, but Lumifer's statement generally accounts fully for what I choose to eat.

+1 to his comment.

Comment by eggman on Polling Thread · 2014-04-08T07:04:47.578Z · LW · GW

I identify as a flexitarian, meaning I'm a part-time vegetarian. When it's convenient, I will avoid eating meat. This is usually at restaurants, almost all of which in my city have a vegetarian, if not vegan, option on their menu, or when I'm cooking at home, and there is something available in the fridge other than animal flesh, or byproducts. In this regard, my biggest 'vice' is that I don't make much effort to restrict my consumption of dairy products, since I'm under the impression that dairy products don't cause much harm to cattle relative to how much suffering is inflicted upon other animals used to generate food for humans.

One major reason I try to reduce my meat consumption when it's a convenient option is that I don't exercise much, so I counteract the negative effects this might have on my health in the meantime by consuming fewer calories. Otherwise, my part-time vegetarianism is motivated by my ethics, although I feel very ambivalent about my diet. For example:

  • I commonly encounter reports from mainstream media about how, to prevent environmental degradation via climate change, humans must reduce the amount of meat we eat per capita; e.g., the UN has published reports in this regard in the last couple of years. On the other hand, I've encountered counter-arguments about how, if we all became full-time vegetarians in North America, we would have to import resource-intensive soy products from the other side of the world, causing a bunch of pollution in the process anyway.

  • I'm well aware that, if they have the capacity to suffer, animals on factory farms indeed suffer very much. That pulls at my heartstrings, or makes me sad, and empathetic to their suffering, or what have you, so I would like to be a part of easing that suffering. However, my life is full of moral uncertainty, because all the wisest people I turn to in life are split between eating meat, or not. Also, I'm not extremely confident in how sentient non-primates are, or what their capacity to suffer is. Furthermore, if societies moved to abstaining from animal (by)products, we might end up cultivating more land, resulting in the deaths of small vertebrates, e.g., rodents, and insects, who might also suffer, and die horribly. So, I find appealing the argument for erring on the side of caution by not eating animals even in the face of uncertainty about their moral relevance, but I'm not so confident in its truth that I forgo eating meat entirely.

  • I fear being in a maligned out-group like vegetarians, and what moral fortitude I tell myself I have falls prey to the same convenient biases everyone experiences in the face of brains getting what they want now, and damn our ideals, so I eat more meat than I would otherwise believe is morally acceptable of myself. In this regard, I might be a hypocrite.

Comment by eggman on Open thread, 18-24 March 2014 · 2014-03-23T02:02:17.487Z · LW · GW

Thanks for the information. In that case, I hope there is another opportunity in the future to ask which blogs are featured on the side panel. I don't know what anyone else is looking for, but as far as I'm concerned, I check these other rationality blogs as often as I check things posted directly to Less Wrong. I find Slate Star Codex, and Overcoming Bias, particularly interesting. Anyway, if other people gain similar value from these other blogs, perhaps more blogs could be added in the future. I understand that if each of us freely suggested what blogs we individually considered 'rational', there would be lots of noise, redundancy, and swamping of the forum with poor suggestions. So, I may start a poll in the future asking which blogs the community as a whole would like to see added.

Comment by eggman on Open thread, 18-24 March 2014 · 2014-03-22T06:54:25.425Z · LW · GW

What's the process for selecting what 'rationality blogs' are featured in the sidebar? Is it selected by the administrators of the site?

I'm surprised some blogs of other users with lots of promoted posts here aren't featured as rationality blogs.

Comment by eggman on Open thread, 11-17 March 2014 · 2014-03-17T05:37:03.970Z · LW · GW

I want to apply for the open conversation notes writer position at GiveWell. The application seems quite straightforward, but I'm wondering if there is anything I should consider, because I would love to be hired for this job.

Do you have any suggestions for how I could improve an application?

For the application, I must submit a practice transcription of a Givewell conversation. I'm wondering, specifically, if there are any textbooks, guides to style, or ways of writing I should consult in preparation. Obviously, I must write the transcription myself, and not plagiarize, or whatever.

Comment by eggman on Polling Thread · 2014-03-03T04:43:27.858Z · LW · GW

Disclosure: my votes for the above poll are not anonymous. I want people to be aware of how I voted, because I state the following: my votes for this poll are limited to my perception of Less Wrong over only the last few months, as of the date of this comment, which is the period of time in which I have started checking Less Wrong on a semi-daily basis.

Comment by eggman on Is love a good idea? · 2014-03-02T05:20:28.346Z · LW · GW

I know, I know...I tend to write in a verbose, and long-winded, manner. Like, longer than the above comment. It was about 20% longer, so I edited out the material that I didn't believe would actually clarify the questions I was asking, or that I believed wouldn't be at all valuable to adamzerner. I was at a loss for words other than 'edited for brevity'. In terms of writing, I believe I'm decent at getting my thoughts out of my head. However, writing more compactly is a skill I need to improve upon, and I intend to do so.

Also, I aim to be quite precise with my language, so I tend to provide more detail in my examples than I believe might be necessary, in an attempt to prevent as much confusion for the reader as I can.

Comment by eggman on Open Thread for February 18-24 2014 · 2014-02-25T09:12:33.140Z · LW · GW


Comment by eggman on Is love a good idea? · 2014-02-24T23:21:19.006Z · LW · GW

While others have remarked that you're responding to a "Hollywood" conception of romance, I also want to point out that you aren't the only person who perceives romance this way. The surface perfection of romance is something people like to signal about their relationships. Even in cases where people are cheating on one another, or the relationship is falling apart, or mired in abuse, or conflict, they like to publicly signal that things are still going well, or at least not going horribly. If you searched for 'romance', or 'relationships', on Overcoming Bias, you could find some decent material on signaling within sexual relationships. Additionally, media besides Hollywood movies shove an archetype of romantic relationships down our throats all the time. So, perceiving all this, a great many people view relationships in this manner. This is probably skewed towards younger people, although it's also been remarked in this thread that some people go through this for decades.

Mr. Zaman's comment seems to point out that a key to finding a relationship that avoids all these things about love which would frustrate you is that you can find the right person to do so. I don't know how to do that myself, per se, other than suggesting you try OKCupid, or altering your social circle to include more people who have a similar mindset, and then dating from within there.

I believe you're correct that in a substantial portion of relationships, one partner telling the other (realistically) that they're not the best possible person, and that it could be quite possible to find another one, would be hurtful. I believe that might be hurtful in some relationships only because the other interlocutor won't understand why you're stating obvious but hurtful facts, as if you're signaling something mysterious. I wouldn't worry about that, though. So, there are people who have fooled themselves into thinking relationships ought to be like an idealized romance. Perhaps you could try observing other relationship styles where you can, or read about them on some blog which is, I don't know, contra-romantic, and that could change your perception of how people practically love one another.

Comment by eggman on Open Thread for February 18-24 2014 · 2014-02-24T06:28:52.551Z · LW · GW

This made me laugh out loud a lot. I never expected that in a thread on Less Wrong. It was charming.

Comment by eggman on Open Thread for February 18-24 2014 · 2014-02-24T06:18:09.172Z · LW · GW

I just want to thank all of you, as both individuals, and as a community, for being a decent place for discourse. In the last few months, I've been engaging with Less Wrong more actively. Prior to that, I mostly tried asking for opinions on an issue I wanted analyzed on my Facebook. On Facebook, there has typically been one person writing something like 'ha, this is a strange question! [insert terrible joke here]'. Other than that, radio silence.

On Less Wrong, the typical response is that people don't think I'm weird because I want to analyze stuff outside of the classroom, or question things outside of a meeting dedicated to airing one's skepticism. On Less Wrong, typical responses to my queries correct me directly, without beating around the bush, or fear of offending me. All of you ask me to clarify my thinking when it's confused. When you cannot provide an academic citation, you try to extract what relevant information you can from the anecdotes of your personal experience. I find this greatly refreshing.

I created this open thread to ask a specific question, and then I asked some more. Even just from this open thread, the gratification I received from being taken seriously has made me eager to ask more questions, and read other threads in Discussion. Reading responses to posters besides myself, and further responses to my comments, has made me feel the same way. For all I know, there are big problems with Less Wrong to be fixed. However, I'm surprised I found somewhere this non-awful on the Internet at all. So, thanks.

Comment by eggman on Open Thread for February 18-24 2014 · 2014-02-24T05:55:38.820Z · LW · GW

I'm closer to personally experimenting with nicotine, so I'd like to look over these meta-analyses. I can access academic, and medical, journals myself, but I don't know which ones to search in. In which journal can I find this citation? (please and thanks)

Comment by eggman on Is love a good idea? · 2014-02-24T05:44:02.197Z · LW · GW

If you try starting such a conversation, I suggest using more examples than you have thus far. If you don't feel comfortable providing personal anecdotes as examples, feel free to PM me. In that case, I'll start the conversation, because I do have anecdotes/examples I am willing to, and can, share.

Comment by eggman on Is love a good idea? · 2014-02-24T05:30:11.445Z · LW · GW

"I like you a lot. You make me happy. But there's probably at least tens of thousands of people in the world that can provide me with what you're providing me. So you're replaceable, and if we broke up, I'd get over it after a few days/weeks and find someone else.

Mr. Zerner, a problem with your counter-argument is that you aren't actually going to meet the tens of thousands of hypothetical people who could satisfy all the same desires and needs a current romantic partner is meeting. You won't even meet one hundred, or, like, thirty. If you're lucky, you could meet a dozen other people who satisfy you romantically as much as the romantic partner you loved the most. You could take an approach of shallowly connecting with as many women you think are very compatible with you as you can find. However, unless you're some Casanova, that seems like a poor strategy for creating a loving relationship, or several of them (rapidly). So, that doesn't seem like a sound argument for "sorry, my darling, but you're replaceable". I believe this would hold true for almost everyone who would make this case.

Also, it doesn't seem like you qualify how much you (would) love a given significant other. Depending on how much value the person brought into your life, it could take months, or even years, to overcome the loss of their companionship, rather than days, or weeks. It could also take you that long to find someone to replace her with, who provides just as much value to your life. I could be generalizing too much from the example of my own experience, but it's the rare person who replaces the most significant romantic partner they've had with another in the span of only a few weeks, or a couple of months. I wouldn't be surprised if some people are quicker, or better, at this task than the average. So, Mr. Zerner, unless you have great reason to believe you're above-average in this regard, don't discount the expected costs of finding a new partner so much.

Also, I do like you and care about you (more than a big majority of things), but not nearly as much as I do about myself. And not nearly as much as I do about knowledge, understanding, and my ambitions. So I might get obsessed about these things, stop caring about you as much, and consequently, break up with you. In fact, there's a lot I'm confused about and for that reason, I can't make anything close to a commitment. I don't know if I should be trying to be happy, being an effective altruist, combatting death, or trying to seek knowledge and understanding of the world we live in.

You're signaling that you have ambitions in life which are more praiseworthy, or laudable, or of a higher caliber, than just pursuing purely selfish ends. You're explaining to a hypothetical romantic partner what else you want to do in life. It seems like you're also trying to explain that to us as readers as well. The way you're asking if love is a 'good idea' seems to be about if committing all the time and effort to something like marriage would require is worth the opportunity cost of not being able to spend that time and effort (trying) to save the world.

I suspect you're asking "how do I balance a commitment to such a lifestyle, while still appearing, and being, normal enough to do (many of) the typical things typical humans do to be happy?" I suspect you're asking these questions, not only in the interest of playing out an argument, but because, probably judiciously so, you don't have a 'gung-ho', confident solution to this personal conundrum.

You're not the only person with such concerns. I'm a nerd interested in saving the world while being awesome as well. I have similar concerns about committing too much to a single person, or to my family, at the expense of saving thousands of other lives, or whatever. Your concerns are shared by others in this community, and we don't have all the answers. It seems that other folks 'well on their way' to saving the world have encountered this problem as well, yet they haven't given up on making commitments to love others, or without giving up other things which don't broadly benefit others. We could learn much from them.

Ideally, I would prefer that the practical conclusions resulting from discussions on Less Wrong could generalize to, and be implemented within, as many of its readers' lives as possible. So, I don't mean for this response to be critical of your personality, and I hope raising these points hasn't offended you. I believe it would be better if you were to clarify what your true concern here is, though, and summon the gumption to address it to us more directly, because then we could have a clearer discussion that benefits, and interests, more of us.

they stop being strategic because they stop considering alternatives to the monogamous relationship they have with their SO. None of this seems rational to me.

Maybe you really will need a significant other to rely on you less, unless they meet stringent conditions, because you cannot commit to them deeply. Only a small minority of people who commit themselves greatly to a cause are capable of that. It seems most people don't forgo the strong social bonds most everyone else acquires, not because they hate the idea, but because going without them makes them miserable. So, maybe loving someone so much seems irrational when that effort could be spent on other ideas which seem so much more valuable, on paper, than just loving one person.

We can discuss committing to both personally love others, and to making great accomplishments. That seems like a different discussion than this one, though.

Note: edited for brevity.

Comment by eggman on Open Thread for February 18-24 2014 · 2014-02-22T23:29:24.085Z · LW · GW

Yeah, possibly. Anyway, I'll avoid these mistakes in the future.

Comment by eggman on Open Thread for February 18-24 2014 · 2014-02-20T20:34:12.065Z · LW · GW

Thanks for replying, TylerJay. Did you notice they became addictive immediately, or after a graduated period of use? If the latter, what was the frequency, or quantity, of your daily consumption of e-cigarettes? Is there any way you believe one might be able to avoid addiction to e-cigarettes?

Comment by eggman on Open Thread for February 18-24 2014 · 2014-02-20T20:32:14.745Z · LW · GW

Gwern noted in his analysis of nicotine that to overcome dependency effects, the user could cycle between the different nootropics they use. For example: a three-day cycle of nicotine, then caffeine, then modafinil, then repeat, starting over with nicotine.

Over the course of several months, I could trial different methods of consuming nicotine, i.e., patches, e-cigarettes, and gum. I would space these trials out that far because I wouldn't want them too close together, and I wouldn't want to mess with my body by consuming too much nicotine anyway. As a protection against my subjective experience being useless, I would read more of Gwern's reviews of nootropics, and perhaps consult online nootropics communities on their methods for noting how they feel. There could be trials I could run, or ways of taking notes, which would make the information gleaned in that regard more useful.

Comment by eggman on Open Thread for February 18-24 2014 · 2014-02-20T20:24:50.755Z · LW · GW

I believe I have a better-than-average ability to scrutinize my internal states. Like, when consuming alcohol, marijuana, or caffeine, more often than not I notice my subjective experience changing, i.e., I feel drunk/high/whatever. However, this might only be the case when I've taken greater, or stronger, doses of a given drug. If that is the case, it wouldn't be helpful. Perhaps if I have a history of needing to consume more of a drug to notice its effects, that could be net harmful, because I could be tempted to consume enough nicotine to notice its effects. By the time I've done that, I could already have consumed too much. I imagine that would be bad for me, because I might be more likely to develop, or experience, dependency effects, nausea, and other ill side effects of nicotine use at higher doses.

I'll keep that in mind if I elect to trial the use of some form of nicotine.

Comment by eggman on Managing your time spent learning · 2014-02-20T20:13:37.773Z · LW · GW

Well, I'm currently an undergraduate, so I haven't started a career yet. For myself, personally, I would like to create a website in the future. Also, web design is useful in a wide variety of contexts. For coding, I'm not set on a career trajectory yet, but I may want to transition into one which would require a heavier use of information technology.

I've read on Less Wrong that learning how to code, or program, is a worthwhile skill to learn, even if one is not going on to become a computer programmer.

I don't know statistics very well, but I would like to participate in, or follow, scientific, and technical, discourse in the world more thoroughly, so learning statistics might be good for my overall efficacy as a person, rather than just what I do at the workplace.

Comment by eggman on Open Thread for February 18-24 2014 · 2014-02-20T20:08:52.378Z · LW · GW

Noted. I will update my voting behavior on this basis. Thanks.