Would move into one if it was where I wanted to live, but I'm tied to Canberra for the next couple of years. If Melbourne did this I'd be really tempted.
LINK: https://hangouts.google.com/call/lxcrencg4z4rtznfbrvhjffk7ia
I like the sound of that strategy, although I must admit I'm inexperienced in actually using it.
Another death: Leonard Nimoy
I'm reminded of Ozy's posts on radical acceptance, specifically this one.
gjm's interpretation is what I was going for. Chronological age only! (Warning: link to TVTropes) I wasn't sure how to keep the same form and still have it flow nicely.
I want to grow old and not die with you.
Want children in maybe ten years, might work on me.
Some things that may or may not be obvious:
There may well be a few rationalists in your area you don't know about, who would likely turn up to a meetup if you announced one on LessWrong. I fit that description when some random people I'd never met started a regular meetup in my city. (A second, borderline case: the guy in my math tutorial who noticed I was reading Thinking, Fast and Slow turned out to read LessWrong and HPMOR, so I mentioned the local meetups to him and dragged him along.)
If there's an established group in a nearish area, such that you're not in the area but might travel out there occasionally, I'd recommend checking it out. It's not the same as being able to hang out in meatspace more frequently, but is still awesome. See: Australia, Europe.
At the Australian camp, one of our attendees found us by putting his name on the HPMOR wrap party site a couple of months beforehand, after which someone made contact with him. So people interested in HPMOR would be a good bet, if you can find any. Assuming you yourself have read some of HPMOR and like it, another angle for proselytising is pestering people you know to read it.
If you happen to also be into Effective Altruism, I'd recommend those groups as well. General EA meetups? GWWC chapter? Random visiting EA philosophers? Aside from the ones who find it through LW in the first place, people wanting to think through their altruistic actions, check if things actually work, and so on may be interested in LW topics.
Scenario 2 sounds like it would be bad for me, as well as scenario 1. I'm fairly uncomfortable talking about weight goals with most people - it feels like it would be saying I'm too fat or something negative like that, so unless they've revealed a similar problem to me I don't go there. So in that situation I'd expect to feel insulted. It's not a failure mode that I fall into any more, but where I was expecting that scenario to go is: "When you read all the posts your brain goes, 'yeah, this is too hard, I feel bad, I want chocolate.' And at the end of the month you've gained a kilo."
Might be gender-related. Women may experience that sort of discussion going in the direction of judging appearance, along with a greater negative affect from being judged unattractive; men may experience it being treated as just another health-related goal, and be less concerned with judgment if they admit failure.
It's possible that if I did make such a post and read those responses it would go better than that, but it would be anxiety-inducing for me to go about testing that. Tentative suggestion: sharing goals I feel like I "should" be achieving is bad; sharing goals I just want to achieve is variable but expected positive.
Thank you for the insight.
I just have to become the person they would do that thing for - and my self is flexible in ways most people couldn't imagine.
To all those who've read some HPMoR: I find it interesting that that's basically how Quirrell describes his and Harry's... differences from most people.
From the title of the post, I thought it would be about how not signing up gives you certainty. I've read someone who doesn't want to sign up say that dying in a normal way would give their family peace of mind.
In terms of whether it's a benefit, if it does motivate you then it's a good Dark Arts way to stop putting off signing up. However, cryonics companies changing their image to take advantage of it strikes me as a really bad idea for the reasons in Ander's post.
You'd have to want to signal very strongly to overcome the inconvenience of doing the paperwork and forking over cold hard cash. Self-signalling seems to be a plausible motivation, but I'm not sure how much benefit you'd get from being able to tell other people about it. Not to mention the opposite pressure that most people have because they have to convince their close family members to respect their wishes.
Today, I was using someone else's computer and typed "lesswrong" into the search/address bar. Apparently the next most popular search is "lesswrong cult". I started shrieking with laughter, getting a concerned reaction from the owner, which doesn't help our image much.
Evan - I am also involved in effective altruism, and am not a utilitarian. I am a consequentialist and often agree with the utilitarians in mundane situations, though.
drethelin - What would be an example of a better alternative?
Proponents of both have the same attitude of "this is a thing that people occasionally give lip service to, that we're going to follow to a more logical conclusion and actually act on".
Is your rule about distances actually a base part of your ethics, or is it a heuristic based on you not having much to do with them? I'm assuming that you take it somewhat figuratively, e.g. if you have family in another country you're still invested in what happens to them.
Do you care whether the unknown people are suffering more? If donating $X does more good than donating Y hours of your time, does that concern you?
If everyone did that, there's a non-negligible chance the human race would die out before bringing about a Singularity. I care about a reasonably nice society with nebulous traits that I value existing, so I consider that a bad outcome. But I do worry about whether it's right to have children who may well possess my far-higher-than-average (or simply higher than most people are willing to admit?) aversion to death.
(If under reflection, someone would prefer not to become immortal if they had the chance, then their preference is by far the most important consideration. So if I knew my future kids wouldn't be too fazed by their own future deaths, I'd be fine with bringing them into the world.)
Data point: Assuming there are any gendered pronouns in the examples, I find it weirder when the same one is used consistently for the entire article.
Has anyone gotten their parents into LessWrong yet? (High confidence that some have, but I haven't actually observed it.)
This reminds me of a CBT technique for reducing anxiety: when you're worried about what will happen in some situation, make a prediction, and then test it.
In-group fuzzies acquired, for science!
I've also used the "think of yourself as multiple agents" trick at least since my first read of HPMOR, and noticed some parallels. In stressful situations it takes the form of rational!Calien telling me what to do, and I identify with her and know she's probably right so I go along with it. Although if I'm under too much pressure I end up paralysed as Brienne describes, and there may be hidden negative consequences as usual.
Also, two redundant sentences:
I have a few ideas so far. The aim of these techniques is to limit the influence motivators have on our selection of altruistic projects, even if we allow or welcome them once we're onto implementing our plans.
The aim of these techniques is to limit the influence of motivators have when we are deciding which actions to take, even if we allow or welcome then once we’re onto implementing our plans.
Hi, I'm another former lurker. I will be there!
Hi LW. I'm a longtime lurker and a first-year student at ANU, studying physics and mathematics. I arrived at Less Wrong three years ago through what seems to be one of the more common routes: being a nerd (math, science, SF, reputation as weird, etc.), having fellow nerds (from a tiny US-based forum) recommend HPMOR, and following EY's link to Less Wrong.