Meetup : Moscow meet up 2014-08-02T14:12:18.488Z


Comment by marchdown on 2017 LessWrong Survey · 2017-09-23T02:38:49.994Z · LW · GW

Survey taken.

Comment by marchdown on The Strangest Thing An AI Could Tell You · 2015-07-17T16:34:15.536Z · LW · GW

So basically the whole universe is a Boltzmann brain.

Comment by marchdown on Open thread, Nov. 3 - Nov. 9, 2014 · 2014-11-05T01:07:52.512Z · LW · GW

> Please, no. The world already has a sickening amount of steampunk.

Does it now? Care to recommend some?

Comment by marchdown on Open thread, Nov. 3 - Nov. 9, 2014 · 2014-11-05T01:07:18.732Z · LW · GW

It may be hard to rob you, but easy to shoot you down.

Comment by marchdown on Open thread, Nov. 3 - Nov. 9, 2014 · 2014-11-05T01:06:06.683Z · LW · GW

It would be fun to have corporations build space stations, ostensibly for technological benefits, but without disclosing details, so that your question would remain unanswered within the story.

Comment by marchdown on Open thread, Nov. 3 - Nov. 9, 2014 · 2014-11-05T01:03:51.072Z · LW · GW

I would also mention Deborah Anapol's "Polyamory: The New Love for the 21st Century". I think of it as a survey of polyamorous practices, struggles, and communities. It was crucial for me in getting a sense of normality. I haven't read Taormino.

Comment by marchdown on Open thread, Nov. 3 - Nov. 9, 2014 · 2014-11-05T00:57:16.787Z · LW · GW

This may be a case of regression to the mean, with the thing whose parameters regress being conscious and not caring about those particular parameters.

Comment by marchdown on 2014 Less Wrong Census/Survey · 2014-11-04T23:31:51.119Z · LW · GW

And... done. I would like to point out that the X-Risk question may be confusing when skimming. P(X-Risk) looks as if it were asking for the probability of the catastrophe coming to pass, but the explanation spells out that you should enter the probability of humanity successfully avoiding it.

Comment by marchdown on 2014 Less Wrong Census/Survey · 2014-11-04T23:28:39.709Z · LW · GW

It would have been nice insurance against possible future PR shitstorms. Was that your primary reason for suggesting it?

Comment by marchdown on 2014 Less Wrong Census/Survey · 2014-11-04T23:26:40.228Z · LW · GW

This is weird. I hadn't noticed that until you pointed it out, but I believe my masculinity score was only a little lower than all the benchmarks, rather than extremely low, only because I considered how my partner would gauge the BSRI questions. They seem to push me towards expressing masculine traits. Isn't it interesting that a sex-role inventory makes no allowance for situations priming different sex roles in people?

Comment by marchdown on Meetup : Saint Petersburg meetup - "the lonely one" · 2014-10-31T12:28:51.831Z · LW · GW

Ooh, missed the announcement. I won't make it in time now. Anyhow, I'll keep it in mind to get in touch with you next time I'm there. Have a blast!

Comment by marchdown on Meetup : Moscow meetup: Quantum physics is fun · 2014-10-25T13:15:03.190Z · LW · GW

> WHEN: 26 October 2014 03:00:00PM (+0400)

> We start at 14:00 and stay until at least 19-20.

So which one is it?

Comment by marchdown on Why humans suck: Ratings of personality conditioned on looks, profile, and reported match · 2014-08-10T23:43:08.022Z · LW · GW

It looks as if it might be worth manually disabling pictures (e.g. with HTTP Switchboard) and browsing profiles seeing only text.

Comment by marchdown on Learning languages efficiently. · 2014-03-03T00:02:27.011Z · LW · GW

> Immersion is not an option for me currently.

Whatever you do, immerse yourself as much as possible within your circumstances. This most likely means having the radio blaring in Hebrew whenever it's not actively obstructing whatever you're trying to do; plastering your living space with labels; adding Hebrew blogs to your blogroll; and seeking social activities outside your comfort zone, such as volunteering at a retirement home with lonely seniors or attending insipid school plays at your local center for Hebrew language and culture.

Comment by marchdown on Self-Study Questions Thread · 2014-02-13T12:20:01.148Z · LW · GW

Yes, I have had similar issues, and I can't say I managed to overcome them successfully, but I'm committed to continuing, and I'm in the middle of reorganizing my life so that I can direct more resources there.

I'm not offering any specific advice for now beyond the obvious, but I'm responding here to start the dialogue and to nudge us both in the right direction.

So yeah, I wish for you to untangle your motivations and follow through.

Comment by marchdown on Calorie Restriction: My Theory and Practice · 2014-02-12T07:06:38.352Z · LW · GW

In what way were their hormone levels affected? I can't even begin to guess.

Comment by marchdown on How can I spend money to improve my life? · 2014-02-10T08:08:03.242Z · LW · GW

Aren't psychostimulants, such as amphetamine and its derivatives or modafinil¹, legitimate means of augmenting mood, cognition, and productivity? Or are they seriously dangerous? Can you point to some relevant research?

¹ Can modafinil be lumped together with other psychostimulants?

Comment by marchdown on Tricky Bets and Truth-Tracking Fields · 2014-02-10T07:54:22.479Z · LW · GW

It's as if you were participating in a prediction market such as PredictionBook or the Good Judgment Project.

Comment by marchdown on Group Rationality Diary, May 16-31 · 2013-06-19T04:24:31.574Z · LW · GW

It seems you could capture the benefits both of having the material online and searchable and of retaining interested readers with regular updates: publish everything at once, then regularly post your analytical readings of Eliezer's material. If you already have the audience, those could naturally grow into discussion posts.

Comment by marchdown on Group Rationality Diary, May 16-31 · 2013-06-19T04:19:13.517Z · LW · GW

May I suggest applying your CSS skills to styling Anki cards? A personal anecdote: I have a lot of decks for various languages, and having the cards styled differently helps with switching context. It's also nice to have them look pleasant.

Comment by marchdown on Group Rationality Diary, June 1-30 · 2013-06-19T04:09:31.125Z · LW · GW

Do you have a citation for 15-30 minutes being a reasonable time for blood glucose levels to change in response to eating a banana? I remember reading that it takes significantly longer than that, up to 150 minutes, but I can't find a proper source at the moment. The closest I can find is The 4-Hour Body, and I don't know how trustworthy it is. It also says that fructose may lower blood glucose levels.

Comment by marchdown on Group Rationality Diary, June 1-30 · 2013-06-19T04:02:36.753Z · LW · GW

I've tried using HabitRPG before, but didn't stick with it. I've started using Lift, working out every day. Somehow the expectation of checking off today's habits keeps me going through the motions, and the automated timer reduces the friction of shifting into the mental state appropriate for exercising.

Comment by marchdown on Post ridiculous munchkin ideas! · 2013-05-19T04:29:07.013Z · LW · GW

There's even a special page on the Amazon website for the express purpose of cancelling ebook purchases within the last 7 days:

Comment by marchdown on Post ridiculous munchkin ideas! · 2013-05-17T19:49:40.687Z · LW · GW

Could you name some actual writers' IRC channels? I've never seen any.

Comment by marchdown on Real-world examples of money-pumping? · 2013-04-28T01:23:08.152Z · LW · GW

Sounds like a case of extreme discounting or a very short planning horizon.

Comment by marchdown on Imitation is the Sincerest Form of Argument · 2013-02-19T00:06:05.306Z · LW · GW

> On Will Newsome's IRC channel someone mentioned the idea that you could totally automate the ITT into a mass-league game with elo ratings and everything (assuming there was some way to verify true beliefs at the beginning.) Make it happen, somebody.

Ooh, this would be so great!

Comment by marchdown on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2013-01-28T22:30:18.934Z · LW · GW

What if Bludgers, being modelled after naive physics, have an inherent knocking-people-out property? Wouldn't that be in line with how canon is handled in HPMOR?

Comment by marchdown on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-25T23:02:22.296Z · LW · GW

If we're taking seriously the possibility of basilisks being real and harmful, isn't your invitation really dangerous? After all, what if Axel has thought of an entirely new cognitive hazard, different from everything you may already be familiar with? What if you succumb to it? I'm not saying it's probable, only that it should warrant the same precautions as the original basilisk debacle, which led to enacting censorship.

Comment by marchdown on Want to help me test my Anki deck creation skills? · 2013-01-25T20:17:50.458Z · LW · GW

Aye. If you need another nudge, I'd like to say that it's a great idea, and yes, I would help you test resulting decks.

Comment by marchdown on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-24T10:03:49.499Z · LW · GW

I'm not so sure that an AI suggesting murder is clear evidence of its being unfriendly. After all, it could have a good reason to believe that if it doesn't stop a certain researcher ASAP and at all costs, then humanity is doomed. One way around that is to assign infinite positive value to human life, but can you really expect CEV to be handicapped in such a manner?

Comment by marchdown on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-24T08:24:56.133Z · LW · GW

It may be benevolent and cooperative in its present state even if it believes FAI to be provably impossible.

Comment by marchdown on AI cooperation is already studied in academia as "program equilibrium" · 2012-07-31T00:16:19.842Z · LW · GW

That's what I figured, but I hoped I was wrong, and there's still a super-secret beer-lovers' club which opens if you say "iftahh ya simsim" thrice or something. Assuming you would let me in on a secret, of course.

Comment by marchdown on AI cooperation is already studied in academia as "program equilibrium" · 2012-07-30T21:23:54.990Z · LW · GW

Wait, I thought that was shut down back in the spring. What am I missing?

Comment by marchdown on Irrationality Game II · 2012-07-07T01:54:01.748Z · LW · GW

This is an interesting way to look at things. I would assert a higher probability, so I'm voting up. Even a slight tweaking (x+ε, m-ε) is enough. I'm imagining a continuous family of mappings starting with identity. These would preserve the structures we already perceive while accentuating certain features.

Comment by marchdown on Irrationality Game II · 2012-07-04T09:33:48.835Z · LW · GW

Fairly certain (85-98%).

Comment by marchdown on Irrationality Game II · 2012-07-04T09:32:18.626Z · LW · GW

I was confused about getting several upvotes quickly but without prompting any debate. I began wondering whether my proposition pattern-matched something less interesting to discuss.

Comment by marchdown on Irrationality Game II · 2012-07-04T03:58:02.957Z · LW · GW

What a fun game! I notice that I'm somewhat confused, too. I see a couple of different approaches; maybe some of the upvoters would step in and explain themselves.

Comment by marchdown on Irrationality Game II · 2012-07-04T02:36:32.685Z · LW · GW

Irrationality game

Moral intuitions are very simple. A general idea of what it means for somebody to be human is enough to severely restrict the variety of moral intuitions you would expect it to be possible for them to have. Thus, conditioned on Adam's humanity, you would need very little additional information to get a good idea of Adam's morals, while Bob the alien would need to explain his basic preferences at length for you to model his moral judgements accurately. It follows that the tricky part of explaining moral intuitions to a machine is explaining humans, and it's not possible to cheat by formalizing morality separately.

Comment by marchdown on Irrationality Game II · 2012-07-04T00:05:09.520Z · LW · GW

Dark arts are very toxic, in the sense that you naturally and necessarily use any and all of your relevant beliefs to construct self-serving arguments on most occasions. Moreover, once you happen to use some rationality technique successfully in a self-serving manner, you become more prone to using it that way on future occasions. Thus, once you catch other people using dark arts and understand what's going on, you are more likely to use the same tricks yourself. >80% sure (I don't have an intuitive feeling for amounts of evidence, but here I would need at least 6 dB of evidence to become uncertain).
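The decibel convention used here is the standard one (10 times the base-10 logarithm of the likelihood ratio), so 6 dB corresponds to a likelihood ratio of roughly 4:1, which is just enough to pull an 80% belief (4:1 odds) back to about even odds. A minimal sketch of that arithmetic, purely for illustration:

```python
import math

def evidence_db(likelihood_ratio):
    """Strength of a piece of evidence in decibels: 10 * log10(likelihood ratio)."""
    return 10 * math.log10(likelihood_ratio)

def update_odds(prior_odds, db):
    """Bayesian update in odds form: multiply the prior odds by 10**(db / 10)."""
    return prior_odds * 10 ** (db / 10)

prior_odds = 0.80 / 0.20                      # 80% confidence expressed as 4:1 odds
posterior_odds = update_odds(prior_odds, -6)  # 6 dB of counter-evidence
posterior_prob = posterior_odds / (1 + posterior_odds)
# posterior_odds is about 1.005, i.e. roughly 1:1 odds, so back to "uncertain"
```

This is why 6 dB is a natural threshold for "becoming uncertain" starting from >80% confidence: the counter-evidence almost exactly cancels the prior odds.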

Comment by marchdown on Learn Power Searching with Google · 2012-07-02T23:31:33.836Z · LW · GW


Comment by marchdown on Fallacies as weak Bayesian evidence · 2012-03-19T00:46:41.217Z · LW · GW

This is clear, entertaining and to the point. Thank you.

A nitpick:

> So a strong slippery slope argument is one where both the utility of the outcome, and the outcome's probability is high

You may have meant "disutility".

Comment by marchdown on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-15T00:22:48.509Z · LW · GW

They don't need many Aurors; it's just that Aurors come in trios.

Comment by marchdown on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-14T21:18:23.646Z · LW · GW

Draco doesn't even need to know about this.

Comment by marchdown on Help! Name suggestions needed for Rationality-Inst! · 2012-02-07T21:02:42.471Z · LW · GW

A good first step in optimizing the world according to your wishes is noticing and acknowledging that you've got a problem.

With that in mind, why should the rationalist community frame its core activities — the development of epistemic rationality and the acquisition of instrumental rationality, plus public advocacy of sanity — simply as another fun game to engage in, with the added benefit of warm fuzzies and making oneself feel smart?

Wouldn't it be better to provoke a question, or, better yet, an acknowledgement — yes, I am a (neurotypical) human, I am fallible (irrational), and I wish I could become less wrong.

That's why I'm proposing to have /ir/rationality in the name. As for the name itself, I am at a loss. I don't see the benefit of adhering to some common naming convention as substantial. How about "We, the Irrational Humans"? Humans, because I don't envision any AGIs, swarms, extraterrestrials, chimps, squids, or dolphins joining us any time soon.

Comment by marchdown on The Singularity Institute needs remote researchers (writing skill not required) · 2012-02-06T06:00:33.022Z · LW · GW

Email sent.

This is a very nice way to close the feedback loop between the practice of research and the sort of theory preached here.

Comment by marchdown on The problem with too many rational memes · 2012-01-19T18:14:27.615Z · LW · GW

This is all well and good, but imagine that, instead of living in a world where people generally don't communicate optimally and tend to irrationally cling to their memes, we lived in a world of rational discourse, where truths are allowed to bubble up naturally to the surface and manifest as similar conclusions drawn from disparate experiences.

In this hypothetical world you would benefit from arguing with a crackpot — you would supply xem with the evidence xe overlooked (because from within xyr model it felt irrelevant, so xe didn't pursue it; that's how I imagine one could end up with crackpot beliefs in a rational world), and xyr non-obvious truth would come up as a reason for xyr weird world-view. In that situation the marginal benefit of engagement is high, because behind most crackpot theories there would be an extremely rare, and thus valuable, experience (= a piece of evidence about the nature of your common world), and the marginal cost of engagement is diminished, because your effort is expended on adjusting both your and xyr maps, not on defeating their cognitive defenses.

With me so far? It gets better. There's no hard and fast boundary between our world and the one painted above. And there are different kinds of crackpots. I'm pretty sure there are many people with beliefs that you have good enough reasons to dismiss, yet which make total sense to somebody with their experiences. And many of them can be argued with. They may be genuinely interested in finding the truth, or winning at life, or hearing out contrarian opinions. They may not be shunned by society enough to develop thick defenses. They may be smart and rational (as far as humans go, which is not very far).

So finding the right kind of crackpots becomes a lucrative problem — a source of valuable insights and debating practice.

Weakly related: and

Comment by marchdown on The problem with too many rational memes · 2012-01-19T13:14:42.684Z · LW · GW

> An innocent person is a lot more receptive than someone who has heard the retarded version of an idea. To paraphrase Schopenhauer, it is not weakness of the cognitive faculties that leads people astray, it is preconception, prejudice.

How do we know that the situation with various crackpot ideas is any different? We don't actually go and spend weeks seeking out and dissecting the most sane version of every conspiracy we've caught wind of. How can we be so certain that if we did that we wouldn't find some non-obvious truths?

Comment by marchdown on The problem with too many rational memes · 2012-01-19T13:06:29.815Z · LW · GW

What was the subject of their argument?

Comment by marchdown on New SI publications design · 2012-01-15T12:08:39.867Z · LW · GW

I'm familiar with LaTeX and willing to help. I'd love to discuss some of the papers in more detail, too.

Edit: email sent.

Comment by marchdown on Welcome to Less Wrong! (2012) · 2012-01-06T02:17:01.135Z · LW · GW

That study sounds interesting; could you post a link if you happen to find it?