Posts

Announcing Squiggle Hub 2023-08-05T01:00:17.739Z
Kocherga's leaflet 2019-02-19T12:06:28.257Z
Rationalist Community Hub in Moscow: 3 Years Retrospective 2018-08-25T00:28:36.796Z
Meetup : Moscow: regular meetups 2017-03-08T18:37:07.872Z
Meetup : Moscow: unconference 2017-02-15T18:02:32.312Z
Meetup : Moscow: unconference 2017-01-25T18:55:09.038Z
Meetup : Moscow: TDT, paranoid calibration, prediction party 2017-01-05T19:22:10.587Z
Meetup : Moscow: pedagogy goals, willpower research, community growth 2016-12-08T21:05:10.457Z
Meetup : Arrow theorem, LW goals and plans, rational review 2016-11-17T19:39:28.043Z
Meetup : Moscow: rational review, status quo bias, interpersonal closeness 2016-10-27T13:58:31.795Z
Meetup : Moscow: rational review, bias busters, Kolmogorov and Jaynes probability 2016-10-05T19:32:22.651Z
Meetup : Moscow: keynote for the first Kocherga year, text analysis, rationality applications discussion 2016-09-15T16:49:53.639Z
Meetup : Moscow: rationalist culture, applied consequentialism, Stanovich 2016-08-24T20:02:41.990Z
Meetup : Moscow: Non-Omniscience, On rationalist communication, ACH training, games 2016-08-06T00:34:59.982Z
Meetup : Moscow: Ernst Mach philosophy, Analysis of competing hypotheses, Paranoid Zendo and other games 2016-07-14T18:41:14.430Z
Meetup : Moscow: Words of estimative probability, transparency illusion, belief investigation, rational games 2016-06-24T00:23:34.331Z
Meetup : Moscow: Utilitarianism, Good Judgement Project, Order team meeting, rational games 2016-06-02T14:52:49.625Z
Meetup : Moscow: Fermi calculations, VNM-rationality, paranoid calibration game 2016-05-12T22:45:24.908Z
Meetup : Moscow: paranoid calibration, EA criticism, deliberate practice dojo 2016-04-20T16:32:10.911Z
Meetup : Moscow meetup: inspection paradox, common dual process theory fallacies, preparing summer outreach programs 2016-03-30T18:13:56.142Z
Meetup : Moscow meetup: Aristotle project, reference class practice, order team meeting 2016-03-10T15:36:33.000Z
Meetup : Moscow meetup: Hamming circle, Fallacymania, Tower of Chaos 2016-02-18T12:06:24.651Z
Meetup : Moscow meetup: guest talk on why LW community is lazy and cowardly; double crux game 2016-01-08T22:02:19.896Z
Meetup : Moscow meetup: discussion on community evolution; power and culture talk; rational games 2015-11-19T18:22:38.442Z
Meetup : Moscow meetup: science research issues, global risks, fallacymania 2015-10-29T14:37:44.852Z
Meetup : Regular Moscow meetup: copyright debates, culture keynote, working on beliefs, attribution error 2015-08-26T19:15:13.816Z
Meetup : Regular Moscow meetup: effective altruism, debates, hypothesis formulation 2015-07-29T22:21:25.180Z
Meetup : Regular Moscow meetup: Löb's theorem, ways of mind improvement, DEL group, Zendo 2015-07-15T16:32:48.890Z
Meetup : Moscow: existential risks, case-method, epistemic logic 2015-06-10T18:48:30.087Z
Meetup : Moscow: epistemology, framing, new project announcement 2015-05-29T11:58:48.056Z
Meetup : Moscow: game theory, test for solving mind-body problem, brainstorm on rationality exercises 2015-05-13T19:54:12.391Z
Meetup : Moscow: meta-model, epistemology, tabooing, etc. 2015-04-23T12:08:36.043Z
Meetup : Moscow meetup: communication practice and three short talks 2015-03-26T14:27:55.722Z
Meetup : Moscow meetup: phenomenological analysis, methods of creativity, lightning talks 2015-02-26T12:44:50.656Z
Meetup : Moscow meetup: The Toulmin model of argument, belief investigation exercise, psycholinguistics 2015-02-12T11:58:16.324Z
Meetup : Regular Moscow meetup: self-esteem in CBT, psycholinguistics, philosophy of mind 2015-01-29T11:20:41.554Z
Meetup : Moscow Meetup: biology, CBT and something mysterious 2014-12-18T15:48:39.169Z
Meetup : Regular Moscow Meetup 2014-08-13T18:59:40.889Z
Meetup : Moscow, Now 2 Sigma More Awesome 2013-11-05T10:33:46.570Z

Comments

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2024-12-18T23:12:46.844Z · LW · GW

Nope.

We closed the physical space during COVID, then continued online in various forms for two years; then, after the Ukraine war started, I left the country, and the project has been mostly dead since. A few months ago we finally shut down all remaining chats and archived the website.

Sometimes I think that it'd be nice to do a final write-up/postmortem, but I'm not sure it'll actually happen.

Comment by berekuk on March Coronavirus Open Thread · 2020-03-11T01:10:01.421Z · LW · GW

Okay, SARS-CoV-2 is pretty different from SARS-2003 ("~76% amino acid identity in the spike protein"), so this might be the reason it wouldn't work. OTOH, I don't know how different HCoV-OC43 is from either SARS strain.

Comment by berekuk on March Coronavirus Open Thread · 2020-03-10T22:32:36.821Z · LW · GW

Two facts:

  1. HCoV-OC43 (one of the human coronaviruses causing the common cold) can generate cross-reactive antibodies against SARS.
  2. Immunity to HCoV-OC43 appears to wane appreciably within one year.

Here's the paper which mentions both of these facts. (The actual paper is not important; I expect these facts to be well known to coronavirus researchers, assuming the paper itself is not terribly mistaken and I haven't misread anything.)

Even if the cross-immunity is mild, wouldn't it make sense to intentionally infect people with HCoV-OC43? The downside seems quite small compared to the number of deaths, and intuitively, "mild cross-immunity" = "less severe SARS-CoV-2 cases", which would be extremely valuable.

I notice I'm confused, since these facts should be well-known to pretty much everyone who's working on the vaccine. What's the explanation for why it's not a good idea?

Possible explanations, but I'm probably missing something:

  1. Vaccines which cause the actual illness are considered unethical. (Probably not? I don't expect humanity to be that stupid.)
  2. Mass-producing the HCoV-OC43 virus is too hard for some reason. (Possible? I don't know much about vaccine production, and I'm clueless about whether it's even possible to mass-produce and store a "live" virus; but this seems solvable through organized infection parties, etc.)
  3. Researchers or medical organizations don't want to rely on expected utility. Related hypothesis: the time and productivity lost to infecting many people with HCoV-OC43 are too valuable, and infecting everyone with HCoV-OC43 at the same time would hurt the economy too much. (I don't believe this, but I haven't really tried to estimate it. If the alternative were "wait for the real vaccine, which is just around the corner", then yes, let's wait; but if the alternative is waiting 12-18 months, it doesn't feel right.)
  4. Maybe I don't understand what "mild immunity" means, and it's not that valuable a perk to cause intentionally? (But the same paper I quoted talks about HCoV-OC43's importance for predicting future SARS-CoV-2 outbreaks.)
  5. Maybe being infected with HCoV-OC43 is too risky because getting two viruses at the same time is dangerous? Or because it would confuse the situation and complicate diagnosis of real SARS-CoV-2 cases? (Maybe... If everyone were sick with the common cold, it would help SARS-CoV-2 spread, since everyone would be sneezing and coughing. But this also seems like a question of good timing, and at least worth considering.)

So, what am I missing here?

Comment by berekuk on Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery · 2019-03-01T21:25:12.530Z · LW · GW

Yes! We have an English club each Saturday at 5 PM.

Comment by berekuk on Kocherga's leaflet · 2019-02-20T12:02:17.782Z · LW · GW

Whoa, for some reason I thought the LTF Fund wasn't relevant to us, but it looks like I was wrong. Thank you!

For context: in the last few months I applied for two CEA grants.

  1. Community Building grants (in December, outside of a funding round, so they warned me that the bar would be higher); they decided not to fund and asked me to reapply. In the current Feb 2019 round there's a $150,000 budget cap, and since there'd be a risk of Kocherga competing against the EA Russia team (which is separate from Kocherga), I decided not to reapply this time.
  2. I also applied to the EA Meta Fund, since it seemed like the closest match for what we're doing. They responded that they're not interested for now and that I should apply to the Community Building grant instead.

We could work more on improving our reputation on LW and the EA Forum (I have a few long posts in mind, e.g. on the community building strategy we've developed recently and are very hopeful about), but that's a costly strategy, and there's a lot of uncertainty about whether it would be useful (for us or for the international community).

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-29T15:01:34.843Z · LW · GW

Thanks! I wonder if there'd be legal issues, since Kocherga is not a non-profit (non-profits in Russia can be politically complicated, as I've heard). But it's definitely worth trying.

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-26T10:43:06.576Z · LW · GW

One more thing: unlike the other stuff, I feel like developing the EA movement in Russia is more talent-constrained: it could be much more active if we had one enthusiastic person with managerial skills and ~10 hours/week on their hands. I'm not sure we have such a person in our community - maybe we do, maybe we don't.
(Sometimes I consider taking on this role myself, but right now that's impossible, since I'm already juggling 3 or 4 different roles.)

OTOH, I'm also not sure how much better things would be if we had more funding and could hire such people directly. I might be significantly underestimating this course of action, since I don't have much experience yet with extending organizational capacity through hiring.

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-26T10:17:30.046Z · LW · GW

We tried to start a local EA movement early on and had a few meetups in 2016. Introductory talks got stale quite quickly, so we put together a core EA team, with a Trello board and everything.

It wasn't very clear what we were supposed to do, though:

  • We wanted to translate the EA Handbook (and translated some parts of it), but there were some arguments against this (similar to this post, which was released later).
  • Those of us who believed that AI Safety is the one true cause mostly wanted to study math/CS, discuss issues in utilitarianism, and eventually relocate to work for MIRI or something.
  • Others argued that you don't need to be a hardcore rationalist to do meaningful work, and that maybe we should focus on local causes, or at least not discourage that.
  • Earning to give (which I feel had more emphasis in EA 3 years ago than it does now) isn't very appealing in Russia, since the average income here is much lower than in the US.

So, we had ~5-6 people on the team and were doing fine for a while, but eventually it all fizzled out due to lack of time, shared vision, and organizational capacity.

We've tried several approaches to reboot it since then. We haven't succeeded yet, but we'll try again.

---

Currently, the EA movement in Russia is mostly promoted by Alexey Ivanov from Saint Petersburg. He takes care of online resources and organizes introductory EA talks and AI Safety meetups. He's doing great work.

Another guy is working on a cool project to promote EA/rationality among talented students, but that project is still in its early stages, and I feel like it's not my story to tell.

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-26T09:34:16.022Z · LW · GW

Thank you!

I've applied to CFAR's workshop in Prague myself (and asked for financial aid, of course); they haven't contacted me yet.

I'll explain about EA in reply to this comment.

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-25T23:09:09.827Z · LW · GW

Thanks! I'm planning to write a separate post with more details on our community, activities, and accumulated experience; there's much more I'd like to share that didn't fit in this one. It might take a few weeks, though, since my English writing is quite sluggish.

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-25T22:43:19.422Z · LW · GW

Thank you!

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-25T22:43:05.020Z · LW · GW

Yes, it'd be interesting to compare our experiences.

If you want to chat in a lower-latency channel, I'm @berekuk on the Lesswrongers Slack (my preferred medium for chatting), or https://www.facebook.com/berekuk if you dislike Slack for some reason.

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-25T22:37:23.145Z · LW · GW

Thank you!

Comment by berekuk on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-25T11:10:10.130Z · LW · GW

Well, we've actually had various versions of a "discuss and challenge your beliefs" exercise for a long time. (Previous names: "Belief Investigation" and "Structuring".)

Here's how it goes: split participants into pairs, ask one person in each pair to declare a belief they want to investigate (compare: reddit.com/r/changemyview), and then let them discuss it with their partner for a predetermined period of time.

We used this kind of activity at LW meetups a lot, because it's easy to organize, can give you valuable updates, and can be repeated a pretty much unlimited number of times without losing value.

Then last year, two people from the community who were interested in Street Epistemology proposed running SE as a regular meetup, expanding on these discussions a lot more and turning it into an actual craft. You can find plenty of information about SE on its website (check out The Complete SE Guide); basically, it's a set of best practices for investigating a belief in a dialogue.

SE seems very aligned with LW values. They talk a lot about "doxastic openness" (being open to revising your own beliefs), probabilities ("On a scale from zero to one hundred, how confident are you that your belief is true?"), etc. People at Kocherga meetups also often incorporate the Double Crux technique into these discussions.

SE's traditional discussion topics usually include religion and pseudo-science (although you can take anything as a topic), and SE refers to logical fallacies more often than LW does, so it's conceptually related to the classical skeptics and critical-thinking communities. This means SE is often more approachable than LW and the Sequences, and SE meetups are currently our largest events, consistently drawing ~20 visitors every week.

Comment by berekuk on 2017 LessWrong Survey · 2017-10-05T13:11:08.264Z · LW · GW

So, what happened?

This post is hidden from Main, and the survey "is expired and no longer available", even though the post mentions that it should run for 10 more days. I wanted to share it with the Russian LW community; will it be back in some form later?

Comment by berekuk on Meetup Discussion · 2017-01-27T01:06:07.824Z · LW · GW

Moscow

We've expanded a lot since we opened our own rationality-aligned time club, Kocherga, in September 2015.

  • General LW meetups every 3 weeks on Sundays with talks, discussions and games
  • "Rationality for beginners" lectures every 3 weeks on Sundays
  • (the third Sunday slot is reserved for EA meetups)
  • Dojos on Fridays
  • Sequences reading group on Mondays (started two weeks ago)
  • Rationality-related games once a month
  • CFAR-style weekend workshops (we ran 4 of these in 2016)

I really should write a separate post about everything that's happened since 2013, when the last report from our group was posted.

Comment by berekuk on On the importance of Less Wrong, or another single conversational locus · 2016-12-01T01:51:18.238Z · LW · GW

For the Russian LessWrong slack chat we agreed on the following emoji semantics:

  • :+1: means "I want to see more messages like this"
  • :-1: means "I want to see fewer messages like this"
  • :plus: means "I agree with a position expressed here"
  • :minus: means "I disagree"
  • :same: means "it's the same for me" and is used for impressions, subjective experiences and preferences, but without approval connotations
  • :delta: means "I have changed my mind/updated"

We also have 25 custom :fallacy_*: emoji for pointing out fallacies, and a few other custom emoji for other low-effort, low-noise signaling.

It all works quite well, and after using it for a few months, the idea of going back to simple upvotes/downvotes feels like a significant regression.

Comment by berekuk on CFAR fundraiser far from filled; 4 days remaining · 2015-01-27T23:27:42.853Z · LW · GW

Donated $100.

Comment by berekuk on Meetup : Moscow, Now 2 Sigma More Awesome · 2013-11-05T20:35:50.355Z · LW · GW

The willpower group is a long-running project of ours, coming to an end soon. People have been working through Kelly McGonigal's "The Willpower Instinct", one chapter per week. I guess I should write it up.

I don't know much about the terminal values exercise yet. I'll let its maker know that you're interested.

We all speak Russian, so the stream isn't going to be useful to the general lesswrong.com community, unfortunately.

Comment by berekuk on Meetup : Moscow: Applied Rationality · 2013-01-10T23:29:13.556Z · LW · GW

There were 8 people at the last session. I expect to see a slight increase next time.

Topics included:

  • general introductions;
  • conjunction fallacy and planning fallacy (discussed in 2 subgroups);
  • anthropic trilemma / Permutation City argument;
  • organizational issues;
  • discussion about how to expand our local presence, including one practical case of "how to touch on rationality topics at a dentist conference".

I'm not sure how representative this list is; it was my first LW meetup.

I hope I or someone else will post more detailed reports for future sessions.