Nope.
We closed the physical space during COVID, then continued online in various forms for two years; after the Ukraine war started I left the country, and the project has been mostly dead since. A few months ago we finally shut down all remaining chats and archived the website.
Sometimes I think that it'd be nice to do a final write-up/postmortem, but I'm not sure it'll actually happen.
Okay, SARS-CoV-2 is pretty different from SARS-2003 ("~76% amino acid identity in the spike protein"), this might be the reason it won't work. OTOH, I don't know how different HCoV-OC43 is from both SARS strains.
Two facts:
- HCoV-OC43 (one of human coronaviruses causing common cold) can generate cross-reactive antibodies against SARS.
- Immunity to HCoV-OC43 appears to wane appreciably within one year.
Here's the paper which mentions both of these facts. (The actual paper is not important, I expect these facts to be well-known to coronavirus researchers, if the paper itself is not terribly mistaken and if I haven't misread anything.)
Even if cross-immunity is mild, wouldn't it make sense to intentionally infect people with HCoV-OC43? The downside seems quite small compared to the number of deaths, and intuitively "mild cross-immunity" = "less severe SARS-CoV-2 cases", which is extremely valuable.
I notice I'm confused, since these facts should be well-known to pretty much everyone who's working on the vaccine. What's the explanation for why it's not a good idea?
Possible explanations, but I'm probably missing something:
- Vaccines which cause the actual illness are considered unethical. (Probably not? I don't expect humanity to be that stupid.)
- Mass-producing HCoV-OC43 virus is too hard for some reason. (Possible? I don't know much about vaccine production, and I'm clueless about whether it's even possible to mass-produce and store a "live" virus; but this seems solvable through organized infection parties, etc.)
- Researchers or medical organizations don't want to rely on expected utility. Related hypothesis: time and productivity wasted by infecting many people with HCoV-OC43 is too valuable, and infecting everyone with HCoV-OC43 at the same time would hurt economy too much. (I don't believe this, but I haven't really tried to estimate this. If the alternative would be "wait for the real vaccine which is just around the corner", then yes, let's wait, but if the alternative is waiting for 12-18 months, then it doesn't feel right.)
- Maybe I don't understand what "mild immunity" means and it's not that valuable of a perk to intentionally cause it? (But the same paper I quoted talks about HCoV-OC43 importance for predicting future SARS-CoV-2 outbreaks.)
- Maybe being infected with HCoV-OC43 is too risky because getting two viruses at the same time is dangerous? Or because it would confuse the situation and complicate diagnoses of the real SARS-CoV-2? (Maybe... If everyone is sick with common cold then it would help SARS-CoV-2 to spread since everyone would be sneezing and coughing. But it also seems like a question of good timing and at least worth considering.)
So, what am I missing here?
Yes! We have an English club each Saturday at 5 PM.
Whoa, for some reason I thought that LTF fund is not relevant to us, but looks like I was wrong. Thank you!
For context: in the last few months I applied for two CEA grants.
- Community Building grants (in December, outside of a funding round, so they warned me that the bar would be higher); they decided not to fund and asked me to reapply. The current Feb 2019 round has a $150,000 budget cap, and since Kocherga would risk competing against the EA Russia team (which is separate from Kocherga), I decided not to reapply this time.
- I also applied to the EA Meta Fund, since it seemed like the closest match for what we're doing. They responded that they're not interested for now and that I should apply to the Community Building grant instead.
We could work more on improving our reputation on LW and the EA Forum (I have a few long posts in mind, e.g. on the community building strategy which we've developed recently and are very hopeful about), but that's a costly strategy and there's a lot of uncertainty about whether it would be useful (for us or for the international community).
Thanks! I wonder if there'd be legal issues because Kocherga is not a non-profit (non-profits in Russia can be politically complicated, as I've heard). But it's definitely worth trying.
One more thing: unlike the other stuff, I feel like developing EA movement in Russia is more talent-constrained: it could be much more active if we had one enthusiastic person with managerial skills and ~10 hours/week on their hands. I'm not sure we have such a person in our community - maybe we do, maybe we don't.
(Sometimes I consider taking on this role myself, but right now that's impossible, since I'm juggling 3 or 4 different roles already.)
OTOH, I'm also not sure how much better things would be if we had more funding and could hire such people directly. I might significantly underestimate this course of action because I don't have much experience yet with extending organizational capacity through hiring.
We tried to start a local EA movement early on and had a few meetups in 2016. Introductory talks got stale quite quickly, so we put together a core EA team, with a Trello board and everything.
It wasn't very clear what we were supposed to do, though:
- We wanted to translate EA Handbook (and translated some parts of it), but there were some arguments against this (similar to this post which was released later).
- Those of us who believed that AI Safety is the one true cause mostly wanted to study math/CS, discuss utilitarianism issues and eventually relocate to work for MIRI or something.
- Some others argued that you don't have to be a hardcore rationalist to do meaningful work, and also that maybe we should focus on local causes, or at least not discourage that.
- Earning to give (which I feel had more emphasis in EA 3 years ago than it has now) isn't very appealing in Russia, since the average income here is much lower than in the US.
So, we had ~5-6 people on the team and were doing fine for a while, but eventually it all fizzled out due to the lack of time, shared vision and organizational capacity.
We've tried to reboot it a few times since then with different approaches. We haven't succeeded yet, but we'll try again.
---
Currently, the EA movement in Russia is mostly promoted by Alexey Ivanov from Saint Petersburg. He takes care of online resources and organizes introductory EA talks and AI Safety meetups. He's doing great work.
Another guy is working on a cool project to promote EA/rationality among talented students, but that project is still in its early stages and I feel like it's not my story to tell.
Thank you!
I've applied to CFAR's workshop in Prague myself (and asked for financial aid, of course); they haven't contacted me yet.
I'll explain about EA in reply to this comment.
Thanks! I'm planning to write a separate post with more details on our community, activities, and accumulated experience; there's much more I'd like to share that didn't fit into this one. It might take a few weeks, though, since my English writing is quite sluggish.
Thank you!
Yes, it'd be interesting to compare our experiences.
If you want to chat in a lower-latency channel, I'm @berekuk on Lesswrongers Slack (my preferred medium for chatting) or https://www.facebook.com/berekuk if you dislike Slack for some reason.
Thank you!
Well, we actually had various versions of a "discuss and challenge your beliefs" exercise for a long time. (Previous names: "Belief Investigation" and "Structuring".)
Here's how it goes: split participants into pairs, ask one person in each pair to declare any of their beliefs that they want to investigate (compare: reddit.com/r/changemyview) and then allow them to discuss it for a predetermined period of time with their partner.
We used this kind of activity at LW meetups a lot, because it's easy to organize, can give you valuable updates, and can be repeated almost indefinitely without losing value.
Then last year two people from the community who were interested in Street Epistemology proposed to run SE as a regular meetup, expanding on these discussions a lot more and turning it into an actual craft. You can find plenty of information about SE on its website (check out The Complete SE Guide), but basically it's a set of best practices for how to investigate a belief in a dialogue.
SE seems very aligned with LW values. They talk a lot about "doxastic openness" (being open to revising your own beliefs), probabilities ("On a scale from zero to one hundred, how confident are you that your belief is true?"), etc. People at Kocherga meetups also often incorporate Double Crux technique in these discussions.
SE's traditional discussion topics usually include religion and pseudo-science (although you can take anything as a topic), and SE practitioners refer to logical fallacies more often than LW does, so they are conceptually related to the classical skeptics and critical thinking communities. This makes SE often more approachable than LW and the Sequences, and SE meetups are currently our largest event, drawing ~20 visitors consistently every week.
So, what happened?
This post is hidden from Main and the survey "is expired and no longer available", even though the post mentions that it should run for 10 more days. I wanted to share it with Russian LW community, will it be back in some form later?
Moscow
We've expanded a lot since we opened our own rationality-aligned time club, Kocherga, in September 2015.
- General LW meetups every 3 weeks on Sundays with talks, discussions and games
- "Rationality for beginners" lectures every 3 weeks on Sundays
- (the third Sunday slot is reserved for EA meetups)
- Dojos on Fridays
- Sequences reading group started two weeks ago on Mondays
- Rationality-related games once a month
- CFAR-style weekend workshops (we ran 4 of these in 2016)
I really should write a separate post about all that's happened since 2013 when the last report from our group was posted.
For the Russian LessWrong slack chat we agreed on the following emoji semantics:
- :+1: means "I want to see more messages like this"
- :-1: means "I want to see fewer messages like this"
- :plus: means "I agree with a position expressed here"
- :minus: means "I disagree"
- :same: means "it's the same for me" and is used for impressions, subjective experiences and preferences, but without approval connotations
- :delta: means "I have changed my mind/updated"
We also have 25 custom :fallacy_*: emoji for pointing out fallacies, and a few other custom emoji for other low-effort, low-noise signaling.
It all works quite well and after using it for a few months the idea of going back to simple upvotes/downvotes feels like a significant regression.
Donated $100.
The willpower group is a long-running project of ours, coming to an end soon. People have been working through Kelly McGonigal's "The Willpower Instinct", one chapter per week. I guess I should write it up.
I don't know much about the terminal values exercise yet. I'll let its creator know that you're interested.
We all speak Russian, so the stream isn't going to be useful to the general lesswrong.com community, unfortunately.
There were 8 people at the last session. I expect to see a slight increase next time.
Topics included:
- general introductions;
- conjunction fallacy and planning fallacy (discussing in 2 subgroups);
- anthropic trilemma / Permutation City argument;
- organizational issues;
- discussion about how to expand our local presence, including one practical case of "how to touch on rationality topics at a dentist conference".
I'm not sure how representative this list is, it was my first LW meetup.
I hope I or someone else will post more detailed reports for future sessions.