Interesting arguments going on on the e/acc Twitter side of this debate: https://x.com/khoomeik/status/1799966607583899734
Awesome, congratulations on the start of your networking journey!
Even though it can be really disheartening, failure is an inevitable part of the journey. Remember the Edison quote: "I have not failed. I've just found 10,000 ways that won't work."
Yep, I'm currently finding the balance between adding enough examples to posts and being sufficiently un-perfectionistic that I post at all.
My current main criterion is something like "Do these people make me feel good, empowered, and give me a sense of community?" I expect that to change over time.
If a simple integer doesn't work for you, maybe split the two columns into several different categories? If you want to go fancy, weighted factor modelling might be a good tool for that.
Feel free to adapt it however it makes sense for you. :)
It's all about the difference: If they are the same, leave everything as is. If "want" is higher than "is", make some intentional decisions to invest into that relationship more. If "want" is lower than "is", ask yourself wtf is going on there and how to change it.
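To make the weighted-factor idea concrete, here's a minimal Python sketch of how the "is"/"want" comparison could work. The factor names, weights, and ratings are hypothetical placeholders, not something from the original spreadsheet:

```python
# Hypothetical factors and weights for rating a relationship; adjust to taste.
WEIGHTS = {"feel_good": 0.4, "empowered": 0.3, "community": 0.3}

def score(ratings):
    """Combine per-factor ratings (0-10) into one weighted score."""
    return sum(WEIGHTS[factor] * rating for factor, rating in ratings.items())

# "is": how the relationship currently looks; "want": how you'd like it to look.
is_score = score({"feel_good": 8, "empowered": 5, "community": 6})
want_score = score({"feel_good": 9, "empowered": 8, "community": 9})

delta = want_score - is_score
if delta > 0:
    print(f"'want' exceeds 'is' by {delta:.1f}: invest into this relationship more.")
elif delta < 0:
    print(f"'is' exceeds 'want' by {-delta:.1f}: ask yourself what's going on there.")
else:
    print("Same score: leave everything as is.")
```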
I actually told the most hippie human on my list (spending months on rainbow gatherings-level hippie) that she's on it. To my surprise, she felt unambiguously flattered. Seems like the people who know me trust that I can be intentional without being objectifying. :)
Yea, but I don't remember claiming anywhere that I can cure anybody's depression, and don't really intend to ever do that...?
I did not recommend any particular intervention in my post. I just tried to explain some part of my understanding of how new psycho- and social technologies are generated, and what conclusions I draw from that.
If you expect most if not all established therapeutic interventions to not survive the replication crisis - what would you consider sufficient evidence for using or suggesting a certain intervention?
For example, a friend of mine felt blue today and I sent them a video of an animated dancing seal without extensively googling for meta-analyses on the effect of cute seal videos on peoples' moods beforehand. Would you say I had sufficient evidence to assume that doing so is better than not doing so? Or did I commit epistemic sin in making that decision? This is an honest question, because I don't yet get your point.
Agreed. But sitting around and sulking is a bummer, so I'd rather keep learning, exploring, and sometimes finding things that work for me.
So, in other words - I am wrong, hippies are wrong, and most if not all therapies that look so far like they are backed by evidence are likely wrong, too.
Who or what do you suggest we turn to for fixing our stuff?
Yep, added a reference to survivorship bias to the text. Thanks.
Well, there goes that bit of overconfidence. Thanks.
Agreed - I added the 7th point to the list now to account for this.
Response on the EA Forum.
Thanks for adding clarity! What does "support" mean, in this context? What are the key factors that prevent the probabilities from being >90%?
If the key bottleneck is someone to spearhead this as a full-time position and you'd willingly redirect existing capacity to advise/support them, I might be able to help find someone as well.
It's not the same thing; the link was broken because Slack links expire after a month. Fixed for now.
Flagged the broken link to the team. I found this, which may or may not be the same project: https://www.safeailondon.org/
I'm not in London, but aisafety.community (afaik the most comprehensive and far too little-known resource on AI safety communities) suggests the London AI Safety Hub. Some remote alignment communities are listed on aisafety.community as well. You might want to consider them as fallback options, though you probably already know most if not all of them.
Let me know if that's at all helpful.
That's one of the suggestions from the CanAIries Winter Getaway where I felt least qualified to pass judgment. I'm working on finding out about their deeper models so that I (or they) can get back to you.
I imagine that anyone who is in a good position to work on this has existing familial/other ties to the countries in question, though, and already knows where to start.
Yep, the field is sort of underfunded, especially after the FTX crash. That's why I suggested grantwriting as a potential career path.
In general, for newcomers to the field, I very strongly recommend booking a career coaching call with AI Safety Support. They have a policy of not turning anyone down, and quite a bit of experience in funneling newcomers at any stage of their career into the field. https://80000hours.org/ is also a worthwhile address, though they can't make the time to talk with everyone.
Hah, this makes a lot of sense. Thanks!
An addition to that: If we look through the goggles of Sara Ness' Relating Languages, the rationalist style of conversation is at the far end of the internal-focusing dialects Debater/Chronicler/Scientist. In my experience, more gooey communities have way more Banterer/Bard/Spaceholder-heavy types of interactions, which focus more on peoples' needs in the situation than on forming and communicating true beliefs. People don't necessarily know which dialects they speak themselves, because their own way of interacting just feels normal to them, and everyone else's weird. It's hard to learn to speak in dialects that are not your natural default. For example, I didn't even notice myself slipping into Bard/Banterer while writing this post, but in hindsight it's fairly obvious how it diverges from the LessWrong language game.
I think the LW-way is ideal for its purpose, but I'm realizing that there's a whole lot of tacit knowledge and implicit norms involved in understanding and doing it. This strong selection for a particular style of communication may be responsible for a significant chunk of the difficulty I perceive in interfacing between the rationalist and other memeplexes, in both directions: for the rationalist community learning from other memeplexes, and for useful memes getting from rationalist circles into the outside world.
Thanks for the input!
It wasn't my intention to reinforce this dichotomy. Instead, I hoped to encourage people to name things that break the rationalist community's Overton window, so that others read them and think "Whoopsie, things like that can actually be said here?!" I suspect that way more people here picked up useful heuristics and models in their pre-rationalist days than realize it, because they overupdate on the way of the Sequences being the One True Way. I've learned in other communities that breaking taboos with questions like these is a useful means for breaking conformity pressure. My hope was that eventually, this helps a little to reduce the imbalance towards prickliness I perceive in the rationalist community, and with that this dichotomy.
Apparently, I haven't yet figured out how to express and enact intentions like these in a way that fits the rationalist language game.
This is a rallying flag: Respond/message me if you can imagine working on the Superconnecting project, especially (but not exclusively) if you are based in Europe.
The larger part of Ithaka Berlin's expected impact comes from fulfilling this function. However, I'd also be super keen to help build non-co-living-versions of the Superconnecting project, whether as co-founder, advisor, or the person who connected the people who end up building the thing.
This is one of the points I'm less sure about because often enough, the rest of the message will implicitly answer it. In addition, what to include is highly dependent on context and who you are writing to.
Two very general recommendations:
- Something that helps the other person gauge how large the inferential distance between you two is, so that communication can be as quick as possible and as thorough as necessary.
- Something that helps them gauge your level of seniority. It's unfortunate but true that the time of people a couple of levels of seniority above your own is extremely valuable. For example, it would hardly make sense for a Nick Bostrom to make time for helping a bright-but-not-Einstein-level high school student he never met decide which minor to choose in university. If people can't gauge your level of seniority, they might misjudge whether they are the right person for you to talk to, and you might end up in a conversation that is extremely awkward and a waste of time for both sides.
Some examples:
- "Hi! I'm xyz, Ops lead at Linear."
- "Hi! I'm a computer science undergrad at Atlantis University and a long-time lurker on LessTrue."
- ...
Thanks for your comments! I corrected point 7 now.
I hereby join in, too.
Thanks, I didn't take into account that people might read this as an encouragement to randomly message people on LessWrong. And thanks for giving me more clarity about the implicit norms here.
To clarify: The person likely found my email address on my homepage, where it is listed exactly because I'm generally happy to be contacted by strangers.
Highly depends on your role and personality, I guess.
As a community builder and someone pretty high on extraversion, I'm generally happy to add more people to my loose network. If there's just a bit of overlap between my and a stranger's interests, I expect there to be a far higher upside than downside to us knowing that the other exists and what they work on. Of course, I may change my opinion on this over time as my time becomes more valuable and my loose network larger.
Any generalizable rules you can think of about whom better not to cold message at all?
Paraphrasing is particularly useful for finding out whether you understood the other person correctly. For example, if a person says "I'm a cellular biologist," you could paraphrase that as "You currently work in cellular biology, right?"