Posts
Comments
i think i agree that this does justified harm, but maybe for some subgroups or communities the justified harm is worth the benefits of such an event? our local rationality community has developed to the point where i think people are comfortable talking about "controversial" statements with their real faces on, because the vibes are such that any attempt at cancellation instead of dialogue will be met with eyerolls and social exclusion. but like, you know, it took a pretty long time and sustained effort for us to get here. (and maybe im wrong and there are people in the group with opinions they are still afraid to voice!)
im modelling this as something kind of like authentic relating - you're hacking the group's intimacy module and ratcheting up the feeling of closeness with a shortcut. it's not going to be as good as the genuine thing, but maybe it's a lot better than what one would otherwise have access to. it's not everyone's thing; people with enough access to the genuine goods are likely to be like "wtf this is weird", and sometimes it can go catastrophically wrong if the facilitator drops the ball... but despite all of that, for some people it's a good thing to do occasionally, bc otherwise they will never get enough of that social nutrient naturally
I have a fun crowd where half the people who showed up already read the entire thing in their own time as it came out, that was helpful :p
I'm interested if you're still adding folks. I run local rationality meetups, this seems like a potentially interesting way to find readings/topics for meetups (e.g. "find me three readings with three different angles on applied rationality", "what could be some good readings to juxtapose with burdens by scott alexander", etc.)
happened to run this two days in a row, first at my regular meetup and then at a normal board games night. i was expecting it to be a pretty serious workshop exercise for some reason, but it turned out to be very fun!
in the rat meetup people were very aware of the 1/3 chance that the group was trying to deceive them. actually, at some point one person was like "i know you're trying to help me, but i'm going to be dumb and dissent anyways", and then did so.
at the board game night most people seemed to feel like it was very rude to bring collusion up as a possibility, which I was really surprised by - it was like they didn't want to think about it, and it was comparatively much easier to lead them to false conclusions.
i found that fermi estimate questions worked best for this game (allowing reasonable error margins), because they let the collective strategize on how to push in a specific direction (try to get the number too high or too low). and you also get collaborative fermi estimate practice in for free in most rounds :]
i came with a list of pre-generated questions, but we actually found that it was quite fun to tailor the question to the specific lonesome (e.g. we knew that one person was into climbing, so the question we asked was "how many climbing gyms exist in the world". we knew another person knew too many facts about space, so we asked them about ancient history instead). so instead of sending the lonesome away for 3 minutes, we decided on a question first, and then rolled the dice, and then started the timer and began strategizing.
some good questions we used:
- how many climbing gyms exist in the world?
- how many Canadians die in auto accidents every day?
- how many years did it take to build the great pyramids of giza? (this is one where we were trying hard to mislead but accidentally led the lonesome to the right answer lol)
- how many oreos are produced every year?
- how many countries are in the UN?
- when was the first Nobel prize awarded?
a question that was almost good was "what is the chubby bunny world record" - we were unable to find any conclusive information on this on the internet :{
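The "reasonable error margins" judgment above could be sketched as accepting any answer within some multiplicative factor of the true value (the factor-of-3 threshold here is my own illustration, not from the comment):

```python
def within_margin(estimate: float, truth: float, factor: float = 3.0) -> bool:
    """Return True if the estimate is within a multiplicative factor of the true value.

    Multiplicative margins suit fermi estimates, where being off by a
    constant ratio matters more than being off by an absolute amount.
    """
    if estimate <= 0 or truth <= 0:
        return False
    ratio = max(estimate, truth) / min(estimate, truth)
    return ratio <= factor

# e.g. with ~193 UN member states, a guess of 150 passes a 3x margin,
# while a guess of 10 does not
```

A multiplicative margin also makes the collective's job symmetric: dragging the lonesome to triple or to a third of the true value is equally "successful."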
how would you go about doing that?
thanks for the suggestions! and huh, I did not know this about textbooks, I think that makes it more viable as a partitioned book club feature.
That's it! Thank you.
I'm trying to remember the name of a blog. The only things I remember about it is that it's at least a tiny bit linked to this community, and that there is some sort of automatic decaying endorsement feature. Like, there was a subheading indicating the likely percentage of claims the author no longer endorses based on the age of the post. Does anyone know what I'm talking about?
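For reference, the kind of auto-decaying endorsement header described here could be computed with simple exponential decay. This is a hypothetical sketch: the half-life value and the decay model are my assumptions, not the actual blog's method.

```python
from datetime import date

def endorsement_pct(post_date: date, today: date, half_life_years: float = 5.0) -> float:
    """Estimated percentage of a post's claims the author still endorses,
    decaying exponentially with the post's age."""
    age_years = (today - post_date).days / 365.25
    return 100.0 * 0.5 ** (age_years / half_life_years)

# a brand-new post shows 100%; a 5-year-old post with a 5-year
# half-life shows ~50%
```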
thanks for writing this! can you say a little bit more about the process of writing notes on a scribe? I've been interested in getting one, but my understanding is that e-ink displays are good mostly for static content, and writing notes requires the screen to update in real-time, which will drain the battery fairly quickly? my own e-reader is from like, 2018, so idk if there's been significant updates. how often do you need to charge them when you're using them?
your points about taking the time to think through problems and how you can do this across many contexts is definitely what i was going for subtextually. so, thanks for ruining all of my delicate subtlety, adam :p
standing on others' shoulders is definitely a reasonable play as well, although this is not something that works great for me as a Canadian - international shipping is expensive and domestic supply of any recommended product isn't guaranteed.
counterpoint: I run a weekly meetup in a mid-size Canadian city and I think it's going swimmingly. It is not trivial to provide value but it is also not insurmountably difficult: I got funding from the EA Infrastructure Fund to buy out one day of my time per week for running meetups and content planning, and that's enough for me to create programming that people really like, in addition to occasional larger events like day trips and cottage weekends. 8-12 people show up to standard meetups, I'd say around 70% are regulars who show up ~weekly, and then you have a long tail of errants. Lots of people move away since it's a university town, but when they visit they make sure to come to a meetup and catch up.
re: constraining, filling a new niche, etc - i feel like your POV is a bit doomered, and this is pretty easy for a rationalist meetup to do: just enforce rules for good discourse norms and strongly signal that any topic is allowed as long as the dialogue remains constructive. make it a safe space for the people who will run their mouths in favor of the truth even when it kills the vibe at other parties and everyone else is glaring daggers at them, and people will show up. They'll show up because they can't get a community like that anywhere else in the city, as long as the city in question isn't in the bay area :P
heh, thanks, I was going to make a joke about memorizing the top 10 astrology signs but then I didn't think it was funny enough to actually complete
leaving out obvious things like religious garb/religious symbols in jewelry, engagement rings/wedding bands, various pride flag colours and meanings etc:
- semicolon tattoos: indicates that someone is struggling with or has overcome severe mental health challenges such as suicidal depression. You see them fairly often if you look for them. i've heard that butterflies and a few other tattoos mean similar things, but you'll run into false positives with any more generic tattoos.
- claddagh rings: learned about this while jewelry shopping recently; it's a ring that looks like a pair of hands holding a heart. it's an irish thing, the finger you wear it on and whether or not it's inverted indicates your relationship status.
- iron rings: In Canada, engineers wear an iron ring on the little finger of their working hand, traditionally said to be made from the remains of a bridge that collapsed catastrophically. a decent number of my engineer friends wear the ring.
- lace code: basically entirely dead, but if someone is dressed like a punk and they're wearing black boots with red laces, there's enough of a chance that they're a nazi that i'd avoid them. there's like a whole extended universe of lace colours and their meanings but red is the most (in)famous one.
- astrology jewelry: astrology obviously isn't real, but if someone is wearing jewelry with their astrological sign, that tells you that 1) they are into astrology (or homestuck if you're lucky) and 2) they likely have some affinity with their designated star sign, which you can ask them about.
- teardrop tattoo right under the eye: this person killed someone or was in prison at some point, or wants to pretend that that's true of them (e.g. if they're a soundcloud rapper from the suburbs). see also other prison tattoos
- puzzle piece tattoo or jewelry: this person likely has an autistic child or close family member, and is not super up to date on the most uh, progressive thoughts on the topic. autistic people themselves are more likely to dislike the puzzle piece symbolism for autism
Thanks for writing this piece; I think your argument is an interesting one.
One observation I've made is that MIRI, despite its first-mover advantage in AI safety, no longer leads the conversation in a substantial way. I do attribute this somewhat to their lack of significant publications in the AI field since the mid-2010s, and their diminished reputation within the field itself. I feel like this serves as one data point that supports your claim.
I feel like you've done a good job laying out potential failure modes of the current strategy, but it's not a slam dunk (not that I think it was your intention to write a slam dunk as much as it was to inject additional nuance to the debate). So I want to ask, have you put any thought into what a more effective strategy for maximizing work on AI safety might be?
Thanks for writing this up! We tried this out in our group today and it went pretty well :-)
Detailed feedback:
Because our venue didn't have internet I ended up designing and printing out question sheets for us to use (google docs link). Being able to compare so many responses easily, we were able to partner up first and find disagreements second, which I think was overall a better experience for complete beginners. The takes you were most polarized on with any random person weren't actually that likely to be the ones you felt most strongly about, and there were generally a few options to choose from. So we got a lot of practice in with cruxing without getting particularly heated. I'd like to find a way to add that spice back for a level 2 double crux workshop, though!
We repurposed the show-of-fingers agreement/disagreement scale for coming up with custom questions; we had quite a few suggestions but only wrote down the ones that got a decent spread in opinion. This took a while to do, but was worth it, because I was actually really bad at choosing takes that would be controversial in the group, and people were like "wtf Jenn how can we practice cruxing if we all agree that everything here is a bunch of 3s." (slightly exaggerated for effect)
I didn't realize this until I was running the event, but this write-up was really vague on what was supposed to happen after step 3! I ended up referencing this section of the double crux post a lot, and we ended up with this structure:
- partner up and identify a polarized opinion from the question sheet that you and your partner are both interested in exploring.
- spend 5 minutes operationalizing the disagreement.
- spend 5 minutes doing mostly independent work coming up with cruxes.
- spend 15 minutes discussing with your partner and finding double cruxes. (in our experience, it was actually quite rare for the cruxes to overlap!) you'll very likely have to do more operationalizing/refining of the disagreement here. (I'm not sure if that's normal or if we're doing it slightly wrong.)
- come back together in a large group, discuss your experience trying to find a double crux, and share one lesson from your attempt with the rest of the group so everyone learns from others' experiences/mistakes. I did this in lieu of the check-ins, because the discussions all seemed pretty tame.
- repeat from step 1, with a different partner and different opinion.
We did two rounds in total. People unfortunately did not find the second round easier than the first, but seemed overall to find the workshop a valuable experience! One person commented that it led to much more interesting conversation than most readings-based meetups, and I'm inclined to agree.
The question is rather, what qualities do EAs want themselves and the EA movement to have a reputation for?
Yes, I think this is a pretty central question. To cross the streams a little, I did talk about this a bit more in the EA Forums comments section: https://forum.effectivealtruism.org/posts/5oTr4ExwpvhjrSgFi/things-i-learned-by-spending-five-thousand-hours-in-non-ea?commentId=KNCg8LHn7sPpQPcR2
I get a sense that the org is probably between 15 and 50 years old
Yep, close to the top end of that.
It's probably been through a bunch of CEOs, or whatever equivalent it has, in that time. Those CEOs probably weren't selected on the basis of "who will pick the best successor to themselves". Why has no one decided "we can help people better like this, even if that means breaking some (implicit?) promises we've made" and then oops, no one really trusts them any more?
That's a really great observation. Samaritans has sidestepped this problem simply by having no change in leadership throughout the entire run of the organization so far. They'll have to deal with a transition soon as the founders are nearing retirement age, but I think they'll be okay; there are lots of well-aligned people in the org who have worked there for decades.
Have they had any major fuck ups? If so, did that cost them reputationally? How did they regain trust?
If not, how did they avoid them? Luck? Tending to hire the sorts of people who don't gamble with reputation? (Which might be easier because that sort of person will instead play the power game in a for-profit company?) Just not being old enough yet for that to be a serious concern?
They haven't had any major fuck ups, and there are two main reasons for that imo:
- The culture is very, very hufflepuff, and it shows. When you talk to people from Samaritans it's very obvious that the thing they want to do the most is to do as much good as possible, in the most direct way possible, and they are not interested in any sort of moral compromise. They've turned down funding from organizations that they didn't find up to snuff. Collaborating orgs either collaborate on Samaritans' stringent terms, or not at all.
Doing the work this way has gotten easier as working with Samaritans has become an increasingly strong and valuable signal of goodness, but they didn't make compromises even as a very young and cash-strapped organization.
- They have a very, very slow acculturation process for staff. It's very much one of those organizations where you have to be in it for over a decade before they start trusting you to make significant decisions, and no one who is unaligned would find working there for a decade tolerable, lol. So basically there are no unaligned rogue actors inside it at all.
[reputation and popularity] probably have overlapping causes and effects, but they're not the same.
I'm inclined to think that this is a distinction without a difference, but I'm open to having my mind changed on this. Can you expand on this point further? I'm struggling to model what an organization that has a good reputation but is unpopular, or vice versa, might look like.
If EA as a whole is unpopular, that's also going to cause problems for well-reputed EA orgs.
Yes, I think that's the important part, even though you're right that we can't do much about individual orgs choosing to associate themselves with EA branding.
I share your sense that EAs should be thinking about reputation a lot more. A lot of the current thinking has also been very reactive/defensive, and I think that's due both to external factors and to the fact that the community doesn't realize how valuable an actually good reputation can be - though Nathan is right that it's not literally priceless. Still, I'd love to see the discourse move in a more proactive direction.
Thanks for your super thought out response! I agree with all of it, especially the final paragraph about making EA more human-compatible. Also, I really love this passage:
We can absolutely continue our borg-like utilitarianism and coldhearted cost-benefit analysis while projecting hospitality, building reputation, conserving slack, and promoting inter-institutional cooperation!
Yes. You get me :')
I don't think the answer is super mysterious; a lot of people are in the field for the fuzzies, and it weirds them out that there are some weirdos in the field who seem to be missing "heart".
It is definitely a serious problem because it gates a lot of resources that could otherwise come to EA, but I think this might be a case where the cure could be worse than the disease if we're not careful - how much funding needs to be dangled before you're willing to risk EA's assimilation into the current nonprofit industrial complex?
The meeting rooms are in the basement! If you come in through the main entrance, do a U turn to the left of the vestibule and go down the stairs. It'll be the first door to your right
Sort of related, everything studies wrote this essay in 2017 and now "wamb" is a term that my friends and I use all the time.
https://everythingstudies.com/2017/11/07/the-nerd-as-the-norm/
i'm a tag wrangler for the archiveofourown, so if you're interested in learning more about human-assisted organizational structures, feel free to slide into my dms (although I might take a while to respond).
here's an explainer put out by wired on what i and other volunteers do: https://www.wired.com/story/archive-of-our-own-fans-better-than-tech-organizing-information/
i don't think it's a stretch to say that ao3 has the best tagging system on the internet from a user perspective, but you don't get a system that good unless you pay the price, and take the tradeoffs. but yeah, just putting this on your radar if it wasn't :)
eta: I don't expect this to be a feasible solution for lw, this is more to broaden your scope on what's out there so you can make a better informed decision at the end.
Instrumentally, upgrading your class seems like a powerful intervention, so it is really surprising when someone allegedly trying to "optimize their life" is selectively ignorant about this. Moving to a higher class would probably have more impact than all meditation and modafinil combined.
I think it depends on what exactly you're optimizing your life for. Generally, being surrounded by people who are not in your class is very unpleasant, so you find the class that you belong to and settle in there.
lsusr mentioned previously, for example, that intellectualism is a middle class trait. Moving upwards into a class that doesn't value intellectualism would make my life significantly worse. Instead, I strive for status within my class and have no intention of rising above it.
So this is a write-up of discussion points brought up at a meetup, it's not intended to be a comprehensive overview about every single thing about social class.
That being said, we did go into Marxist theory a little, but mostly to talk about how it's now pretty common to be wealthy without owning any productive capital, whether or not actors and athletes can be said to own any productive capital, and the kerfuffle surrounding California's new bill to allow college athletes to earn an income.
It's hard to find good English sources on this, but Africa's great green wall might count