I think the expenses for the website look high in this post because so much of it goes into invisible work like mod tools. Could you say more about that invisible work?
it looks like you're taking the total amount spent per employee as the take-home salary, which is incorrect. At a minimum that amount should include payroll taxes, health insurance, CA's ETT, and state and federal unemployment insurance tax. It can also include things like education benefits, equipment, and 401k bonuses. Given the crudeness of the budget, I expect there's quite a bit being included under "etc".
(note for readers: I effectively gave >$10k to LW last year, this isn't an argument against donating)
This seems quite modest by EA COI standards.
Doesn't EAIF give to other EVF orgs? Seems weird that you would be a conflict of interest but that isn't.
I was part of the 2.0 reboot beta: there are no posts of mine on LW before that
Comments on my own blog are almost nonexistent; all the interesting discussion happens on LW and Twitter.
(Full disclosure: am technically on mod team and have deep social ties to the core team)
Yes. This is not unusually bad for a medical paper but that's not exactly a defense.
Perplexity is still my daily driver, due to the superior UI. I go to Elicit or you.com for harder problems.
Because I don't believe the papers saying that iodine doesn't alter the thyroid.
can you elaborate on "this format"?
see also: https://www.lesswrong.com/posts/Wiz4eKi5fsomRsMbx/change-my-mind-veganism-entails-trade-offs-and-health-is-one
There’s a lot here and if my existing writing didn’t answer your questions, I’m not optimistic another comment will help[1]. Instead, how about we find something to bet on? It’s difficult to identify something both cruxy and measurable, but here are two ideas:
I see a pattern of:
1. CEA takes some action with the best of intentions
2. It takes a few years for the toll to come out, but eventually there’s a negative consensus on it.
3. A representative of CEA agrees the negative consensus is deserved, but since it occurred under old leadership, doesn’t think anyone should draw conclusions about new leadership from it.
4. CEA announces new program with the best of intentions.
So I would bet that within 3 years, a CEA representative will repudiate a major project occurring under Zach’s watch.
I would also bet on more posts similar to Bad Omens in Current Community Building or University Groups Need Fixing coming out in a few years, talking about 2024 recruiting.
- ^
Although you might like Change my mind: Veganism entails trade-offs, and health is one of the axes (the predecessor to EA Vegan Advocacy is not Truthseeking) and Truthseeking when your disagreements lie in moral philosophy and Love, Reverence, and Life (dialogues with a vegan commenter on the same post)
Seeing my statements reflected back is helpful, thank you.
I think Effective Altruism is upper case and has been for a long time, in part because it aggressively recruited people who wanted to follow[1]. In my ideal world it both has better leadership and needs less of it, because members are less dependent.
I think rationality does a decent job here. There are strong leaders of individual fiefdoms, and networks of respect and trust, but it's much more federated.
- ^
Which is noble and should be respected- the world needs more followers than leaders. But if you actively recruit them, you need to take responsibility for providing leadership.
I'm curious why this feels better, and for other opinions on this.
How much are you arguing about wording, vs genuinely believe and would bet money that in 3-5 years my work will have moved EA to something I can live with?
The desire for crowdfunding is less about avoiding bias[1] and more that this is only worth doing if people are listening, and small donors are much better evidence on that question than grants. If EV gave explicit instructions to donate to me it would be more like a grant than spontaneous small donors, although I in general agree people should be looking for opportunities they can beat GiveWell.
ETA: we were planning on waiting on this but since there's interest I might as well post the fundraiser now.
- ^
I'm fortunate to have both a long runway and sources of income outside of EA and rationality. One reason I've pushed as hard as I have on EA is that I had a rare combination of deep knowledge of and financial independence from EA. If I couldn't do it, who could?
there are links in the description of the video
Maybe you just don't see the effects yet? It takes a long time for things to take effect, even internally in places you wouldn't have access to, and even longer for them to be externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world. That's why I'm pretty sure your work has had nontrivial impact. I am not too surprised that its impact hasn't become apparent to you though.
I've repeatedly had interactions with ~leadership EA that asks me to assume there's a shadow EA cabal (positive valence) that is both skilled and aligned with my values. Or puts the burden on me to prove it doesn't exist, which of course I can't do. And what you're saying here is close enough to trigger the rant.
I would love for the aligned shadow cabal to be real. I would especially love if the reason I didn't know how wonderful it was was that it was so hypercompetent I wasn't worth including, despite the value match. But I'm not going to assume it exists just because I can't definitively prove otherwise.
If shadow EA wants my approval, it can show me the evidence. If it decides my approval isn't worth the work, it can accept my disapproval while continuing its more important work. I am being 100% sincere here, I treasure the right to take action without having to reach consensus- but this doesn't spare you from the consequences of hidden action or reasoning.
This is a good point. In my ideal movement it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don't want to be closer, and that I don't respect them), while being the sort of thing that requires leaders.
If people in EA would consider her critiques to have real value, then the obvious step is to give Elizabeth money to write more [...] If she would get paid decently, I would expect she would feel she's making an impact.
First of all, thank you, love it when people suggest I receive money. Timothy and I have talked about fundraising for a continued podcast. I would strongly prefer most of the funding be crowdfunding, for the reason you say. If we did this it would almost certainly be through Manifund. Signing up for Patreon and noting this as the reason also works, although for my own sanity this will always be a side project.
I should note that my work on EA up through May was covered by a Lightspeed grant, but I don't consider that EA money.
Reading this makes me feel really sad because I’d like to believe it, but I can’t, for all the reasons outlined in the OP.
I could get into more details, but it would be pretty costly for me for (I think) no benefit. The only reason I came back to EA criticism was that talking to Timothy feels wholesome and good, as opposed to the battery acid feeling I get from most discussions of EA.
There were ~20 in round 2, and I've gotten reports of other people being inspired by the post to get tested themselves that I estimate at least double that.
I think not enforcing an "in or out" boundary is a big contributor to this degradation -- like, majorly successful religions required all kinds of sacrifice.
I feel ambivalent about this. On one hand, yes, you need to have standards, and I think EA's move towards big-tentism degraded it significantly. On the other hand, I think having sharp inclusion functions is bad for people in a movement[1], cuts the movement off from useful work done outside itself, selects for people searching for validation and belonging, and selects against thoughtful people with other options.
I think I'm reasonably Catholic, even though I don't know anything about the living Catholic leaders.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn't have a leader they trust and respect, because Catholicism has a longer tradition, and you can work within that. On the other hand... I wouldn't say this to most people, but my model is you'd prefer I be this blunt... my understanding is Catholicism is about submission to the hierarchy, and if you're not doing that or don't actively believe they are worthy of that, you're LARPing. I don't think this is true of (most?) protestant denominations: working from books and a direct line to God is their jam. But Catholicism cares much more about authority and authorization.
It feels like AI safety is the best current candidate for [lifeboat], though that is also much less cohesive and not a direct successor for a bunch of ways. I too have been lately wondering what "Post EA" looks like.
I'd love for this to be true because I think AIS is EA's most important topic. OTOH, I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the ponzi scheme recruiting is all aimed at AIS and meta (which is more AIS). ETG was a much more viable role for GD than for AIS.
- ^
If you're only as good as your last 3 months, no one can take time to rest and reflect, much less recover from burnout.
Some related posts:
- one example among many of a long runway letting me make more moral choices
- ongoing twitter thread on frying pan agency
- I get to the airport super early because any fear of being late turns me into an asshole.
I used the word obligation because it felt too hard to find a better one, but I don't like it, even for saving children in shallow ponds. In my mind, obligations are for things you signed up for. In our imperfect world I also feel okay using it for things you got signed up for and benefit from (e.g. I never agreed to be born in the US as a citizen, but I sure do benefit from it, so taxes are an obligation). In my world obligations are always to a specific entity, not general demands.
I think that for some people, rescuing drowning children is an obligation to society, similar to taxes. Something feels wrong about that to me, although I'd think very badly of someone who could have trivially saved a child and chose not to.
A key point for me is that people are allowed to be shitty. This right doesn't make them not-shitty or free them from the consequences of being shitty, but it is an affordance available to everyone. Not being shitty requires a high average on erogatory actions, plus some number of supererogatory ones.
How many supererogatory actions? The easiest way to define this is relative to capacity, but that seems toxic to me, like people don't have a right to their own gains. It also seems likely to drive lots of people crazy with guilt. I don't know what the right answer is.
TBH I've been really surprised at my reaction to "~obligation to maximal growth". I would have predicted it would feel constraining and toxic, but it feels freeing and empowering, like I've been given more chances to help people at no cost to me. I feel more powerful. I also feel more permission to give up on what is currently too hard, since sacrificing myself for one short term goal hurts my long term obligation.
Maybe the key is that this is a better way to think about achieving goals I already had. It's not a good frame for deciding what one's goals should be.
[cross-posted from What If You Lived In the Least Convenient Possible World]
I came back to this post a year later because I really wanted to grapple with the idea I should be willing to sacrifice more for the cause. Alas, even in a receptive mood I don't think this post does a very good job of advocating for this position. I don't believe this fictional person weighed the evidence and came to a conclusion she is advocating for as best she can: she's clearly suffering from distorted thoughts and applying post-hoc justifications. She's clearly confused about what convenient means (having to slow down to take care of yourself is very inconvenient), and I think this is significant and not just a poor choice of words. So I wrote my own version of the position.
Let's say Bob is right that the costs exceed the benefits of working harder or suffering. Does that need to be true forever? Could Bob invest in changing himself so that he could better live up to his values? Does he have an ~obligation[1] to do that?
We generally hold that people who can swim have obligations to save drowning children in lakes[2], but there's no obligation for non-swimmers to make an attempt that will inevitably drown them. Does that mean they're off the hook, or does it mean their moral failure happened when they chose not to learn how to swim?
One difficulty with this is that there are more potential emergencies than we could possibly plan for. If someone skipped the advanced swim lesson where you learn to rescue panicked drowning people because they were learning wilderness first aid, I don't think that's a moral failure.
This posits a sort of moral obligation to maximally extend your capacity to help others or take care of yourself in a sustainable way. I still think obligation is not quite the right word for this, but to the extent it applies, it applies to long term strategic decisions and not in-the-moment misery.
What makes grammarly pro worth it? I used the free version for a while, but it became so aggressive with unwanted corrections I couldn't even see the real suggestions, chrome caught up with the useful features, and on long essays it crippled my browser.
Never [consciously] hating anyone more than transiently and finding a childhood that bad to not be terrible are consistent with an unwell person in denial (in a way I don't think holds for all of your statements. e.g. "self loathing was a confusing concept for me" feels much more consistent with the kind of confusion I'd expect from 99th percentile mental health).
It's bad form to psychoanalyze people in public, especially when they've made themselves vulnerable. But for the benefit of others I want to register that what's written here is consistent with both psychological health and specific forms of being very unwell. And even for people for whom this is what psychological health looks like, you can't cargo cult it.
Thanks for this comment. I was both very moved by this post and unwilling to lean into it due to fears I couldn't articulate. "fear of being eaten" is a pretty good match for what I was feeling, and having read the post I feel much more able to distinguish shadowmoth situations from being-eaten situations.
Some aspects that seem important to me for distinguishing between the two:
- do you actually want the goal that struggling is supposed to bring you closer to? Did you choose it, or was it assigned to you? how useful is the goal to the non-helper?
- how much work is the non-helper saving themselves by refusing to help you? In the shadowmoth case it was hurting the non-helper, which makes not-helping more likely to be genuinely for the moth's benefit.
- is the struggling recurring indefinitely, or is there some definitive endpoint? or at least, a gradual graduation to a higher class of problem?
- can you feel something strengthening as you struggle?
- is there reason to believe you'll eventually be capable of doing the thing?
- is this goal the best use of your limited energy? maybe someone should help you out of this cocoon so you can struggle against a more important one.
- are you going into freeze response?
I agree with your examples and larger point. That was a gloss I was never happy with but can't quite bring myself to remove. I was hoping to articulate the reasons better than this but so far they're eluding me.
Remember the proof that humans can't get high scores playing pinball because 'chaos theory'?
Can you point to where the post says this? Because I read it as saying "It is impossible to predict a game of pinball for more than 12 bounces in the future" and "Professional pinball players try to avoid the parts of the board where the motion is chaotic."
what I want for rationality techniques is less a tutor and more of an assertive rubber duck walking me through things when capacity is scarce.
I bet Jeff Bezos was like "okay I bet I could make an online bookstore that worked", and was also thinking "but, what if I ultimately wanted the Everything Store? What are obstacles that I'd eventually need to deal with?"
I've heard Jeff Bezos was aiming for Everything Store from the beginning, and started with books because they have a limited range of sizes.
If you'd like to recommend a particular AI product, please reply to this thread.
People who think my premise is faulty: please give your arguments under this thread.
I disagree with the sibling thread about this kind of post being “low cost”, BTW; I think adding salience to “who blocked whom” types of considerations can be subtly very costly.
I agree publicizing blocks has costs, but so does a strong advocate of something with a pattern of blocking critics. People publicly announcing "Bob blocked me" is often the only way to find out if Bob has such a pattern.
I do think it was ridiculous to call this cultish. Tuning out critics can be evidence of several kinds of problems, but not particularly that one.
Curation notice: a good old-fashioned fact post. It's relevant, detailed, and legibly written.
Continuing the list...
- mother can't be separated from the baby for longer than it takes them to get hungry, and must handle every nighttime wakeup. In the newborn phase that can be every 1-3 hours.
- you can get around this by pumping, but this has its own costs. Pumping is uncomfortable to painful, time consuming, and then has all the inconvenience of formula feeding and then some (like maintaining a cold chain, and more steps requiring sterilization). It's also a hardcore logistical puzzle to get as much stored milk as possible without underfeeding your infant in the moment. The best solutions are the least convenient to the mother.
- and this is all pretty best case scenario. Lots of women or their babies have medical impediments, or just don't produce enough milk, even on medication.
- some babies suck at breastfeeding and need a bottle. you could always pump and bottle feed, but as we covered, pumping has its own cost.
- some women need medications that are contraindicated by breastfeeding.
None of this contradicts the evidence that breastfeeding is beneficial, or easier for some people. But the frame should be "this is (usually) a sacrifice that we want to quantify the benefits of, to figure out if it's worth it" not "hey, free value!"
Huh, yeah, does seem like Claude was the winner there. I reproed the intellectual disability answers and got the same results you did. I was able to get a better answer from Perplexity with a slight rephrase, but I hate having to play the rephrase game with AIs, so that's a modest mitigator. And the answer was not internally consistent.
I think the difference between us might be that I do primarily want a search engine, and perhaps my natural phrasing works better with Perplexity.
I got my first hallucination shortly after posting this- it's definitely not perfect. But I still find the ease of checking a big improvement over other models.
I haven't, but if you have I'd love if you posted the results here.
I assume you're using claude pro? Because I found the top free version unusable.
Could you post some questions you've run on both and their answers?
Projects I might do or wish someone else would do:
- Compare AI research assistants more intensively.
- Compare individual Perplexity models
- Compare paper reading ability across models.
- Look into that fluoride paper
These results are all from the vanilla UI. Comparing individual models on harder tasks is on my maybe list, but the project was rapidly suffering from scope creep so I rushed it out the door.
You can also retroactively support my work in the EA Community Choice grants. This is narratively for my covid work, but if you would like to label it with something else that's allowed.
What happened with Approximate Entropy was that Chaos could be useful, it just wasn't as useful as a pure information theory derived solution. Wouldn't surprise me if that were true here as well.
Gleick gives Mandelbrot credit for this, but it wouldn't be the first major misrepresentation I've found in the Gleick book.
I know someone's gonna ask so let me share the most powerful misrepresentation I've found so far: Gleick talks about the Chaos Cabal at UC Santa Cruz creating the visualization tools for Chaos all on their own. In The Chaos Avant-Garde, Professor Ralph Abraham describes himself as a supporter of the students (could be misleading) and, separately, founding the Visual Math Project. VMP started with tools for undergrads but he claims ownership of chaos work within a few years. I don't know if the Chaos Cabal literally worked under Abraham, but it sure seems likely the presence of VMP affected their own visualization tools.
One thing that jumps out, writing this out explicitly, is that chaos theory conceivably could be replaced with an intuition of "well obviously that won't work," and so I don't know to what extent chaos theory just formulated wisdom that existed pre-1950s, vs generated wisdom that got incorporated into modern common sense.
Yeah, this is top of my mind as well. To get a sense of the cultural shift I'm trying to interview people who were around for the change or at least knew people who were. If anyone knows any boomer scientists or mathematicians with thoughts on this I'd love to talk to them.
I haven't nailed this down yet, but there's an interesting nugget along the lines of chaos being used to get scientists comfortable with existential uncertainty, which (some) mathematicians already were. My claim of the opposite triggered a great discussion on twitter[1], and @Jeffrey Heninger gave me a great concept: "the chaos paradigm shift was moving from 'what is the solution?' to 'what can I know, even if the solution is unknowable?'" That kind of cultural shift seems even more important than the math but harder to study.
BTW thank you for your time and all your great answers, this is really helpful and I plan on featuring "ruling things out" if I do a follow-up post.
- ^
I've now had two ex-mathematicians, one ex-astrophysicist, and one of the foremost theoretical ecologists in the world convincingly disagree with me.