I enjoyed writing this post, but think it was one of my lesser posts. It's pretty ranty and doesn't bring much real factual evidence. I think people liked it because it was very straightforward, but I personally think it was a bit overrated (compared to other posts of mine, and many posts of others).
I think it fills a niche (quick takes have their place), and some of the discussion was good.
but I think this is actually a decline in coal usage.
Ah, my bad, thanks!
They estimate ~35% increase over the next 30 years
That's pretty interesting. I'm somewhat sorry to see it's linear (I would have hoped solar/battery tech would improve more, leading to much faster scaling, 10-30 years out), but it's at least better than some alternatives.
I found this last chart really interesting, so did some hunting. It looks like electricity generation in the US grew linearly until around 2000. In the last 10 years, though, there's been a very large decline in "petroleum and other", along with a strong increase in natural gas, and a smaller, but significant, increase in renewables.
I'd naively guess things to continue to be flat for a while as petroleum use decreases further; but at some point, I'd expect energy use to increase again.
That said, I'd of course like for it to increase much, much faster (more like China). :)
I liked this post a lot, though of course, I didn't agree with absolutely everything.
These seemed deeply terrible. If you think the best use of funds, in a world in which we already have billions available, is to go trying to convince others to give away their money in the future, and then hoping it can be steered to the right places, I almost don’t know where to start. My expectation is that these people are seeking money and power,
I'm hesitant about this for a few reasons.
Sure, we have a few billion available, and we're having trouble donating that right now. But we're also not exactly doing a ton of work to donate our money yet. (This process gave out $10 million, with volunteers.) In the scheme of important problems, a few (~40-200) billion really doesn't seem like that much to me. Marginal money, especially lots of money, still seems pretty good.
My expectation is that these people are seeking money and power -> I don't know which specific groups applied or their specific details. I can say that, in my impression, lots of EAs really just don't know what else to do. It's tough to enter research, and we just don't have that much in terms of "these interventions would be amazing, please someone do them" for longtermism. I've seen a lot of orgs get created with something like, "This seems like a pretty safe strategy, it will likely come into use later on, and we already have the right connections to make it happen." This, combined with a general impression that marginal money is still useful in the long term, could, I think, present a more sympathetic take than what you describe.
The default strategy for lots of non-EA entrepreneurs I know has been something like, "Make a ton of money/influence, then try to figure out how to use it for good, because people won't listen to me or fund my projects on my own." I wish more of these people would do direct work (especially in the last few years, when there's been more money), but can sympathize with that strategy. Arguably, Elon Musk is much better off having started with "less ambitious" ventures like Zip2 and PayPal; it's not clear if he would have been funded to start with SpaceX/Tesla when he was younger.
All that said, the fact that EAs have so little idea of what exactly is useful seems like a pretty burning problem to me. (This isn't unique to EAs, to be clear.) On the margin, it seems safe to heavily emphasize "figuring stuff out" instead of "making more money, in hopes that we'll eventually figure stuff out". However, "figuring stuff out" is pretty hard and not nearly as tractable as we'd like it to be.
"I would hire assistance to do at least the following"
I've been hoping that the volunteer funders (EA Funds, SFF) would do this for a while now. Seems valuable to at least try out for a while. In general, "funding work" seems really bottlenecked to me, and I'd like to see anything that could help unblock it.
definitely a case of writing a longer letter
I'm impressed by just how much you write on things like this. Do you have any posts outlining your techniques? Is there anything special, like speech-to-text, or do you spend a lot of time on it, or are you just really fast?
I might suggest starting with an "other" list that can be pretty long. With Slack, different subcommunities focus heavily on different emojis for different functional things. Users sometimes figure out neat innovations and those proliferate. So if it's all designed by the LW team, you might be missing out.
That said, I'd imagine 80% of the benefit is just having anything like this, so I'm happy to see that happen.
I wouldn't read too much into my choice of word there.
It's also important to point out that I was trying to have a model that assumed interestingness. The "disagreeables" I mention are the good ones, not the bad ones. The ones worth paying attention to are, I think, pretty decent here; really, that's the one thing that justifies paying attention to them.
1) This seems great, and I'm impressed by the agency and speed.
2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine the same applies to accusations/whistleblowing about other organizations. I think this is both very, very bad, and unnecessary; as a whole, the community is much more powerful than individual groups, so it seems poorly managed when the community is scared of a specific group. Resources should be spent to cancel this out.
In light of this, if more money were available, it seems easy to justify a fair bit more. Or even better could be something like, "We'll help fund lawyers in case you're attacked legally, or anti-harassing teams if you're harassed or trolled". This is similar to how the EFF helps with cases from small people/groups being attacked by big companies.
I don't mean to complain; I think any steps here, especially so quickly, are fantastic.
3) I'm afraid this will get lost in this comment section. I'd be excited about a list of "things to keep in mind" like this being repeatedly made prominent somehow. For example, I could imagine that at community events or similar, there could be standard handouts like, "Know your rights, as a Rationalist/EA", which flag how individuals can report bad actors and behavior.
4) Obviously a cash prize can encourage lying, but I think this can be decently managed. (It's a small community, so if there's good moderation, $15K would be very little compared to the social stigma that would come from being found out to have destructively lied for $15K.)
The latter option is more of what I was going for.
I’d agree that the armor/epistemics people often aren’t great at coming up with new truths in complicated areas. I’d also agree that they are extremely unbiased and resistant to both bad-faith arguments, and good-faith but systematically misleading arguments (these are many of the demons the armor protects against, if that wasn’t clear).
When I said that they were soft-spoken and poor at arguing, I’m assuming that they have great calibration and are likely arguing against people who are very overconfident, so in comparison they seem meager. I think of a lot of superforecasters in this way; they’re quite thoughtful and reasonable, but not often bold enough to sell a lot of books. Other people with strong epistemics sometimes recognize their skills (especially when they have empirical track records, like in forecasting systems), but that’s right now a meager minority.
When I hear the words "intelligence" and "wisdom", I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups.
I tried to make it clear that I was referring to groups with the phrase, "of humanity", as in, "as a whole", but I could see how that could be confusing.
the wisdom and intelligence of humanity
For those interested in increasing humanity’s long-term wisdom and intelligence
I also suspect that work on optimizing group decision making will look rather different from work on optimizing individual decision making, possibly to the point that we should think of them as separate cause areas.
I imagine there's a lot of overlap. I'd also be fine with multiple prioritization research projects, but think it's early to decide that.
This makes me wonder how nascent this really is?
I'm not arguing that people haven't made successes in the entire field (I think there's been a ton of progress over the last few hundred years, and that's terrific). I would argue though that there's very little formal prioritization of such progress. Similar to how EA has helped formalize the prioritization of global health and longtermism, we have yet to have similar efforts for "humanity's wisdom and intelligence".
I think that there are likely still strong marginal gains in at least some of the intervention areas.
That's an interesting perspective. It does already assume some prioritization though. Such experimentation can only really be done in a very few of the intervention areas.
I like the idea, but am not convinced of the benefit of this path forward, compared to other approaches. We already have had a lot of experiments in this area, many of which cost a lot more than $15,000; marginal exciting ones aren't obvious to me.
But I'd be up for more research to decide if things like that are the best way forward :)
The first few chapters of "The Existential Pleasures of Engineering" detail some optimism, then pessimism, of technocracy in the US at least.
I think the basic story there was that after WW2, in the US, people were still pretty excited about tech. But in the 70s (I think), with environmental issues, military innovations, and general malaise, people became disheartened.
Thanks for the opinion, and I find the take interesting.
I'm not a fan of the line, "How about a policy that if you use illegal drugs you are presumptively considered not yet good enough to be in the community?", in large part because of the phrase "not yet good enough". This is a really thorny topic that seems to have several assumptions baked into it that I'm uncomfortable with.
I also think that many here like at least some drugs that are "technically illegal", in part, because the FDA/federal rules move slowly. Different issue though.
I like points 2 and 3, I imagine if you had a post just with those two it would have gotten way more upvotes.
I really like things like this. I think it's possible we could do a "decent enough" job, though it's impossible to have a solution without risk.
One thing I've been thinking about is a browser extension. People would keep a list of things, like, "User XYZ is Greg Hitchenson", and then when the extension sees XYZ, it adds an annotation.
Lots of people are semi-anonymous already. They have pseudonyms that most people don't know, but "those in the know" do. This sort of works, but isn't formalized, and can be a pain. (Lots of asking around: "Who is X?")
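The extension idea above could be sketched roughly like this. This is a minimal, hypothetical sketch of just the lookup step; `knownIdentities`, `annotateUsername`, and the sample entry are illustrative names (the "User XYZ is Greg Hitchenson" example is taken from above), not a real implementation:

```javascript
// Minimal sketch of the core lookup such an extension might use.
// A real extension would also need a content script that walks the
// page's DOM and rewrites username nodes; this only shows the mapping.

// User-maintained list of known identities (from the example above).
const knownIdentities = {
  "XYZ": "Greg Hitchenson",
};

// Given a username, return an annotated version if we know who it is,
// and the username unchanged otherwise.
function annotateUsername(username) {
  const realName = knownIdentities[username];
  return realName ? `${username} (${realName})` : username;
}
```

A real version would run this over text nodes via a content script, and re-run it on dynamically loaded content (e.g. with a `MutationObserver`), but the lookup itself is about this simple.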
I imagine grantmakers would be skeptical about people who would say "yes" to an optional form. Like, they say they're okay with the information being public, but when it actually goes out, some of them will complain about it, leading to a lot of extra time.
However, some of our community seems unusually reasonable, so perhaps there's some way to make it viable.
I agree that it would have been really nice for grantmakers to communicate more with the EA Hotel, and other orgs, about their issues. This is often a really challenging conversation to have ("we think your org isn't that great, for these reasons"), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don't have much time now to spend on this. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication between grantmakers and small orgs is a big issue. (And I probably have it much easier than most groups, knowing many of the individuals responsible.)
I think the fact that we have so few grantmakers right now is a big bottleneck that I'm sure basically everyone would love to see improved. (The situation isn't great for current grantmakers, who often have to work long hours). But "figuring out how to scale grantmaking" is a bit of a separate discussion.
Around making the information public specifically, that's a whole different matter. Imagine the value proposition, "If you apply to this grant, and get turned down, we'll write about why we don't like it publicly for everyone to see." Fewer people would apply, and many would complain a whole lot when it happens. The LTFF already gets flack for writing somewhat-candid information on the groups they do fund.
(Note: I was a guest manager on the LTFF for a few months, earlier this year)
I was just thinking of the far right-wing and left-wing in the US; radical news organizations and communities. QAnon, some of the radical environmentalists, conspiracy groups of all types. Many intense religious communities.
I'm not making a normative claim about the value of being "moral" and/or "intense", just saying that I'd expect moral/intense groups to have some of the same characteristics and challenges.
To put it bluntly, EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously -- from the perspective of a potential abuser, this is ripe fruit ready to be taken, it is even obvious what sales pitch you should use on them.
For what it’s worth, I think this is true for basically all intense and moral communities out there. The EA/rationalist groups generally seem better than many religious and intense political groups in these areas, to me. However, even “better” is probably not at all good enough.
I very much agree about the worry. My original comment was meant to make the easiest case quickly, but I think more extensive cases apply too. For example, I’m sure there have been substantial problems even in the other notable orgs, and in expectation we should expect there to continue to be. (I’m not saying this based on particular evidence about these orgs, more that the base rate for similar projects seems bad, and these orgs don’t strike me as absolutely above these issues.)
One solution (of a few) that I’m in favor of is to just have more public knowledge about the capabilities and problems of orgs.
I think it’s pretty easy for orgs of about any quality level to seem exciting to new people and recruit them or take advantage of them. Right now, some orgs have poor reputations among those “in the know” (generally for producing poor quality output), but this isn’t made apparent publicly. One solution is to have specialized systems that actually present negative information publicly; this could be public rating or evaluation systems.
This post by Nuno was partially meant as a test for this:
Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have. I think that in the case of Leverage, there really should have been some deep investigation a few years ago, perhaps after a separate setup to flag possible targets of investigation. Back then things were much more disorganized and more poorly funded, but now we’re in a much better position for similar efforts going forward.
 I don’t particularly blame them, consider the alternative.
As someone part of the social communities, I can confirm that Leverage was definitely a topic of discussion for a long time around Rationalists and Effective Altruists. That said, often the discussion went something like, "What's up with Leverage? They seem so confident, and take in a bunch of employees, but we have very little visibility." I think I experienced basically the same exact conversation about them around 10 times, along these lines.
As people from Leverage have said, several Rationalists/EAs were very hostile around the topic of Leverage, particularly in the last ~4 years or so. (I've heard stories of people getting shouted at just for saying they worked at Leverage at a conference). On the other hand, they definitely had support by a few rationalists/EA orgs and several higher-ups of different kinds.
They've always been secretive, and some of the few public threads didn't go well for them, so it's not too surprising to me that they've had a small LessWrong/EA Forum presence.
I've personally very much enjoyed mostly staying away from the controversy, though very arguably I made a mistake there.
(I should also note that I had friends who worked at or close to Leverage, I attended like 2 events there early on, and I applied to work there around 6 years ago.)
Is there any culture in which power structures aren't systemic and deeply ingrained into our culture? Even a tribe of hunter-gatherers has its cultural norms that regulate the power between individuals.
I agree. I think there's a whole lot of stuff deeply ingrained in the culture of every group.
I would expect that most people at LessWrong don't have a problem with power structures provided they fulfill criteria like being meritocratic and a few other criteria.
It's hard for me to understand your argument here, I expect that this would have to be a much longer discussion. I'm not saying that there aren't some cases where power structures aren't justified. But I think there are pretty clearly some that almost all of us would agree were unjustified, and I think that a lot of racial/historical cases work like that.
Great question! I have some books I personally enjoyed, and also would like to encourage others to recommend texts. I'm sure that my understanding is vastly less than what I'd really want. However, there are a few books that come to mind.
I think the big challenge, for me, is "attempting to empathize and understand African Americans". This is incredibly, ridiculously difficult! Cultures are very different from one another. I grew up in an area with a large mix of ethnic groups, and I think that was useful, but the challenge is far greater.
In pop culture, I found "Dear White People", both the movie, and the TV show (mostly the first 2 seasons), to be pretty interesting.
I really like James Baldwin, though enjoyed his speeches more than his books, so far.
Honestly, African American Studies is just a gigantic field with lots of great work. This can be looked at as interesting to better understand African Americans, but there are also a lot of other take-aways, like understanding severe cognitive biases and motivated reasoning from a very different angle.
Thanks so much for clarifying! Sorry to have misinterpreted that.
I think this topic is particularly toxic for online writing. People can be intensely attacked for either side here. This means that people on either side feel more inclined to hint at their positions rather than directly stating them. Which correspondingly means that I'm more inclined to read text as being hints.
If you or others want to have a private video call about these topics I'd be happy to do so (send me a PM), I just hate public online discussion for topics like these.
But he's also stating that he thinks I have literally nothing to offer him by way of new information and vice-versa. That's pretty low!
This is definitely not how I saw it.
I'm sure everyone has a lot to learn from everyone else. The big challenge is that this learning is costly and we have extremely limited resources. There's an endless number of discussions we could be part of, and we all have very limited time left in total (for discussions and other things). So if I try to gently leave a conversation, it's mainly a signal of "I don't think that this is the absolutely most high-value thing for me to be doing now", which is a high bar!
Second, I think you might have been taking this a bit personally, like me trying to hold off conversation was a personal evaluation of you as a person.
Again, I know very little about you, and I used to know even less (when you made the original comment). This is the comment in question:
Defending a position by pointing out that a portion (however big or small) of the critics of the position are 'vitriolic' isn't actually a valid argument. If people really hate something so much so that they get emotional about it that's still pretty good evidence that the something is bad.
This really doesn't give me much insight into your position or background. Basically all I know about you is that you wrote these two sentences here, and have written a few comments on LessWrong in the past. My prior for "person with an anonymous name on LessWrong, a few previous comments there, and so on" doesn't make me incredibly excited to spend a lot of time going back and forth. I've been burned in the past, a few times, by people who match similar characteristics.
Often people who use anonymous accounts wind up being terrific, it's just hard to discern which are which, early on.
About that last line; I'm fine with you replying or not replying. I wish you the best in the continuation of your intellectual journey.
Lastly, I'll note that "White Fragility" is a very sensitive topic that I'm not excited to chat about publicly on forums like this. (In part because my comments on this get downvoted a lot, and in part because this sort of discussion can easily be used as ammunition later on by anyone interested, against either myself or any of the other commenters who respond.) My identity is clearly public, so there is real risk.
I write blog posts on LessWrong that are far less controversial, and am much more happy to publicly discuss those topics.
I think it means the reaction to the book is not really the reaction to the book itself, but rather to the political powers this book represents.
I think it's very likely that you're right here. I do wish this could be said more. It's totally fine to argue against political powers and against potential situations. Ideally this argument would be differentiated around discussion on this particular book/author.
What is more likely to happen, is someone reading the book, and then yelling at me for not agreeing with some idea in the book. Possibly in a situation where this might get me in trouble
I agree that there are lots of ideas in the book that are probably wrong. To be clear, I could also easily imagine many situations where unreasonable people would take either the wrong ideas too far, or take their own spin on this and take those ideas far too far. I imagine that in either case, the results can be highly destructive.
I hope that these sorts of fears don't prevent us from discussing or understanding interesting/useful ideas from such material. I think they make this massively harder, but there might be some decent strategies.
I would be curious if people here have recommendations on how they would like to see these ideas getting discussed in ways that minimize the potential hazards of getting people into trouble for unreasonable reasons or creating tons of anxiety. I think that this book has generated a lot of high-anxiety discussion that's clearly not very effective at delivering better understanding.
I'm really sorry if I hurt or offended you. I assumed that a brief description of where I was at would be preferred to not replying at all. I clearly was incorrect about that.
I disagree with some of your specific implications. I'm fairly sure though that you'd disagree with my responses. I could easily imagine that you've already predicted them, well enough, and wouldn't find them very informative, particularly for what I could write in a few sentences.
This isn't unusual for me. I try to stay out of almost all online discussion. I have things to do, I'm sure you have things to do as well. Online discussion is costly, and it's especially costly when people know very little about each other, and the conversation topic (White Fragility) is as controversial as this one is.
I know almost nothing about you. I feel like I'd have a very difficult time feeling comfortable saying things in ways I can predict you'd be receptive to, or things that you wouldn't actively attack me for. I find that I've had a difficult time modeling people online; particularly people who I barely know. This could easily lead to problems of several different kinds. It's very, very possible that none of this applies to you, but it would take a fair amount of discussion for me to find that out and feel safe with my impressions of you. This also applies to all the other people I don't know, but who might be watching this conversation or jump in at any point.
I feel like both sides of the "White Fragility" debate have some of this going on.
I don't feel like I've exactly seen rationalists on these sides (in large part because the discussion generally hasn't been very prominent), but I've seen lots of related people on both sides, and I expect rationalists to have similar beliefs to those people. (Myself included)
I think from reading some of the other comments here on the LessWrong post, I'm a bit worried that this might be turning into some flame wars.
I'd note that this particular book is probably not the best one to have debates around this issue for. The book seems to be quite a bit more sensationalist, moralistic, and less scientific than I'd really like, which I think makes it very difficult to discuss. This seems like a subject that would attract lots of motte-and-bailey thinking on both sides (the more reasonable claims serving as the motte and the outlandish ones as the bailey, switched on each side).
This is clearly a highly sensitive issue. No one wants to be (publicly especially!) associated with either racism or cancel culture.
Public discussion is far more challenging than private discussion. For example, we simply don't know who is watching these discussions or who might be trying to use anything posted here for antagonistic purposes. (Someone could copy several comments from a commenter and post them without much context, accusing them of either racism or cancel culture.)
Very sadly, public discussion of topics like these right now is thoroughly challenging for many reasons. My guess is that it's often just not worth it.
I'm really not sure what you're trying to do here, but I feel like your phrasing could be interpreted as creating a dichotomy between: 1. People who this impacts (in near mode), who will be very much hurt by this work.
2. Armchair, ivory-tower intellectuals who smirk and find the same sorts of interest in this book that they would get from the next "provocative" Game of Thrones book.
As such, the clear implication (that some readers) might take away is that I sit very much in the camp of (2), that just finds it interesting because the issues don't actually matter much to me. So my opinion probably shouldn't matter as much as those in (1).
It's possible that such a criticism, if it were meant, might be justified! I've been wrong before, many times. But I wanted to be more clear if this is what you were intending before responding.
I'd note that far-mode being-interesting-and-provocative, as I used it, often means that for some people it will be difficult.
Previous discussions introducing atheism/veganism/altruism also really upset a lot of people. They clearly led to a whole lot of change that was incredibly challenging or devastating to different people.
Often interesting-and-provocative could be very bad, like both extreme left-wing and extreme right-wing literature.
Thanks for writing this up. I was a bit nervous when reading the title because I was expecting that this would have been an "edgy takedown", but it wasn't.
I haven't read the book, but I've seen a few talks by Robin DiAngelo, and found them generally reasonable. They at least brought up several points I thought were interesting and provocative, which is a high bar for public presentations.
I then saw numerous reviews from sources I previously deemed decent that treated the book with extreme vitriol.
I found the hate leveled at this book to be frightening. There are a lot of "mediocre popular science books", but this one was truly disdained by large communities. (Right wing ones, of course, but also some somewhat politically neutral or left crowds).
The basic ideas of "racism" being systemic in our culture, but occasionally very difficult to directly notice (especially for those in power), strike me as very similar to ones of implicit biases and similar. The Elephant in the Brain comes to mind. I think the Rationality community and similar should be well equipped to be able to discuss some of these issues.
My impression is that this book isn't rigorous in the ways that most of us here would hope for. It doesn't seem to have nearly as much nuance as I'd probably want, but books with nuance typically don't become popular. It's a bit of a pity; it is an important topic, so it would be great to have work here we could trust to be fairly non-biased (either way) and thoughtful. However, I think I'm still happy that this book was written. I'm sure that Robin DiAngelo has probably faced gigantic amounts of harassment for writing it; perhaps this will lessen the burden for other people doing work in the area.
It seems like there are two big issues here:
1. Racism and power structures are systemic and deeply ingrained into our culture
2. This book presents a scientifically rigorous account of many details around the situation.
My impression is that #1 has a lot of truth to it, but #2 is lacking. In fairness, lots of books are terrible at #2, but this one might be particularly bad (given the broad claims). Unfortunately, I get the impression that a lot of reviews argue that because #2 is poor, #1 is wrong, and that seems cheap to me.
I considered writing my own review on the book on LessWrong to generate discussion, but myself was too wimpy to do so. I was very nervous about possible flame wars from doing so. (This makes me more thankful you've done it.)
For examples of the vitriol I'm talking about, see the Goodreads reviews: https://www.goodreads.com/book/show/43708708-white-fragility?ac=1&from_search=true&qid=sSB9PhQyYt&rank=1