My emotional reaction to the current funding situation
post by Sam F. Brown (sam-4) · 2022-09-09T22:02:46.301Z · LW · GW · 36 comments
This is a link post for https://sambrown.eu/writing/trajan
I’m allowed to spend two days a week at Trajan House, a building in Oxford which houses the Center for Effective Altruism (CEA), along with a few EA-related bodies. Two days is what I asked for, and what I received. The rest of the time I spend in the Bodleian Library of the University of Oxford (about £30/year, if you can demonstrate an acceptable “research need”), a desk at a coworking space in Ethical Property (which houses Refugee Welcome, among other non-EA bodies, for £200/month), Common Ground (a cafe/co-working space which I’ve recommended to people as a place where the staff explicitly explain, if you ask, that you don’t need to order anything to stay as long as you like), a large family house I’m friends with, and various cafes and restaurants where I can sit for hours while only drinking mint tea.
I’m allowed to use the hot-desk space at Trajan House because I’m a recipient of an EA Long Term Future Fund grant, to research Alignment. (I call this “AI safety” to most people, and sometimes have to explain that AI stands for Artificial Intelligence.) I judged that 6 months of salary at the level of my previous startup job, with a small expenses budget, came to about £40,000. This is what I asked for, and what I received.
At my previous job I thought I was having a measurable, meaningful impact on climate change. When I started there, I imagined that I’d go on to found my own startup. I promised myself it would be the last time I’d be employed.
When I quit that startup job, I spent around a year doing nothing-much. I applied to Oxford’s Philosophy BPhil, unsuccessfully. I looked at startup incubators and accelerators. But mostly, I researched Alignment groups. I visited Conjecture, and talked to people from DeepMind and the Future of Humanity Institute. What I was trying to do was discern whether Alignment was “real” or not. Certainly, I decided, some of these people were cleverer than me, more hard-working than me, better-informed. Some seemed deluded, but not all. At the very least, it’s not just a bunch of netizens from a particular online community whose friend earned a crypto fortune.
During the year I was unemployed, I lived very cheaply. I’m familiar with the lifestyle, and – if I’m honest – I like it. Whereas while employed I’d spend my holidays hiring or buying a motorbike and travelling abroad, or scuba diving, now my holidays were spent doing DIY at a friend’s holiday home for free board, or taking a bivi bag to sleep in the fields around Oxford.
The exceptions to this thrift were both EA-related, and both fully-funded. In one, for which my nickname of “Huel and hot-tubs” never caught on, I was successfully reassured by someone I found very smart that my proposed Alignment research project was worthwhile. In the other, I and others were flown out to the San Francisco Bay Area for an all-expenses-paid retreat to learn how to better build communities. My hotel room had a nightly price written on the inside of the door: $500. Surely no one ever paid that. Shortly afterwards, I heard that the EA-adjacent community were buying the entire hotel.
While at the first retreat, I submitted my application for funding. While in Berkeley for the second, I discovered my application was successful. (“I should hire a motorbike, while I’m here.” I didn’t have time, between networking opportunities.) I started calling myself an “independent alignment researcher” to anyone who would listen and let me into offices, workshops, or parties. I fit right in.
At one point, people were writing plans on a whiteboard for how we could spend the effectively-infinite amount of money we could ask for. Somehow I couldn’t take it any more, so I left, crossed the road, and talked to a group of homeless people I’d made friends with days earlier, in their tarp shelter. We smoked cigarettes, and drank beer, and they let me hold their tiny puppy. Then I said my thank-yous and goodbyes, and dived back into work.
Later, I’m on my canal boat in Oxford. For a deposit roughly the price of my flight tickets, I’ve been living on the boat for months. I get an email: the first tranche of my funding is about to be sent over, it’ll probably arrive in weekly instalments. I’ll be able to pay for the boat’s pre-purchase survey.
Then I check my bank account, and it seems like it wasn’t the best use of someone’s time for them to set up a recurring payment, and instead the entire sum has been deposited at once. My current account now holds as much money as my life savings.
I’m surprised by how negative my reaction is to this. I am angry, resentful. After a while I work out why: every penny I’ve pinched, every luxury I’ve denied myself, every financial sacrifice, is completely irrelevant in the face of the magnitude of this wealth. I expect I could have easily asked for an extra 20%, and received it.
A friend later points out that this is irrational. (I met the friend through Oxford Rationalish [? · GW].) Really, he points out, I should have been angry long before. I should have been angry when I realised that there were billionaires in the world at all, not when their reality-warping influence happens to work in my favour. My feelings continue to be irrational.
But now I am funded, and housed, and fed (with delicious complimentary vegan catering), and showered (I’m too sparing of water to shower on the boat). I imagine it will soon be cold enough on the boat that I come to the office to warm up; this will be my first winter. And so all my needs are taken care of. I am safe, while the funding continues. And even afterwards, even with no extension, I’ll surely survive. So what remains is self-actualisation. And what I want to do, in that case, is to explore the meaning of the good life, to break it down into pieces which my physics-trained, programmer’s brain can manipulate and understand. And what I want to do, also, is to understand community, build community, contribute love and care. And, last I thought about these things, I’m exactly where I need to be to ask these questions and develop these skills.
(I realise, in this moment of writing, that I am not building a house and a household, not working with my hands, not designing spaces. I am also not finding a wife.)
I have never felt so obliged, so unpressured. If I produce nothing, before Christmas, then nothing bad will happen. Future funds will be denied, but no other punishment will ensue. If I am to work, the motivation must come entirely from myself.
My writing has been blocked for months. I know what I want to write, and I have explained it in words to people dozens of times. But I don’t believe, on some level, that it’s valuable. I don’t think it’s real, I don’t think that my writing will bring anyone closer to solving Alignment. (This is only partially true.) I have no idea what I could meaningfully offer, in return or exchange. And I can’t bear the thought of doing something irrelevant, of lying, cheating, stealing. Of distracting. Instead, I procrastinate, and – in seeking something measurable – organise an EA-adjacent retreat.
I wander over to the library bookshelves in Trajan House. I pick up a book about community-building, which looks interesting. I see a notice: “Like a book? Feel free to take it home with you. Please just scan this QR code to tell us which book you take :)” I’m pleased: I assume that they’ll ask for my name, so they can remind me later to return the book. This seeming evidence of a high-trust society highlights what I like about EA: everyone is trying to be kind. Then I scan the QR code, and a form loads. But I’m not asked for my name, nor is my email shared with them. They only ask for the title of the book. I realise that – of course – they’re just going to buy a replacement. Of course. It would be ridiculously inefficient to ask for the book back: what if I’m still reading it? What if I’m out of town? And whose time would be used to chase down the book? Much better to solve the problem with money. This isn’t evidence of a high-trust society, after all, only of wealth I still haven’t adjusted to. I submit the form, and pocket the book.
36 comments
Comments sorted by top scores.
comment by habryka (habryka4) · 2022-09-10T00:43:42.758Z · LW(p) · GW(p)
My hotel room had the nightly price written on the inside of the door: $500. Shortly afterwards, I found out that the EA-adjacent community had bought the entire hotel complex.
Huh, I don't know of any retreat in the Bay Area with hotels that would cost $500 a night (the most expensive hotel rooms in Berkeley I've ever booked run to ~$250/night). I also don't know of any hotel complex that has been acquired by any EA-adjacent community in the Bay Area (Lightcone might be buying the Rose Garden Inn, but that is very recent, and those rooms went for as low as ~$70/night, definitely not $500).
I think most of your overall points still stand, but I do wonder whether some kind of miscommunication happened about the actual expenses here.
Replies from: lincolnquirk, jacobjacob, sam-4
↑ comment by lincolnquirk · 2022-09-10T01:07:32.981Z · LW(p) · GW(p)
(Inside-of-door-posted hotel room prices are called "rack rates" and nobody actually pays those. This is definitely a miscommunication.)
↑ comment by jacobjacob · 2022-09-11T17:07:41.374Z · LW(p) · GW(p)
There's at least one hotel in Berkeley with rooms for $500/night or more, and I claim for the better hotels it is quite rare that you can get them for <$200. As evidence, you can select some dates and look at the hotels here: https://maps.app.goo.gl/pMwuNQoZVJzV9Kx77
↑ comment by Sam F. Brown (sam-4) · 2022-09-10T07:27:51.733Z · LW(p) · GW(p)
Thanks @habryka - I've edited the post to make it clearer that it's hearsay and that the purchase is not complete. If you think "hotel complex" is a misleading description for the RGI I'd happily consider an alternative term.
Thanks @lincolnquirk - It's almost certainly a price that no one pays, and I've edited the post to make that clearer, but it did still shock me.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2022-09-10T20:19:32.929Z · LW(p) · GW(p)
Oh, I think the Rose Garden Inn is just a hotel, and I wouldn't think of it as a "hotel complex" (it does have multiple buildings, but they are just part of the same hotel). I think hotel complex makes people think of 100+ rooms, whereas the Rose Garden has like 40 rooms, which is on the smaller side of being a hotel.
I am also confused. Was the rack rate of $500 written on the inside of a Rose Garden Inn room? If so, that would be kind of hilarious. The place is/was super run down and as I said, normal nightly rates went for as low as $70 during the pandemic as one of the cheapest hotels in Berkeley.
Replies from: Raemon, sam-4, jacobjacob
↑ comment by Raemon · 2022-09-11T06:23:08.152Z · LW(p) · GW(p)
fwiw I don't actually have a strong intuitive sense of what a hotel complex is supposed to be (if it's a technical term I didn't know it before), and would have thought Rose Garden Inn was reasonably described as a hotel complex by virtue of comprising multiple related buildings (which is what I normally think 'complex' means in architecture settings).
Googling "hotel complex" doesn't actually return a clear definition, and the examples that come up don't feel like they definitively point towards something much bigger than RGI. (seems like reasonable people could disagree tho)
But "building complex" gets defined as:
A complex is a group of buildings designed for a particular purpose, or one large building divided into several smaller areas. [...]
which on the margin I think implies somewhat larger buildings, but the connotations for me sound like something "at least about as big as Rose Garden Inn."
↑ comment by Sam F. Brown (sam-4) · 2022-09-11T06:09:21.272Z · LW(p) · GW(p)
The multiple buildings made it feel like a complex to me, but I've changed the wording to simply "hotel".
Yes, I'm now questioning my memory, but the rack-rate was on the inside of the RGI room I was staying in. I forget the room number, but feel free to DM me if you'd like a description of which one it was.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2022-09-11T18:41:43.712Z · LW(p) · GW(p)
Cool, yeah, I believe you. That is, as I said, kind of hilarious given the current state of the property (and the more recent room prices it managed to fetch).
↑ comment by jacobjacob · 2022-09-11T17:00:41.433Z · LW(p) · GW(p)
I'm not sure, but I think I might have seen a sign in a rose garden room with a $500 rack rate. Second floor, building B. I found it quite funny given how far it was from the current state of the place; it read like a decades-old relic of what the Inn once was.
comment by Mitchell_Porter · 2022-09-10T04:12:02.326Z · LW(p) · GW(p)
I don't get what the intent of this post is. You had a very elite kind of self-actualization (working at a climate change startup in Oxford), you gave it up and got a grant to be an effective altruist, you were disconcerted by how much money was being thrown around in this new environment, but got used to it, and...?
Replies from: sam-4
↑ comment by Sam F. Brown (sam-4) · 2022-09-10T07:29:32.573Z · LW(p) · GW(p)
I don't really know what the point is either. I think I'm just trying to share how I feel.
Replies from: M. Y. Zuo
↑ comment by M. Y. Zuo · 2022-09-11T18:22:17.217Z · LW(p) · GW(p)
It does come across as trying to make a point, especially since you ended on the anecdote about the book lending system being indicative of wealth and not necessarily high trust.
Most reasonable folks would at least draw the conclusion that you were trying to express disappointment that the culture wasn’t as high trust as you had imagined, that there are organizational inefficiencies, etc.
comment by JBlack · 2022-09-10T03:50:37.706Z · LW(p) · GW(p)
Your work may well be some small part of the most important work that humanity has ever conducted in its entire existence. If alignment turns out to matter at all, which seems likely, then the modal outcomes are: incredible advances worth far more than merely trillions of dollars, or total annihilation of everything we care about.
You've become part of something very much larger than yourself.
Yes, your prior sacrifices pale in comparison to the resources you've been allocated to work on this problem. If anything the difference in scale is not nearly enough. If your work contributes 0.0001% expected payoff in reaching the better future rather than the worse one, then it is worth at least many millions.
The part that makes me more angry is that AI capability work is being funded vastly more, because most of the gains in the medium term are privatized while all of the (immense!) risk is socialized.
Replies from: sam-4
↑ comment by Sam F. Brown (sam-4) · 2022-09-10T07:31:35.562Z · LW(p) · GW(p)
I appreciate the encouragement, and I do still agree with my decision to attempt a 6 month exploration to see whether I can do meaningful alignment work.
comment by Sam F. Brown (sam-4) · 2022-09-10T10:46:46.853Z · LW(p) · GW(p)
The book, in case anyone is wondering, is The Art of Community by Charles H. Vogl, and is very good. I'm grateful to the CEA.
comment by Erich_Grunewald · 2022-09-11T14:51:41.217Z · LW(p) · GW(p)
There was a flurry of posts about this on the EA Forum this spring. Since no one (I think) mentioned them yet, here are the highest karma ones (in chronological order):
comment by Elizabeth (pktechgirl) · 2022-09-30T19:57:56.012Z · LW(p) · GW(p)
Data point: I've had a few times now where people asked me to apply for money on very short deadlines but implied lower rigor. Unless I had a shovel-ready project I happened to not have funding for, the process has always felt bad and I've ended up withdrawing. Somehow it always starts as "blue sky slush fund, just give us some ideas" and ends with commitments to do pretty particular things I haven't thought out well and that have been optimized more for legibility and ease of cost estimates than value.
Everyone involved meant well and I think partially this is caused by a lack of skill, which can be remedied. Now that I've noticed and named the pattern I expect to be less susceptible to it in the future. But I do think this reflects a systemic issue that I hope we fix long term.
comment by romeostevensit · 2022-09-09T23:03:05.071Z · LW(p) · GW(p)
Is there any empirical evidence for how to run grant programs? I've never heard it discussed either in terms of existing results or as a high impact area of inquiry.
comment by Dunning K. · 2022-09-10T01:42:28.275Z · LW(p) · GW(p)
I've been in a similar situation and have had similar feelings. Is this really the most efficient use of the money? Surely reducing comforts such as catered food by a little bit can't have such a huge impact on added productivity?
Replies from: sam-4
↑ comment by Sam F. Brown (sam-4) · 2022-09-10T08:35:02.886Z · LW(p) · GW(p)
I actually think that catering of high enough quality that people don't leave the premises for meals is a very efficient use of money. And there's a good argument to be made that the most efficient use of money isn't the most effective one.
But also, thanks :)
comment by Daniel Paleka · 2022-09-10T06:57:10.946Z · LW(p) · GW(p)
I am very sorry that you feel this way. I think it is completely fine for you, or anyone else, to have internal conflicts about your career or purpose. I hope you find a solution to your troubles in the following months.
Moreover, I think you did a useful thing, raising awareness about some important points:
- "The amount of funding in 2022 exceeded the total cost of useful funding opportunities in 2022."
- "Being used to do everything in Berkeley, on a high budget, is strongly suboptimal in case of sudden funding constraints."
- "Why don't we spend less money and donate the rest?"
Epistemic status for what follows: medium-high for the factual claims, low for the claims about potential bad optics. It might be that I'm worrying about nothing here.
However, I do not think this place should be welcoming of posts displaying bad rhetoric and epistemic practices.
Posts like this can hurt the optics of the research done in the LW/AF extended universe. What does a prospective AI x-safety researcher think when they get referred to this site and see this post above several alignment research posts?
EDIT: The above paragraph was off. See Ben's excellent reply [LW(p) · GW(p)] for a better explanation of why anyone should care.
I think this place should be careful about maintaining:
- the epistemic standard of talking about falsifiable things;
- the accepted rhetoric being fundamentally honest and straightforward, and always asking "compared to what?" before making claims;
- the aversion to present uncertainties as facts.
For some examples:
My hotel room had the nightly price written on the inside of the door: $500. Shortly afterwards, I found out that the EA-adjacent community had bought the entire hotel complex.
I tried for 15 minutes to find a good faith reading of this, but I could not.
Most people would read this as "the hotel room costs $500 and the EA-adjacent community bought the hotel complex of which that hotel is a part", while being written in a way that only insinuates and does not commit to meaning exactly that. Insinuating bad-optics facts while maintaining plausible deniability, without checking the facts, is a horrible practice, usually employed by politicians and journalists.
The poster does not deliberately lie, but this is not enough when making a "very bad optics" statement that sounds like this one. At any point, they could have asked for the actual price of the hotel room, or about the condition of the actual hotel that might be bought.
I have never felt so obliged, so unpressured. If I produce nothing, before Christmas, then nothing bad will happen. Future funds will be denied, but no other punishment will ensue.
This is true. But it is not much different from working a normal software job. The worst thing that can happen is getting fired after not delivering for several months. Some people survive years coasting until there is a layoff round.
An important counterfactual for a lot of people reading this is a PhD degree.
There is no punishment for failing to produce good research, except being dropped from the program after a few years.
After a while I work out why: every penny I’ve pinched, every luxury I’ve denied myself, every financial sacrifice, is completely irrelevant in the face of the magnitude of this wealth. I expect I could have easily asked for an extra 20%, and received it.
This might be true. Again, I think it would be useful to ask: what is the counterfactual?
All of this applies to anyone who starts working for Google or Facebook, if they were poor beforehand.
This feeling (regretting saving and not spending money) is incredibly common among people who have good careers.
I would suggest going through the post with a cold head and removing parts which are not up to the standards.
Again, I am very sorry that you feel like this.
↑ comment by Ben Pace (Benito) · 2022-09-10T07:12:58.499Z · LW(p) · GW(p)
I agree with the focus on epistemic standards, and I think many of the points here are good. I disagree that this is the primary reason to focus on maintaining epistemic standards:
Posts like this can hurt the optics of the research done in the LW/AF extended universe. What does a prospective AI x-safety researcher think when they get referred to this site and see this post above several alignment research posts?
I think we want to focus on the epistemic standards of posts so that we ourselves can trust the content on LessWrong to be honestly informing us about the world. In most places you have to watch your back way more than on LessWrong (e.g. Twitter, Reddit, Facebook). I don't currently value the question "what does this look like to other people" half as much as I care about the question "can I myself trust the content on LessWrong".
(Though, I admit, visibly having strong truth-seeking norms is a good way to select for the sorts of folks who will supply truth and not falsehood.)
Replies from: Daniel Paleka
↑ comment by Daniel Paleka · 2022-09-10T07:45:18.969Z · LW(p) · GW(p)
I somewhat agree, although I obviously put a bit less weight on your reason than you do. Maybe I should update my confidence in the importance of what I wrote to medium-high.
Let me raise the question of continuously rethinking incentives on LW/AF, for both Ben's reason and my original reason.
The upvote/karma system does not seem like it incentivizes high epistemic standards and top-rigor posts, although I would need more datapoints to make a proper judgement.
Replies from: Morpheus
↑ comment by Morpheus · 2022-09-10T21:07:39.616Z · LW(p) · GW(p)
Rigor as in meticulously researching everything doesn't seem like the best thing to strive for [LW · GW]? For what it's worth, I think the author actually did a good job of framing this post, so I mostly took it as "this is what this feels like" and less "this is what the current funding situation ~actually~ is". The karma system of the comments did a great job at surfacing important facts like the hotel price.
↑ comment by Lukas_Gloor · 2022-09-10T09:24:06.663Z · LW(p) · GW(p)
This might be true. Again, I think it would be useful to ask: what is the counterfactual?
All of this applies to anyone who starts working for Google or Facebook, if they were poor beforehand.
You're interpreting this as though they're making evaluative, all-things-considered judgments, but it seems to me that the OP is reporting feelings.
(If this post was written for EA's criticism and red teaming contest, I'd find the subjective style and lack of exploring of alternatives inappropriate. By contrast, for what it aspires to be, I thought the post was extremely good at describing a certain mood. I'm usually a bit bad at inhabiting the experience of people with frugality intuitions / scrupulosity about spending, but this one seemed to evoke something that helps me understand and relate.)
And re Google or Facebook, the juxtaposition of "we're doing this for altruistic reasons" and "we're swimming in luxury" is extra jarring for some people.
↑ comment by Elizabeth (pktechgirl) · 2022-09-10T21:56:39.360Z · LW(p) · GW(p)
Most people would read this as "the hotel room costs $500 and the EA-adjacent community bought the hotel complex of which that hotel is a part", while being written in a way that only insinuates and does not commit to meaning exactly that.
I disagree both that posts that are clearly marked as sharing unendorsed feelings in a messy way need to be held to a high epistemic standard, and that there is no good faith interpretation of the post's particular errors. If you don't want to see personal posts I suggest disabling their appearance on your front page, which is the default anyway.
Replies from: Daniel Paleka
↑ comment by Daniel Paleka · 2022-09-10T23:18:08.986Z · LW(p) · GW(p)
This is a mistake on my own part that actually changes the impact calculus, as most people looking into AI x-safety on this place will not actually ever see this post. Therefore, the "negative impact" section is retracted.[1] I point to Ben's excellent comment [LW(p) · GW(p)] for a correct interpretation of why we still care.
I do not know why I was not aware of this "block posts like this" feature, and I wonder if my experience of this forum was significantly more negative as a result of me accidentally clicking "Show Personal Blogposts" at some point. I did not even know that button existed.
No other part of my post is retracted. In fact, I'd like to reiterate a wish for the community to karma-enforce [2] the norms of:
- the epistemic standard of talking about falsifiable things;
- the accepted rhetoric being fundamentally honest and straightforward, and always asking "compared to what?" before making claims;
- the aversion to present uncertainties as facts.
Thank you for improving my user experience of this site!
[1] I am now slightly proud that my original disclaimer precisely said that this was the part I was unsure of the most.
[2] As in, I wish to personally be called out on any violations of the described norms.
↑ comment by Elizabeth (pktechgirl) · 2022-09-11T07:14:46.901Z · LW(p) · GW(p)
Personal is a special tag in various ways, but you can ban or change weightings on any tag. You can put a penalty on a tag so you see it less, but still see very high-karma posts, or give tags a boost so even low-karma posts linger on your list.
↑ comment by Sam F. Brown (sam-4) · 2022-09-10T08:09:21.182Z · LW(p) · GW(p)
Thanks for being open about your response, I appreciate it and I expect many people share your reaction.
I've edited the section about the hotel room price/purchase, where people have pointed out I may have been incorrect or misleading.
This definitely wasn't meant to be a hit piece, or misleading "EA bad" rhetoric.
On the point of "What does a prospective AI x-safety researcher think when they get referred to this site and see this post above several alignment research posts?" - I think this is a large segment of my intended audience. I would like people to know what they're getting themselves in for, so they can make an informed decision.
I think that a lot of the point of this post is to explore and share the dissonance between what "thinks" right, and what "feels" right. The title of the piece was intended to make it clear that this is about an emotional, non-rational reaction. It's styled more as a piece of journalism than as a scientific paper, because I think that that's the best way to communicate the emotional reaction which is the main focus of the piece.
↑ comment by Raemon · 2022-09-11T06:51:56.450Z · LW(p) · GW(p)
Fwiw I disagree with this. I'm a LW mod. Other LW mods haven't talked through this post yet and I'm not sure if they'd all agree, but, I think people sharing their feelings is just a straightforwardly reasonable thing to do.
I think this post did a reasonable job framing itself as not-objective-truth, just a self report on feelings. (i.e. it's objectively true about "these were my feelings", which is fine).
I think the author was straightforwardly wrong about Rose Garden Inn being $500 a night, but that seems like a simple mistake that was easily corrected. I also think it is straightforwardly correct that EA projects in San Francisco spend money very liberally, and if you're in the middle of the culture shock of realizing how much money people are spending and haven't finished orienting, $500/night is not an unbelievable number.
(It so happens that there's been at least one event with lodging that I think averaged $500/person/night, although this was including other venue expenses and was a pretty weird edge case of an event that happened for weird contingent reasons. Meanwhile in Berkeley there have been plenty of $230ish/night hotel rooms used for events, which is not $500 but still probably a lot more than Sam was expecting.)
I do agree with you that the implied frame of:
"After a while I work out why: every penny I’ve pinched, every luxury I’ve denied myself, every financial sacrifice, is completely irrelevant in the face of the magnitude of this wealth. I expect I could have easily asked for an extra 20%, and received it."
is, in fact, an unhelpful frame. It's important for people to learn to orient in a world where money is available and learn to make use of more money. (Penny-pinching isn't the right mindset for EA – even before longtermist billionaires flooded the ecosystem, I still think it was generally a better mindset for people to look for strategies that would get them enough surplus money that they didn't have to spend cognition penny-pinching.)
But, just because penny-pinching isn't the right mindset for EA in 2022, that doesn't mean that the amount of wealth isn't... just a pretty disorienting situation. I expect lots of people to experience cultural whiplash about this. I think posts like this are a reasonable part of processing it. I also think there are issues not articulated here that come from a sudden influx of money that are potentially pretty bad (i.e. attracting grifters trying to scam us, reducing signal/noise, etc).
I think people writing up their emotional experiences (flagging them as such) is an important source of information about this.
I do of course want people to write posts about this from other perspectives as well. And I think it'd be bad if the implied frame of this post became the default frame.
Replies from: Raemon
↑ comment by Raemon · 2022-09-11T07:40:41.730Z · LW(p) · GW(p)
(that all said, after some reflection I did weak downvote the OP because I thought 98 karma felt a bit too high. ((I'm someone who thinks it's fine to vote based on the total karma, not just on whether I thought it was overall good or bad)). I would feel like the site-karma-health was off if this got like 200 karma, and IMO an emotional report like this should get, like, a respectable 40-80-ish karma, but if it's getting over 100 I expect that's largely coming from people who are applauding the general concept of wealth-is-sinful or something, and I do worry about the cultural effects of that)
comment by ojorgensen · 2022-09-09T23:17:02.367Z · LW(p) · GW(p)
I really like this post! I can't see whether you've already cross-posted this to the EA forum, but it seems valuable to have this there too (as it is focussed on the EA community).
Replies from: sam-4
↑ comment by Sam F. Brown (sam-4) · 2022-09-09T23:35:36.422Z · LW(p) · GW(p)
I'm happy for it to be cross-posted there, but I'm not sure how to do that myself. If anyone else wants to, feel free. (Edit: I'm confused by the downvote. Is this advising against cross-posting? Or suggesting that I should work out how to and then do it myself?)
Replies from: ChristianKl
↑ comment by ChristianKl · 2022-09-11T08:39:13.572Z · LW(p) · GW(p)
Cross-posting essentially means just copy-pasting.
Replies from: sam-4
↑ comment by Sam F. Brown (sam-4) · 2022-09-11T12:50:57.270Z · LW(p) · GW(p)
Thanks, cross-posted: https://forum.effectivealtruism.org/posts/6xX96ZqFtH5n7mchW/my-emotional-reaction-to-the-current-funding-situation