Grant applications and grand narratives
post by Elizabeth (pktechgirl) · 2023-07-02T00:16:25.129Z · LW · GW · 22 comments
The Lightspeed application asks: “What impact will [your project] have on the world? What is your project’s goal, how will you know if you’ve achieved it, and what is the path to impact?”
LTFF uses an identical question, and SFF puts it even more strongly (“What is your organization’s plan for improving humanity’s long term prospects for survival and flourishing?”).
I’ve applied to all three of these at various points, and I’ve never liked this question. It feels like it wants a grand narrative of an amazing, systemic project that will measurably move the needle on x-risk. But I’m typically applying for narrowly defined projects, like “Give nutrition tests to EA vegans and see if there’s a problem”. I think this was a good project. I think this project is substantially more likely to pay off than underspecified alignment strategy research, and arguably has as good a long tail. But when I look at “What impact will [my project] have on the world?” the project feels small and sad. I feel an urge to make things up, and to express far more certainty about far more impact than I believe. Then I want to quit, because lying is bad but listing my true beliefs feels untenable.
I’ve gotten better at this over time, but I know other people with similar feelings, and I suspect it’s a widespread issue (I encourage you to share your experience in the comments so we can start figuring that out).
I should note that the pressure for grand narratives has its good points; funders are in fact looking for VC-style mega-hits. I think that narrow projects are underappreciated, but for purposes of this post that’s beside the point: I think many grantmakers are undercutting their own preferred outcomes by using questions that implicitly push for a grand narrative. I think they should probably change the form, but I also think we applicants can partially solve the problem by changing how we interact with the current forms.
My goal here is to outline the problem, gesture at some possible solutions, and create a space for other people to share data. I didn’t think about my solutions for very long; I’m undoubtedly missing a bunch, and what I do have still needs workshopping, but it’s a place to start.
More on the costs of the question
Pushes away the most motivated people
Even if you only care about subgoal G instrumentally, G may be best accomplished by people who care about it for its own sake. Community building (real building, not a euphemism for recruitment) benefits from knowing the organizer cares about participants and the community as people and not just as potential future grist for the x-risk mines.* People repeatedly recommended a community builder friend of mine apply for funding, but they struggled because they liked organizing for its own sake, and justifying it in x-risk terms felt bad.
[*Although there are also downsides to organizers with sufficiently bad epistemics.]
Additionally, if G is done by someone who cares about it for its own sake, then it doesn’t need to be done by someone who’s motivated by x-risk. Highly competent, x-risk-motivated people are rare and busy, and we should be delighted by opportunities to take things off their plate.
Vulnerable to grift
You know who’s really good at creating exactly the grand narrative a grantmaker wants to hear? People who feel no constraint to be truthful. You can try to compensate for this by looking for costly signals of loyalty or care, but those have their own problems.
Punishes underconfidence
Sometimes people aren’t grifting, they really really believe in their project, but they’re wrong. Hopefully grantmakers are pretty good at filtering out those people. But it’s fairly hard to correct for people who are underconfident, and impossible to correct for people who never apply because they’re intimidated.
Right now people try to solve the second problem by loudly encouraging everyone to apply for their grants. That creates a lot of work for evaluators, and I think it’s bad for the people with genuinely mediocre projects who will never get funding. You’re asking them to burn their time so that you don’t miss someone else’s project. Having a form that allows for uncertainty and modest goals is a more elegant solution.
Corrupts epistemics
Not that much. But I think it’s pretty bad if people are forced to choose between "play the game of exaggerating impact" and "go unfunded". Even if the game is in fact learnable, it's a bad use of their time and weakens the barriers to lying in the future.
Pushes projects to grow beyond their ideal scope
Recently I completed a Lightspeed application for a lit review on stimulants. I felt led by the form to create a grand narrative of how the project could expand, including developing a protocol for n of 1 tests so individuals could tailor their medication usage. I think that having that protocol would be great and I’d be delighted if someone else developed it, but I don’t want to develop it myself. I noticed the feature creep and walked it back before I submitted the form, but the fact that the form pushes this is a cost.
This one isn’t caused by the impact question alone. The questions asking about potential expansion are a much bigger deal, but would also be costlier to change. There are many projects and organizations where “what would you do with more money?” is a straightforwardly important question.
Rewards cultural knowledge independent of merit
There’s nothing stopping you from submitting a grant with the theory of change “T will improve EA epistemics”, and not justifying past that. I did that recently, and it worked. But I only felt comfortable doing that because I had a pretty good model of the judges and because it was a Lightspeed grant, which explicitly says they’ll ask you if they have follow-up questions. Without either of those I think I would have struggled to figure out where to stop explaining. Probably there are equally good projects from people with less knowledge of the grantmakers, and it’s bad that we’re losing those proposals.
Brainstorming fixes
I’m a grant-applier, not a grant-maker. These are some ideas I came up with over a few hours. I encourage other people to suggest more fixes, and grant-makers to tell us why they won’t work or what constraints we’re not aware of.
- Separate “why do you want to do this?” or “why do you think this is good?” from “how will this reduce x-risk?”. Just separating the questions will reduce the epistemic corruption.
- Give a list of common instrumental goals that people can treat as terminal for the purpose of this form. They still need to justify the chain between their action and that instrumental goal, but they don’t need to justify why achieving that goal would be good.
- E.g. “improve epistemic health of effective altruism community”, or “improve productivity of x-risk researchers”.
- This opens opportunities for goodharting, or for imprecise description leaving you open to implementing bad versions of good goals. I think there are ways to handle this that end up being strongly net beneficial.
- I would advocate against “increase awareness” and “grow the movement” as goals. Growth is only generically useful when you know what you want the people to do. Awareness of specific things among specific people is a more appropriate scope.
- Note that the list isn’t exhaustive, and if people want to gamble on a different instrumental goal that’s allowed.
- Let applicants punt to others to explain the instrumental impact of what is to them a terminal goal.
- My community organizer friend could have used this. Many people encouraged them to apply for funding because they believed the organizing was useful to x-risk efforts. Probably at least a few were respected by grantmakers and would have been happy to make the case. But my friend felt gross doing it themselves, so it created a lot of friction in getting very necessary financing.
- Let people compare their projects to others. I struggle to say “yeah, if you give me $N I will give you M microsurvivals”. How could I possibly know that? But it often feels easy to say “I believe this is twice as impactful as this other project you funded”, or “I believe this is in the nth percentile of grants you funded last year”.
- This is tricky because grants don’t necessarily mean [EA · GW] a funder believes a project is straightforwardly useful. But I think there’s a way to make this doable.
- E.g. funders could give examples with percentiles. I think Open Phil did something like this in the last year, although I can’t find it now. The lower percentiles could be hypothetical, to avoid implicit criticism.
- Lightspeed’s implication that they’ll ask follow-up questions is very helpful. With other forms there’s a drive to cover all possible bases very formally, because I won’t get another chance. With Lightspeed it felt available to say “I think X is good because it will lead to Y”, and let them ask me why Y was good if they don’t immediately agree.
- When asking about impact, lose the phrase “on the world”. The primary questions are what the goal is, how they’ll know if it’s accomplished, and what the feedback loops are. You can have an optional question asking for the effects of meeting the goal.
- I like the word "effects" more than "impact", which is a pretty loaded term within EA and x-risk.
- A friend suggested asking “why do you want to do this?”, and having “look, I just like organizing social gatherings” be an acceptable answer. I worry that this will end up being a fake question where people feel the need to create a different grand narrative about how much they genuinely value their project for its own sake, but maybe there’s a solution to that.
- Maybe have separate forms for large ongoing organizations, and narrow projects done by individuals. There may not be enough narrow projects to justify this, it might be infeasible to create separate forms for all types of applicants, but I think it’s worth playing with.
- [Added 7/2]: Ask for 5th/50th/99th/99.9th percentile outcomes, to elicit both dreams and outcomes you can be judged for failing to meet.
- [Your idea here]
I hope the forms change to explicitly encourage things like the above list, but I don’t think applicants need to wait. Grantmakers are reasonable people who I can only imagine are tired of reading mediocre explanations of why community building is important. I think they’d be delighted to be told “I’m doing this because I like it, but $NAME_YOU_HIGHLY_RESPECT wants my results” (grantmakers: if I’m wrong please comment as soon as possible).
Grantmakers: I would love it if you would comment with any thoughts, but especially what kinds of things you think people could do themselves to lower the implied grand-narrative pressure on applications. I'm also very interested in why you like the current forms, and what constraints shaped them.
Grant applicants: I think it will be helpful to the grantmakers if you share your own experiences, how the current questions make you feel and act, and what you think would be an improvement. I know I’m not the only person who is uncomfortable with the current forms, but I have no idea how representative I am.
Comments sorted by top scores.
comment by Raemon · 2023-07-02T01:41:18.186Z · LW(p) · GW(p)
Corrupts epistemics
Not that much. But I think it’s pretty bad if people are forced to choose between "play the game of exaggerating impact" and "go unfunded". Even if the game is in fact learnable, it's a bad use of their time and weakens the barriers to lying in the future.
Huh, the way you phrase this feels like a minor point, and maybe was less salient for you, but I think this is maybe the thing I was most worried about.
I think the epistemic corruption is subtle, and possibly small on an individual basis, but I think good epistemics are extremely rare/precious and very hard to maintain, and maintaining epistemics in the face of systemic economic pressure is particularly hard. This feels like a contender to me for "civilization-wide top-tier problem worth solving", and it feels worth it both locally for EA to avoid the problem for themselves, as well as hopefully being an experiment that can radiate outward into other nearby grantmaking systems.
↑ comment by Elizabeth (pktechgirl) · 2023-07-02T02:27:02.343Z · LW(p) · GW(p)
I think it's the most important but have no idea how to close the inferential distance with people who don't already agree, and this post was a rush job.
EDIT: ended up figuring out an encapsulation I like [LW(p) · GW(p)]
comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-07-09T00:01:39.168Z · LW(p) · GW(p)
Strong agree. Glad somebody is articulating this. There is far too much emphasis on megalomaniac grand plans that are light on details, and not enough on projects that are robustly good, process-oriented, tractable, and, most importantly, based on highly specific ideas & expertise.
comment by Seth Herd · 2023-07-04T19:09:29.330Z · LW(p) · GW(p)
It seems like the primary problem you point to here is that asking for a grand narrative produces pressure to inflate the importance of your proposed project.
Inflating the importance of your project is a fundamental pressure in any granting situation. Grantors fundamentally want projects that have a large impact per dollar.
So getting rid of a grand narrative requirement isn't going to solve that problem.
The counterbalance is grantors looking for proposals that are epistemically careful in estimating the impact of their project. The organizations you mention are almost certainly careful to have a well-tuned bullshit detector as part of their review process. This disincentivizes trying to fool the org, and therefore trying to fool yourself. It's not perfect.
The other factor here is that if an organization is giving grants for projects that "[improve ] humanity’s long term prospects for survival and flourishing...” then not all projects are eligible for that funding. Which sucks for those projects.
Your project of measuring vegans' biomarkers could qualify. A large portion of people doing survival-odds-enhancing projects are rationalists; a significant fraction of the rationalist community is vegan; if vegans aren't healthy, they'll have less energy and intelligence; that marginally affects their ability to do all of the other potentially highly impactful projects. It's a small but real change in our odds of survival and flourishing. Estimating those odds accurately and honestly could either win you a grant, or convince you that this project isn't actually in the category of shifting our odds of a good future, and maybe you should do something else or apply elsewhere for funding.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-07-05T06:32:05.440Z · LW(p) · GW(p)
Furthering this line of argument, I can think of specific projects that do make it easy to supply a convincing grand narrative about their impact on the world, including technical AI safety research, wastewater monitoring for potential pandemics, institutions working on improved epistemics, and work to enhance human intelligence and decision-making. Whether or not a project lends itself to a grand narrative does, in fact, suggest to me that it's more likely to be able to achieve impact on the world scale. And many of these projects seem concrete enough to me that it's easy to say whether or not the grand narrative seems reasonable.
The activity of helping vegans get tested for nutritional deficiencies doesn't fit a grand narrative for world-scale impact. But if the idea was to work on making concierge medicine especially available to ultra-high performers in the field of x-risk in order to ensure that the Paul Christianos of the world face minimal health impediments to their research, I think that would lend itself to a grand narrative that might be compelling to grantmakers. It also suggests a wider and different range of options for how one might pivot if nutritional testing for vegans wasn't feeling like it was achieving enough impact.
I also think there's an analogy to be drawn here between startups and those applying for grants. One of the most common reasons startups fail is that they make a product people don't want to buy, and never pivot. One of the things venture capital and startup advisors can do is counsel startups on how to make a product the market wants. It seems like there's an opportunity here to help energetic, self-starting, smart people connect their professional interests with the kinds of world-impact grand narratives that grantmakers find compelling. EA and 80,000 Hours do this to some extent, but there's often a sense in which they're trying to recruit people into pre-established molds or simply headhunt people who have it all figured out already. Helping people who already have compelling but small-scale projects think bigger and adapt their projects into things that might actually have world-scale potential seems useful and perhaps under-supplied.
↑ comment by Elizabeth (pktechgirl) · 2023-07-08T05:28:47.537Z · LW(p) · GW(p)
The vegan nutrition project led to a lot of high-up EAs getting nutrition tested and some supplement changes, primarily due to seeking the tests out themselves after the iron post. If I was doing the project again, I'd prioritize that post and similar over offering testing. But I didn't know when I started that iron deficiencies would be the standout issue, and even if I had I would have felt uncomfortable listing "impact by motivating others" as a plan. What if I wrote something and nobody cared? I did hope to create a virtuous cycle via word of mouth on the benefits of test-and-supplement, which has mostly not happened yet.
You can argue it was a flaw in me that rendered me incapable of imagining that outcome and putting it on a grant. More recently I wrote a grant that had "motivate medical change via informative blog posts" at its core, so clearly I don't think doing so is inherently immoral. But the flaw that kept me from predicting that path before I'd actually done it is connected to some of my virtues, and specifically the virtues that make me good at the quantified lit review work.
Or my community organizer friend. There are advantages to organizers who care deeply about x-risk and see organizing as a path to reducing it. But there are serious disadvantages [EA · GW] as well.
I think my model might be [good models + high impact vision grounded in that model] > [good models alone + modest goals] > [mediocre model + grand vision], where good model means both reasonably accurate and continually improving based on feedback loops inherent in the project, with the latter probably being more important. And I think that if you reward grand vision too much, you both select for and cause worse models with less self-correction.
Of the items you listed
technical AI safety research, wastewater monitoring for potential pandemics, institutions working on improved epistemics, and work to enhance human intelligence and decision-making
I would only count wastewater monitoring as a project. Much technical alignment research counts as a project, but "do technical alignment research" is a stupid plan; you need to provide specifics. The other items on the list are goals. They're good goals, but they're not plans and they're not projects, and I absolutely would value a solid plan with a modest goal over a vague plan to solve any of these.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-07-08T07:22:51.744Z · LW(p) · GW(p)
Thanks for the reply, Elizabeth. I agree with pretty much everything you say here. I particularly like this part:
[good models + high impact vision grounded in that model] > [good models alone + modest goals] > [mediocre model + grand vision]
I think this is a foundational starting point for thinking about the process of spinning up any impactful project. It also helps me see the wisdom in what you are arguing here more clearly, for two reasons:
- It shows how selecting for grand vision can result in selecting for heavily mediocre models, particularly if there is a) no equally stringent selection for good models and b) as we might expect, far more grand visions + mediocre models than good models + high-impact visions grounded in that model. Since I expect most people will agree that in the general population, (b) is true, the crux of any disagreement may depend on how successfully good models are already selected for by grantmakers, as well as how self-selecting the population of applicants is for having good models. I don't have an opinion on that subject, and I think that professional grantmakers and people who have looked at a lot of grant-winners and grant-losers would be the relevant experts. I hope they do deign to comment on your post.
- When people shift from pursuing a grand narrative to pursuing a good model, this can come with the dissolution of the formerly motivating grand narrative, along with envy or a sense of discouragement when comparing oneself with those who have achieved a high impact vision grounded in a good model. When we read statements such as Hamming's, which ask why you're not working on the obviously most important project in your field, we can feel discouraged or as if we are on the wrong track if our instant answer is anything other than "but I am working on the most important problem!" This model offers an alternative point of view, which is that the first step in getting to your Hamming problem is to stop pursuing a groundless grand vision and to pursue a good model, even if it's not obvious what the ultimate benefit is. Figuring out how to give that journey some structure so that it doesn't become an exercise in self-justification or a recipe for aimless wandering seems good, but I still think it is a step in the right direction over clinging to one's initial grand vision. I would rather see a population of people chasing good models, sometimes aimlessly, than a population of people chasing grand visions, since I expect this would be even more aimless.
Some minor notes:
- It does sound like discomfort with articulating the most apt high-impact vision was part of why you were reluctant to list it. I'm not sure if that was emotional discomfort or intellectual discomfort, but I would not be surprised if in general, lack of emotional confidence to identify and articulate the most apt high-impact vision when one has a good model already is a slowdown for some people. I have noticed that ChatGPT has been pretty good at helping me to articulate the high-impact vision in suitably ringing prose when I have a good model already - I used it to write the copy for a database website I created, because it was a lot harder for me to write "mission statement-esque" prose than to write software, clean the data, and build the website. The good model was much easier than articulating the high-impact vision even though I had the vision. I don't know if that's a "flaw" exactly - the point is just to distinguish "I have no idea what the apt high-impact vision for my extant good model is" from "I have an idea of what the high-impact vision is, but I'm not sure" from "I know what the high-impact vision is, but I'm uncomfortable being loud and proud about it or don't know how to put it into words effectively."
- Regarding my list of putative projects, I agree with you that only the wastewater monitoring project is a project, per se. The rest of the ones I listed are more themes for projects, but I presume there are a number of concrete projects within each theme that could be listed - I am simply relatively unfamiliar with these areas and so I didn't have a bunch of specific examples at the top of my head.
↑ comment by Elizabeth (pktechgirl) · 2023-07-08T22:38:40.488Z · LW(p) · GW(p)
I'm glad it was so helpful, thanks for prompting me to formalize it and for providing elaborations. Both of your points feel important to me.
I'm glad GPT worked for you but I think it's a risky maneuver and I'm scared of the world where it is the common solution to this problem. The push for grand vision doesn't just make models worse, it hurts your ability to evaluate models as well. GPT is designed to create the appearance of models where none exists, and I want it as far from the grantmaking process as possible. I think solutions like "ask for a 0.1-percentile vision" solve this more neatly.
I'm no longer quite sure what you were aiming for with the first paragraph in your first comment. I think projects with the goal of "improve epistemics" are very nearly guaranteed to be fake. Not quite 100% - I sent in a grant with that goal myself recently, and I have high hopes for CE's new Research Training Program [EA · GW]. But a stunning number of things had to go right for my project to feel viable to me. For one, I'd already done most of the work and it was easy to lay out the remaining steps (although they still ballooned and I missed my deadline).
It also feels relevant that I didn't backchain that plan. I'd had a vague goal of improving epistemics for years without figuring out anything more useful than "be the change I wanted to see in the world". The useful action only came when I got mad about a specific object-level thing I was investigating for unrelated reasons.
PS. I realize that using my projects as examples places you in an awkward position. I officially give you my blessing to be brutal in discussing projects I bring up as examples.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-07-09T02:31:54.225Z · LW(p) · GW(p)
I'm glad GPT worked for you but I think it's a risky maneuver and I'm scared of the world where it is the common solution to this problem.
Yeah, I would distinguish between using GPT to generate a grand vision vs. using it to express it in a particular style. The latter is how I used it - with the project I'm referring to, the model + vision were in place because I'd already spent almost a year doing research on the topic for my MS. However, I just don't have much experience or flair for writing that up in a way that is resonant and has that "institutional flavor," and GPT was helpful for that.
Here's a revision of the first paragraph you were asking about. I think that there are many grantmaking models that can work at least somewhat, but they all face tradeoffs. If you try to pick only projects with a model-grounded vision, you risk giving to empty grand narratives instead. If you try to pick only grantees with a good model, then you risk creating a stultified process in which grantees all feel pressure to plan everything out in advance to an unrealistic degree. If you regrant to just fund people who seem like they know what they're doing, then you make grants susceptible to privilege and corruption.
I think all these are risks, and probably at the end of the day, the ideal amount of grift, frustration, and privilege/corruption in grantmaking is not zero (an idea I take from Lying For Money, which says the same about fraud). And I believe this because I also think that grantmakers can have reasonable success in any of these approaches - vision-based, model-based, and person-based. There are also some projects that are based on a model that's legible to just about anybody, where the person carrying them out is credible, and where it clearly can operate on the world scale. I would characterize Kevin Esvelt's wastewater monitoring project that way. Projects like this are winners under any grantmaking paradigm, and that's the sort of project my first paragraph in my original comment was about.
Another way I might put it is that grantmaking is in the ideal case giving money to a project that ticks all three boxes (vision, model, person), but in many cases grants are about ticking one or two boxes and crossing one's fingers. I think it would be good to be clear about that and create a space for model-based, or model+person-based, or model+vision-based grantmaking with some clarity about what a pivot might look like if the vision, model or person didn't pan out.
I have to disagree with you at least somewhat about projects to improve epistemics. Maybe it's selection bias - I'm not plugged into the SF rationalist scene and it may be that there's a lot of sloppy ideas bruited about in that space, I don't know, but I can think of a bunch of projects to improve epistemics that I have personally benefitted from greatly - LessWrong and the online rationalist community, the suite of prediction markets and competitions, a lot of information-gathering and processing software tools, and of course a great deal of scientific research that helps me think more clearly not just about technical topics but also about thinking itself. I wouldn't be at all surprised if there are a bunch of bad or insignificant projects that are things like workshops or minor software apps. I guess I just think that projects to improve epistemics don't seem obviously more difficult than others, the vision makes sense to me, and it seems tractable to separate the wheat from the chaff with some efficacy. That might be my own naivety and lack of experience however.
I have personally benefitted from some of your projects and ideas, particularly the idea of epistemic spot-checks, which turn out to be useful even if you do have or are in the process of earning a graduate degree in the subject. That's not only because there's a lot of bull out there, but also because the process of checking a true claim can greatly enrich your interpretation of it. When I read review articles, I frequently find myself reading the citations 2-3 layers deep, and even that doesn't seem like enough in many cases, because I gain such great benefits from understanding what exactly the top-level review summary is referring to. It seems like your projects are somewhat on the borderline between academic research and a boutique report for individual or small-group decision making. I think both are useful. It's hard to judge utility unless you yourself have a need for either the academic research or are making a decision about the same topic, so I can't opine about the quality of the reports you have generated. I do think that my academic journey so far has made me see that there's tremendous utility in putting together the right collection of information to inform the right decision, but it's only possible to do that if you invest quite a bit of yourself into a particular domain and if you are in collaboration with others who are as well. So from the outside, it seems like it might be valuable to see if you can find a group of people doing work you really believe in, and then invest a lot in building up those relationships and figuring out how your research skills can be most informative. Maybe that's what you're already doing, I am not sure. But if I was a regranter and had money to give out at least on a model+person basis, I would happily regrant to you!
↑ comment by Elizabeth (pktechgirl) · 2023-07-11T18:40:17.455Z · LW(p) · GW(p)
- I agree with your general principles here.
- I think my statement of "nearly guaranteed to be false" was an exaggeration, or at least misleading for what you can expect after applying some basic filters and a reasonable definition of epistemics. I love QURI and manifold and those do fit best in the epistemics bucket, although aren't central examples for me for reasons that are probably unfair to the epistemics category.
Guesstimate might be a good example project. I use Guesstimate and love it. If I put myself in the shoes of its creator writing a grant application 6 or 7 years ago, I find it really easy to write a model-based application for funding and difficult to write a vision-based statement. It's relatively easy to spell out a model of what makes BOTECs hard and some ideas for making them easier. It's hard to say what better BOTECs will bring to the world. I think that the ~2016 grantmaker should have accepted "look, lots of people you care about do BOTECs and I can clearly make BOTECs better", without a more detailed vision of impact.
I think it's plausible grantmakers would accept that pitch (or that it was the pitch and they did accept it, maybe @ozziegooen [LW · GW] can tell us?). Not every individual evaluator, but some, and as you say it's good to have multiple people valuing different things. My complaint is that I think the existing applications don't make it obvious that that's an okay pitch to make. My goal is some combination of "get the forms changed to make it more obvious that this kind of pitch is okay" and "spread the knowledge that this can work even if the form seems like the form wants something else".
In terms of me personally... I think the nudges for vision have been good for me and the push/demands for vision have been bad. Without the nudges I probably am too much of a dilettante, and thinking about scope at all is good and puts me more in contact with reality. But the big rewards (in terms of money and social status) pushed me to fake vision and I think that slowed me down. I think it's plausible that "give Elizabeth money to exude rigor and talk to people" would have been a good[1] use of a marginal x-risk dollar in 2018.[2]
During the post-scarcity days of 2022 there was something of a pattern of people offering me open-ended money, but then asking for a few examples of projects I might do, and then asking for them to be more legible and the value to be immediately obvious, and fill out forms with the vibe that I'm definitely going to do these specific things and, if I don't, have committed a moral fraud... So it ended up in the worst of all possible worlds, where I was being asked for a strong commitment without time to think through what I wanted to commit to. I inevitably ended up turning these down, and was starting to do so earlier and earlier in the process when the money tap was shut off. I think if I hadn't had the presence of mind to turn these down it would have been really bad, because not only would I have been committed to a multi-month plan I'd spent a few hours on, but I would have been committed to falsely viewing the time as freeform and following my epistemics.
Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grant making. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That's probably a bigger deal than wording on a form.
1. where by good I mean "more impactful in expectation than the marginal project funded".
2. I have gotten marginal exclusive retreat invites on the theory that "look she's not aiming very high[4] but having her here will make everyone a little more honest and a little more grounded in reality", and I think they were happy with that decision. TBC this was a pitch someone else made on my behalf I didn't hear about until later.
3. relevant features of this category: doing lots of small projects that don't make sense to lump together, scrupulous about commitments to the point it's easy to create poor outcomes, have enough runway that it doesn't matter when I get paid and I can afford to gamble on projects.
4. although the part where I count as "not ambitious" is a huge selection effect [LW(p) · GW(p)].
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-07-11T20:42:48.918Z · LW(p) · GW(p)
My complaint is that I think the existing applications don't make it obvious that that's an okay pitch to make. My goal is some combination of "get the forms changed to make it more obvious that this kind of pitch is okay" and "spread the knowledge that this can work even if the form seems like the form wants something else".
That seems like an easy win - and if the grantmaker is specifically not interested in pure model-based justifications, saying so would also be helpful so that honest model-based applicants don't have to waste their time.
and fill out forms with the vibe that I'm definitely going to do these specific things and, if I don't, have committed a moral fraud
That seems like a foolish grantmaking strategy - in the startup world, most VCs seem to encourage startups to pivot, kill unpromising projects, and assume that the first product idea isn't going to be the last one because it takes time to find a compelling benefit and product-market fit. To insist that the grantee stake their reputation not only on successful execution but also on sticking to the original project idea seems like a way to help projects fail while selecting for a mixture of immaturity and dishonesty. That doesn't mean I think those awarded grants are immoral - my hope is that most applicants are moral people and that such a rigid grantmaking process is just making the selection process marginally worse than it otherwise might be.
Honestly I think the best thing for funding me and people like me[3] [LW(p) · GW(p)] might be to embrace impact certificates/retroactive grant making. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That's probably a bigger deal than wording on a form.
Yeah, I think this is an interesting space. Certainly much more work to make this work than changing the wording on a form though!
Sounds like we're pretty much in agreement at least in terms of general principles.
comment by agrippa · 2023-07-07T02:43:18.339Z · LW(p) · GW(p)
Thanks for giving an example of a narrow project, I think it helps a lot. I have been around EA for several years; I find that grandiose projects and narratives at this point alienate me, and hearing about projects like yours makes my ears perk up and makes me feel like maybe I should devote more time and attention to the space.
comment by Nina Panickssery (NinaR) · 2023-07-02T16:40:57.684Z · LW(p) · GW(p)
Strong agree, and I like your breakdown of costs. Most good work in the world is done without the vision of "saving humanity" or "achieving a flourishing utopia" or similar in mind. Although these are fun things to think about and useful/rational motivations, grand narratives are not the sole source of good/rational/useful motivations and should not be a prerequisite for receiving grants.
comment by Raemon · 2023-07-27T18:42:22.216Z · LW(p) · GW(p)
Curated.
"How to allocate funding for public goods" is a pretty important question that I feel like civilization is overall struggling with. "How to incentivize honesty, minimize grift, and find value" are all hard questions.
The EA/X-risk-ecosystem is somewhat lucky in that there's a group of funders, thinkers and doers who are jointly interested in the longterm health of the overall network and some shared concern for epistemics such that it's a tractable question to ask "how do we make tradeoffs in grantmaking applications?"
Are we trading off "how much value we're getting from this round of grants" against "how much longterm grift or epistemic warping are we encouraging in future rounds"? Are there third-options that could get more total value, both now and in the future?
I think it's a complex question how to optimize a grantmaking ecosystem, but I like this post for laying out one set of considerations and pointing towards how to brainstorm better solutions.
↑ comment by Raemon · 2023-07-27T19:57:00.777Z · LW(p) · GW(p)
Since one person commented privately:
For purposes of curation, I think it's a bit of a point-against-the-post that it's focused on the EA community and is a bit inside-baseball-y, but I think the general lessons here are pretty relevant to the broader societal landscape. (I also think there are enough EA people reading LessWrong that occasional posts somewhat focused on this-particular-funding-landscape are also fine)
I'm actually fairly curious how much the Silicon Valley funding landscape has the capacity to optimize itself for the longterm. I assume it's much larger and more subject to things like "unilaterally trying to optimize for epistemics doesn't really move the needle on what the rest of the ecosystem is doing overall, so you can't invest in the collective future as easily". But there might also be a relatively small number of major funders who can talk to each other and coordinate? (but, also, the difference between this and being a kinda corrupt cartel is also kinda blurry, so watch out for that?)
comment by Noosphere89 (sharmake-farah) · 2023-07-02T16:49:12.504Z · LW(p) · GW(p)
Corrupts epistemics
Not that much. But I think it’s pretty bad if people are forced to choose between "play the game of exaggerating impact" and "go unfunded". Even if the game is in fact learnable, it's a bad use of their time and weakens the barriers to lying in the future.
This is my biggest issue with the grand narrative framing, in that it implies that people can realistically expect to have a lot of impact, and in most cases this won't happen even if the project is successful, let alone for the grants that failed.
comment by Elizabeth (pktechgirl) · 2023-07-27T21:24:24.112Z · LW(p) · GW(p)
Datapoint: I saw that LTFF funded a project I liked but expected to be hard to justify on the form's terms; it was capacity building and the benefits would be hard to measure (although easy to goodhart). The application did ~what I suggested here, declaring an intermediate goal for the project and letting the grantmakers figure out if they valued that goal or not, without justifying why it was valuable. And it got funded.
I take this as mild evidence that this technique works, and mild evidence against the forms needing to spell this out explicitly. It might have helped that this person was reasonably well-networked, and I expect had good references.
comment by Lorxus · 2023-07-31T12:57:46.026Z · LW(p) · GW(p)
Some even worse meta-effects: I have had some fairly bad experiences already in my attempts to get grants or a research position. I wish I could detail them more here, but I am not stupid, and I know that the people who deal with those grant applications or sit on those hiring panels come here and read. Probably this is already too much to have said. If you want, you can reach out to me privately and I'll happily speak on this.
comment by Joe Rogero · 2023-07-28T17:27:52.630Z · LW(p) · GW(p)
I found this a very useful post. I would also emphasize how important it is to be specific [LW · GW], whether one's project involves a grand x-risk moonshot or a narrow incremental improvement.
- There are approximately X vegans in America; estimates of how many might suffer from nutritional deficiencies range from Y to Z; this project would...
- An improvement in epistemic health on [forum] would potentially affect X readers, which include Y donors who gave at least $Z to [forum] causes last year...
- A 1-10% gain in productivity for the following people and organizations who use this platform...
For any project, large or small, even if the actual benefits are hard to quantify, the potential scope of impact can often be bounded and clarified. And that can be useful to grantmakers too. Not everything has to be convertible to "% reduction in x-risk" or "$ saved" or "QALYs gained", but this shouldn't stop us from specifying our actual expected impact as thoroughly as we can.
comment by Review Bot · 2024-07-02T23:08:27.188Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?