The Crisis of Evidence Use

post by Ian David Moss · 2019-08-30

We aspire to let evidence drive our actions, but in practice it rarely does.


In 2015, the Center for Evaluation Innovation and the Center for Effective Philanthropy surveyed evaluation and program executives at 127 US and Canadian foundations with at least $10 million in annual giving. The result was the report “Benchmarking Foundation Evaluation Practices,” and it contains one of the most amazing facts I’ve encountered in nearly two decades of working in the social sector.

In response to a question about the challenges they encounter in their work, more than three-quarters of respondents said they have a hard time commissioning evaluations that yield meaningful insights for their own foundations. To put that in plainer language: most of the folks who commission evaluations of social programs have trouble getting people — including even their own colleagues — to use them. And this is coming from the people whose job it is to commission those evaluations, so if anything we'd expect them to be biased against admitting there's any problem. Not only that, the top four challenges identified by foundation evaluation professionals were all related to use, and each was experienced by more than 7 in 10 respondents.


Some might suppose that this has something to do with the choice of study design, but foundations that commissioned randomized controlled trials (RCTs) didn't seem to fare much better. Of the foundations that funded RCTs, only 38% reported that the findings actually drove grantmaking decisions, and just 25% found them useful for developing strategy.

It’s no surprise, then, that survey respondents hoped to see substantial changes to the way the field designs and uses evaluations going forward.


These numbers are eye-popping, but you could be forgiven for wondering how much stock to put in them. After all, this is only one study. What if it was just a fluke?

Sadly, there is plenty of other research demonstrating that we are experiencing a crisis of evidence use in our field. Taken together, these studies strongly reinforce the point that people with influence over social policy simply don't read or use the vast majority of the knowledge we produce, no matter how relevant it is.

One of my favorite factoids of all time comes from a study the World Bank conducted on its own policy papers several years ago. The methodology was simple: researchers just counted the number of times each paper had been downloaded between 2008 and 2012. They found that three-quarters of these papers, each one likely representing hundreds of hours and thousands of dollars of investment on the institution's part, were downloaded fewer than 100 times. Nearly a third had never been downloaded even once, not even by their authors!
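
If it helps to make the arithmetic concrete, here's a minimal sketch of that kind of tally, assuming a hypothetical CSV of per-paper download counts (the file and column names are my invention, not the World Bank's):

```python
# A sketch of the tally described above: given one download count per
# paper, compute the share downloaded fewer than 100 times and the
# share never downloaded at all. "policy_paper_downloads.csv" and the
# "downloads" column are hypothetical stand-ins for the study's data.
import csv

def tally_downloads(path: str) -> None:
    with open(path, newline="") as f:
        counts = [int(row["downloads"]) for row in csv.DictReader(f)]

    total = len(counts)
    under_100 = sum(1 for c in counts if c < 100)
    never = sum(1 for c in counts if c == 0)

    print(f"{total} papers")
    print(f"downloaded fewer than 100 times: {under_100 / total:.0%}")
    print(f"never downloaded: {never / total:.0%}")

if __name__ == "__main__":
    tally_downloads("policy_paper_downloads.csv")  # hypothetical file
```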

This was all chronicled in a Washington Post article with the unforgettable title, “The Solutions to All Our Problems May Be Buried in PDFs Nobody Reads.” Touché!

Research that asks policymakers and philanthropists about their reading habits tells a similar story. Funders responding to a survey of more than 700 foundation professionals sponsored by the William and Flora Hewlett Foundation reported being completely overwhelmed with information, to the point where some of them just delete emails announcing new reports and studies without even skimming them first to see if they’re relevant.

In a study of over 1,600 civil servants in Pakistan and India by Harvard's Evidence for Policy Design initiative, policymakers "agreed that evidence should play a greater role in decision-making" but acknowledged that it doesn't. According to the study, the issues are structural: "Few [respondents] mentioned that they had trouble getting data, research, or relevant studies. Rather, they said…that they had to make decisions too quickly to consult evidence and that they weren't rewarded when they did." And the topline finding of an EHPSA/INASP study looking at HIV policymakers in eastern and southern Africa is that "policymakers value evidence but they may not have time to use it."

What about front-line practitioners? A US survey found that doctors generally don't follow research relevant to their practice area, and when research comes out that challenges the way they do their work, they expect their medical associations to attack it. Meanwhile, the UK's Education Endowment Foundation conducted an actual randomized controlled trial to test strategies for getting evidence in front of schoolteachers. They tried online research summaries, magazines, webinars, and conferences. None of these "light-touch" methods had any measurable effect on student outcomes.

It's important to note that none of this is news to the people who make it their business to generate and advocate for the use of evidence. In my experience, the vast majority of such professionals know this is a huge problem and have their own stories to tell. For example, this report from the 2017 Latin American Evidence Week decried the "operational disconnect [that] makes it impossible for evidence generated at the implementation level to feed into policy (and programme) design." It's also why the 125 social sector leaders interviewed for Deloitte Monitor Institute's Reimagining Measurement initiative identified "more effectively putting decision-making at the center" as the sector's top priority for the next decade.

The consistent theme across all these findings is that it's really tough to get policymakers and other people in power to use evidence, especially when it challenges their beliefs. We're barely aware of most of the evaluation and research that's relevant to our work. When we encounter it, we usually don't read it. When we read it, we often don't use it. It is very, very rare that evidence on its own will actually influence the views or actions of influential people.

What's really astounding about this is how much time, money, and attention we spend on evidence-building activities for apparently so little concrete gain. With such depressing results, we could just throw up our hands and say, "well, no one's going to read this stuff anyway, so why bother?"

But we don't. Instead, it's probably not an exaggeration to say that our society invests millions of hours and billions of precious dollars in generating knowledge about the social sector, most of which will have literally zero impact.

In short, we can’t decide what we really think as a society about evidence. We aspire to let it drive our actions, but in practice it seldom does. So we are either vastly overvaluing or vastly undervaluing the act of building knowledge. And we need to get it right, because those are real resources that could be spent elsewhere, and the world is falling apart around us while we get lost in our spreadsheets and community engagement methodologies.

Don’t get me wrong — I believe in science! We could learn so much just from better understanding and connecting the work we’ve already done. At the same time, there’s so much more we could be doing to ensure we get bang for our evidence buck.
