Opinion Article Against Measuring Impact

post by ole.koksvik · 2018-07-25T07:37:01.020Z · LW · GW · 2 comments

This is a link post for https://www.theguardian.com/global-development/2018/jul/16/buzzwords-crazes-broken-aid-system-poverty

A strange opinion article in The Guardian today: it is not entirely clear whether the authors object to a concern with effectiveness, or just think that "assessing the short-term impacts of micro-projects" is somehow misguided (and if so, why that is).

2 comments


comment by ChristianKl · 2018-07-25T16:49:12.848Z · LW(p) · GW(p)

The article starts off in a way that seems either extremely clueless or purposefully misleading. I have a hard time seeing how someone can say that global poverty is intractable in the same paragraph that speaks about failing Millennium goals, when the Millennium goal of halving the number of people who live on less than a dollar a day between 2000 and 2015 was met. The goal of halving the number of undernourished people was also met.

The article should go into the fake news bin, even though it might be possible to argue the case in a decent way.

comment by jessicata (jessica.liu.taylor) · 2018-07-25T08:11:59.405Z · LW(p) · GW(p)

The text seems pretty clear on both these questions.

But the real problem with the “aid effectiveness” craze is that it narrows our focus down to micro-interventions at a local level that yield results that can be observed in the short term. At first glance this approach might seem reasonable and even beguiling. But it tends to ignore the broader macroeconomic, political and institutional drivers of impoverishment and underdevelopment. Aid projects might yield satisfying micro-results, but they generally do little to change the systems that produce the problems in the first place. What we need instead is to tackle the real root causes of poverty, inequality and climate change.

...

In all these areas, there is still an enormous amount to be done. If we are concerned about effectiveness, then instead of assessing the short-term impacts of micro-projects, we should evaluate whole public policies. In this respect, there is a wealth of underused data provided by decades of household surveys by national statistical offices. Combined with satellite data, recently made public, they can now be used for detailed analysis, capable of providing clear information on the public policies that have been most successful. In the face of the sheer scale of the overlapping crises we face, we need systems-level thinking.

The problems with choosing interventions based only on their measured results are similar to the problems faced by model-free reinforcement learning algorithms (the need to collect lots of high-quality data, the costs of exploration, local maxima that could be avoided with better models, Goodhart's law, lack of human understanding of the underlying phenomena, difficulty learning long-term dependencies, and the use of CDT or EDT as a decision theory), because the process of choosing interventions based only on how well they measure is literally a model-free reinforcement learning algorithm.
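To make the analogy concrete, here is a minimal sketch (my illustration, not anything from the article or comment) of selecting between "interventions" purely by their measured average payoff, with no model of why any intervention works. The intervention names and payoff numbers are hypothetical; the algorithm is a standard epsilon-greedy bandit, the simplest model-free reinforcement learner:

```python
import random

def epsilon_greedy_bandit(true_means, n_steps=10000, epsilon=0.1, seed=0):
    """Pick interventions purely by their measured average payoff (model-free):
    no model of the underlying system, only running reward estimates."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n        # how often each intervention was tried
    estimates = [0.0] * n   # running mean of measured outcomes

    for _ in range(n_steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # exploration, which has a real cost
        else:
            arm = max(range(n), key=lambda i: estimates[i])
        # noisy short-term measurement of the chosen intervention's effect
        reward = true_means[arm] + rng.gauss(0, 1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates, counts

# Hypothetical example: two interventions with close true effects; the
# learner needs many noisy measurements to tell them apart at all.
estimates, counts = epsilon_greedy_bandit([0.5, 0.6])
```

Note how much data the learner needs just to rank two options whose effects differ by 0.1 against unit-variance measurement noise; that data-hunger, plus optimizing the measured proxy rather than the underlying phenomenon, is exactly the cluster of problems listed above.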

One thing the article unfortunately fails to acknowledge is that observational data is often insufficient for inferring causality, and RCTs can help here.