Rating Charity Effectiveness

In our last post, The Overhead Myth, we noted that this welcome new initiative urges prospective donors to look at the whole picture, and in particular at indicators of program effectiveness rather than financial ratios, as the main measure of a charity’s worthiness.  So far, so good: this kind of thinking in the charity ratings business is long overdue.  But it raises a troubling question: how will watchdog and ratings organizations “rate” program effectiveness?

Charity Navigator plans to roll out the beginnings of a system for rating charities on program effectiveness over the next several years, and others are sure to follow.  While many in the industry (ourselves included) applaud the intent, we are also troubled by how this might work out in actual practice, given the difficulties inherent in applying standardized formulas to work of such wide-ranging and variable scope as alleviating extreme poverty.  Here are some of the main concerns:

  • True program effectiveness (the lasting change or long-term impact on a community or region) does not lend itself well to quantitative analysis using standardized formulas.  While numbers of people served, goods delivered, and other outputs are easy to track, end outcomes are much more difficult to quantify.  Comparisons become even harder across organizations of different sizes, using different approaches, pursuing different goals, and working in different or multiple sectors.
  • Proper analysis of program effectiveness almost by definition requires longitudinal studies, which take many years (up to 20) to conduct; performing such studies on an ongoing basis will require significant investment in "non-program" expertise and expense.  That burden will fall disproportionately on smaller organizations and newer programs, stifling innovation.
  • We know from unfortunate experience that all rating and ranking schemes are subject to manipulation by those being rated; a recent example is the trend toward grossly overstating the value of “gifts in kind” (medicines being the most prominent case) in order to improve fundraising and overhead ratios.  Given that program effectiveness is difficult to define and measure in the first place, rating schemes built on this criterion will likely prove especially prone to manipulation and misrepresentation.  That will not only make such ratings less reliable in themselves; the "gaming" will also cast a shadow over the charitable world in general.

For these reasons, we suspect that, for the foreseeable future, large-scale, standardized approaches to “rating” program effectiveness will prove inadequate at best and misleading at worst.  A better approach is to look for reliable, research-based indicators of program effectiveness on a charity-by-charity basis.  For organizations devoted to fighting extreme poverty, this means looking for evidence of best practices in their overall approach and daily work.

The article "Helping Well" provides a broad survey of these best practices for transformational development.


If you found this article useful, please share it with a friend.