It’s time for our “feel-good” sector to seriously step up to the impact maximization plate.
By Andrea Abel (Co-Founder, International Policy Research & Evaluation Group)
Let’s face it: when it comes to devoting careers, time and energy to the non-profit, foundation and aid sector, women are leading the way. In the US, for example, 73% of all non-profit staff are women.
But when it comes to efficiency and effectiveness, this sector lags miles behind the traditionally male-dominated private sector. For years, the stereotype of organizations operating in the sector has been a ‘heart-is-in-the-right-place but loosey-goosey’ culture, at which people throw money more to make themselves feel good than because they actually think it will do good.
Maybe it’s because I’m competitive by nature, but as far as I’m concerned, no one puts baby in the corner. It’s time for our “feel-good” sector to seriously step up to the impact maximization plate. We’re not here to make you feel good about giving your money to a social cause, we’re here to do darn good.
Impact maximization comes at considerable initial cost and difficulty. Whereas the private sector has a set of generally agreed-upon financial and operational metrics that provide a yardstick for performance and accountability, and that are relatively straightforward to measure, in the non-profit/aid sector this is far from true.
For example, knowing how much donor funding goes to programs versus operations is necessary but not sufficient. If the programs to which money is allocated turn out to be ineffective at best, what does it matter to the bottom impact line that 98% of funds went to programs and 2% went to operations?
Right now, too often what is being counted as a measure of program impact bears very little relation to actual program impact, with an emphasis placed on easily measurable metrics associated with inputs and outputs. The yardstick for measuring impact, however, should be outcomes.*
Many non-profits, foundations and governmental organizations are hesitant to engage in serious evaluations of program outcomes because they fear that they may score low on the impact ‘score card’, which in turn may cause donors and funders to run in the opposite direction.
Evaluation for the sake of evaluation may be an interesting academic exercise which checks the box for accountability, but from an organizational perspective, impact evaluation is critical and most useful when it is used as a tool for learning. There is nothing to fear if you go into a program evaluation with the objective of trying to achieve your program goals to the best of your ability.
If the objective is to learn whether your programs are working and, if not, why not and how to make them better going forward, then you should be considered a visionary unafraid to know and acknowledge that programs cannot be perfect on try one.
What impact is your program having? This is the question to be answered by evaluation. What are the obstacles to achieving the impact you’d like to have, and how can you overcome them? These are the questions to be answered by impact maximization.
In trying to do good for the world, we should be aiming to do it to the best of our ability. We should not shirk our responsibility to maximize the impact of the programs we are implementing. And we should not be immune to accountability.
*Say, for example, you were an NGO running a program to disarm, demobilize and reintegrate young fighters into their communities in the aftermath of a civil war. Telling your donors that you have had xx number of youths go through your program – a significant programmatic output – reveals very little about the effect of the ‘treatment’ – the programmatic outcome. Were those who participated in your program more likely to disarm, demobilize and reintegrate than those who hadn’t? Are those who go through your program less likely to relapse into fighting if and when conditions deteriorate than those who had not?
Answering these questions, which get at the real impact of such a program, requires a meticulously thorough research design for evaluation. Yes, this is difficult, but not impossible. It is critical to understanding the efficacy of your program.
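To make the output/outcome distinction concrete, here is a minimal illustrative sketch. All names and numbers below are hypothetical, and the comparison shown is a naive difference in rates, not the rigorous research design the footnote calls for:

```python
# Hypothetical sketch: outputs vs. outcomes for a DDR-style program.
# All records and figures are invented for illustration only.

def reintegration_rate(group):
    """Share of individuals in a group who successfully reintegrated."""
    return sum(1 for person in group if person["reintegrated"]) / len(group)

# Illustrative records: program participants vs. a comparison group of
# similar youths who did not go through the program.
participants = [
    {"reintegrated": True}, {"reintegrated": True},
    {"reintegrated": True}, {"reintegrated": False},
]
comparison = [
    {"reintegrated": True}, {"reintegrated": False},
    {"reintegrated": False}, {"reintegrated": False},
]

# Output metric: how many youths went through the program.
output = len(participants)

# Outcome metric: difference in reintegration rates between the groups --
# a naive estimate of program effect. A credible evaluation would need a
# carefully constructed comparison group (e.g. via randomization).
effect = reintegration_rate(participants) - reintegration_rate(comparison)

print(f"Output (youths served): {output}")
print(f"Estimated effect on reintegration rate: {effect:.2f}")
```

The point of the sketch is that the output number alone says nothing about the effect; only the comparison between participants and non-participants begins to speak to impact.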
Editor’s note: Got a question for our guest bloggers? Leave a message in the comments below.
About the guest blogger: Andrea Abel is co-founder of IPRE Group. She holds a PhD in Political Science from Stanford University and Bachelor’s degrees in Aerospace Engineering and Commerce, and a Masters of Engineering Research in Mechatronic Engineering, all at the University of Sydney. She has held fellowships from the Department of Political Science at Stanford, the Australian Government, and the Faculty of Engineering at the University of Sydney. Follow her on Twitter at @AAA_ipregroup.