Two major purposes of evaluation have traditionally been accountability and learning. In engaging in evaluation for accountability, practitioners collect and analyze data to prove that what was supposed to be done was, in fact, done and to determine how well it was done. Multiple frameworks and standards have emerged over the past decades to support and facilitate the accountability purpose of evaluation. However, much less thinking has been done with respect to the learning function of evaluation.
Several months ago, Universalia was asked by the MasterCard Foundation to conduct a mid-term evaluation of its YouthSave Initiative (YSI). The evaluation approach chosen placed learning at the center of the process, with a strong focus on four key learning questions, in addition to assessing the more traditional criteria of effectiveness, efficiency, relevance, and sustainability. This approach was supported by multiple feedback loops throughout the implementation of the evaluation and managed by a governance structure composed of the evaluator, the implementing partner, and the donor, all of whom were interested in both what was learned and what was accomplished.
YSI is a youth savings and financial services project whose goal is to implement a landmark global study on how to sustainably deliver savings services to youth. The US$12.5 million project started in 2010 and is implemented in four countries (Colombia, Ghana, Nepal, and Kenya) through four partners: Save the Children, the Consultative Group, the New America Foundation and the Center for Social Development.
Reflecting on this experience, we have come to identify some challenges, both for the agency requesting an evaluation and for the practitioner:
- Balancing accountability and learning: Some organizations are able to focus their evaluations on learning. However, when resources are scarce, or when constituencies are more scrupulous about how resources are spent and what results are achieved, many agencies tend to reinforce the accountability function of evaluation. At issue for an agency is its tolerance for the risk of emphasizing learning at the expense of accountability. Our preliminary assumption is that private entities have more autonomy to take such risks. At the MasterCard Foundation, the learning agenda is strongly supported by senior management and the Board, which we believe is a key requirement for successfully adopting this evaluation approach.
- Accepting the demand for additional resources: To foster and disseminate learning acquired during the evaluation process, additional resources are required to include feedback loops at each step of the process. These loops provide more frequent opportunities to discuss, reflect, and gather lessons. Again, at a time when evaluation units may be overworked or have limited resources, how can one build learning into the evaluation process?
- Need to trust: When learning is at the center of the project design, the likelihood that innovative results will be obtained increases, but so does the possibility of failure. Thus, a certain level of trust is required between the evaluating unit, the evaluator and the entity being evaluated: the implementer must feel comfortable with discussing both successes and failures of the project in question, bearing in mind that both of these constitute important opportunities to learn.
- Methodology: Learning requires being less prescriptive about methodologies in order to give the evaluator the freedom to collect data and analyze the questions under examination using a wider range of methods. In an era where randomized controlled trials (RCTs) are increasingly perceived as the main approach leading to credible results, how flexible are agencies in allowing for a pluralistic methodological approach?
- Budget and reporting: Finally, for the practitioner, there are issues of budgeting and reporting. How does the evaluator balance the need to budget for a learning evaluation process (requiring more time and feedback loops) and the desire to develop a financially competitive pricing structure? And how does one use methodologies for reporting (other than the traditional report) to foster learning and increase the likelihood that learning will be applied?
Over the coming months, we hope to develop a greater understanding of the implications of putting learning at the center of evaluation.
Dr. Marie-Hélène Adrien is the President and a Senior Consultant of Universalia, specialising in evaluation and management in international development. Over the past 25 years, she has conducted more than 100 assignments with various agencies such as the World Bank, the UN, the African Development Bank, the Islamic Development Bank, the Canadian International Development Agency, the International Development Research Center, various bilateral agencies, and NGOs.
Dr. Adrien is a Professor of Practice and a Founding Member of the Leadership Council of the McGill Institute for the Study of International Development. She has published works on evaluation, including Enhancing Organizational Performance: A Toolbox for Self-Assessment, published by the International Development Research Centre (IDRC), and Organizational Assessment: A Framework for Improving Performance, published by the IDRC and the Inter-American Development Bank. She is the Past President (2005-2008) of the International Development Evaluation Association (IDEAS).