Are you ready for the Learning chaLLenge?

Two major purposes of evaluation have traditionally been accountability and learning. In engaging in evaluation for accountability, practitioners collect and analyze data to prove that what was supposed to be done was, in fact, done and to determine how well it was done. Multiple frameworks and standards have emerged over the past decades to support and facilitate the accountability purpose of evaluation. However, much less thinking has been done with respect to the learning function of evaluation.

Several months ago, Universalia was asked by the MasterCard Foundation to conduct a mid-term evaluation of its YouthSave Initiative (YSI). The chosen evaluation approach placed learning at the center of the process, with a strong focus on four key learning questions, in addition to assessing the more traditional criteria of effectiveness, efficiency, relevance, and sustainability. This approach was supported by multiple feedback loops throughout the implementation of the evaluation and managed by a governance structure composed of the evaluator, the implementing partner, and the donor, all interested in both what was learned and what was accomplished.

YSI is a youth savings and financial services project whose goal is to implement a landmark global study on how to sustainably deliver savings services to youth. This US$12.5 million project started in 2010 and is implemented in four countries (Colombia, Ghana, Nepal, and Kenya) through four partners: Save the Children, the Consultative Group, the New America Foundation, and the Center for Social Development.

Reflecting on this experience, we have identified several challenges, both for the agency commissioning an evaluation and for the practitioner:

  1. Balancing accountability and learning: Some organizations have the latitude to focus their evaluations on learning. In times of scarce resources, however, or when constituencies scrutinize more closely how resources are spent and what results are achieved, many agencies tend to reinforce the accountability function of evaluation. At issue for an agency is its tolerance for the risk of emphasizing learning at the expense of accountability. Our preliminary assumption is that private entities have more autonomy to take such risks. At the MasterCard Foundation, the learning agenda is strongly supported by senior management and the Board, which we believe is a key requirement for successfully adopting this evaluation approach.
  2. Accepting the demand for additional resources: Fostering and disseminating the learning acquired during the evaluation process requires additional resources to build feedback loops into each step of the evaluation. These loops provide more frequent opportunities to discuss, reflect, and gather lessons. Again, when evaluation units are overworked or under-resourced, how can one instill learning into the evaluation process?
  3. Need to trust: When learning is at the center of the project design, the likelihood of innovative results increases, but so does the possibility of failure. A certain level of trust is therefore required among the evaluating unit, the evaluator, and the entity being evaluated: the implementer must feel comfortable discussing both the successes and the failures of the project in question, bearing in mind that both constitute important opportunities to learn.
  4. Methodology: Learning requires being less prescriptive about methodologies, giving the evaluator the freedom to collect data and analyze the questions under examination using a wider range of methods. In an era when randomized controlled trials (RCTs) are increasingly perceived as the main route to credible results, how flexible are agencies in allowing a pluralistic methodological approach?
  5. Budget and reporting: Finally, for the practitioner, there are issues of budgeting and reporting. How does the evaluator balance the need to budget for a learning-centered evaluation process (which requires more time and feedback loops) against the desire to offer a financially competitive pricing structure? And how can reporting methods other than the traditional report be used to foster learning and increase the likelihood that what is learned will be applied?

Over the coming months, we hope to develop a greater understanding of the implications of putting learning at the center of evaluation.


Dr. Marie-Hélène Adrien is the President and a Senior Consultant of Universalia, specializing in evaluation and management in international development. Over the past 25 years, she has conducted more than 100 assignments with various agencies such as the World Bank, the UN, the African Development Bank, the Islamic Development Bank, the Canadian International Development Agency, the International Development Research Centre, various bilateral agencies, and NGOs.

Dr. Adrien is a Professor of Practice and a Founding Member of the Leadership Council of the McGill Institute for the Study of International Development. She has published works on evaluation, including Enhancing Organizational Performance: A Toolbox for Self-Assessment, published by the International Development Research Centre (IDRC), and Organizational Assessment: A Framework for Improving Performance, published by the IDRC and the Inter-American Development Bank. She is the Past President (2005-2008) of the International Development Evaluation Association (IDEAS).

One thought on "Are you ready for the Learning chaLLenge?"

  1. Dear Marie,
    I completely agree with the article. In almost all fields, and especially since the last crisis, learning has become ever more challenging. In addition to the points above, however, I think project efficiency and project capitalization are also very important. The first is normally part of any project evaluation, whether before, during, or after the intervention. Capitalization, a newer concept, is a wider approach: it analyzes not only what a standard evaluation does, but also what was learned and put into practice that now functions and produces sustainable results in the short and long term. In a word, all the results achieved by a project or program intervention could be called "assets". These assets are then analyzed for how well they operate and function in the future; an asset in this case could be new or modified learning, a method, an approach, and so on. I see a clear distinction, or at least a point where one somehow "stops" and the other "begins". In short, this is how, in my opinion, capitalization not only goes beyond evaluation but also better analyzes the assets achieved and their real value in practice.
