Dr. Eric Abitbol, Senior Consultant and Practice Leader for Environment, Security and Conflict Transformation at the Universalia Management Group, recently returned from the 2016 Conference of the American Evaluation Association, on the theme of “Evaluation + Design”. Below, he shares his reflections on his time at the conference, and on “what’s at stake” in contemporary evaluation.
Arriving in Atlanta for the 2016 annual conference of the American Evaluation Association, I was fortunate enough to receive word of an informal dinner gathering of Canadian evaluators. I was even more fortunate to sit next to Anna Paskal, with whom I have recently undertaken an evaluation of the Canadian International Food Security Research Fund (CIFSRF). Conversation quickly turned to the various roles, responsibilities and power of evaluators. Unbeknownst to me at the time, this became a central theme for the subsequent days I would spend attending panels, participating in roundtables, listening to keynotes and generally sharing hallway conversations with a few of the 3,500 or so conference participants.
In a grand hotel ballroom, a few key voices from the evaluation field sat together on an elevated stage and shared a lifetime of experience about evaluation. Situating the contemporary field of evaluation as part of a tradition dating back many thousands of years, Michael Scriven of Claremont Graduate University and the Faster Forward Fund (3F) entreated us all to recognize “evaluation as the alpha discipline”, the discipline which recognizes (or rejects) disciplines themselves. Cognizant and critical of the positive bias in evaluation, he vividly held a candle to the ethical mandate and responsibility of evaluators.
Several conference sessions I attended touched on this theme from a range of perspectives. Perhaps the most poignant focused on the reporting of failure, of critical or unflattering perspectives. In the short term, reporting flatteringly on a project or program may result in return business for individual evaluators or evaluation firms, but at what cost? In the medium term, commissioning organizations may not be getting the candid information they actually sought from independent and professional evaluators. In the longer term, the very relevance of the field of evaluation is at stake. As Giovanni Dazzo of the US Department of State, Bureau of Democracy, Human Rights and Labor meaningfully pointed out, eyebrows are raised when independent external evaluations are less critical of programs than the self-reporting of grantees. A self-regulating system is at work that should keep everyone honest, critical, appreciative and ultimately epistemologically ethical.
The field of evaluation has evolved through the ongoing trial, error, bungling and innovation of countless people and organizations, creating, adapting, even abandoning methodologies in the face of emerging needs, producing appropriate degrees of sophistication. A humble giant in the field, Robert Chambers stood on stage in a T-shirt with the words “ECLECTIC METHODOLOGICAL PLURALISM” printed boldly black on white. Underpinned by this mantra, he pushed us to move beyond mechanistic evaluation approaches, towards greater participation, inclusion, even play in our methodological choices. A few participatory methods he encouraged us all to consider included Participatory Statistics, PIALA (Participatory Impact Assessment and Learning Approach), and the Reality Check Approach.
This type of creative exploration was pursued in discussion facilitated by Rees Warne of ClimateWorks, as we all sought to consider approaches for evaluating long-term impacts of environment and climate-change interventions. While we did not come to definitive conclusions about how to go about doing this, we opened discursive pathways that may very well inform a new generation of methodologies and tools for doing so: meta-theories of change and projection harvesting, anyone?
Of course, AEA sessions are typically rich with methodologies, tools and projects using them. Speakers in a session on Social and Collective Impact evaluation indicated that for such an approach, the choice of design should be collaboratively determined. We were told it is important to experiment with and then ultimately draw on diverse, preferably participatory methodologies, relevant to specific contexts, giving visibility to specific societal values. In such evaluations, as our colleagues from Japan Professor Ken Ito (Keio University) and Katsuji Imata of the Japan NPO Center indicated, the evaluator could assume a unique role of “banso”, Japanese for coach, critical friend, change facilitator, and learning partner. Other methodologies discussed included Developmental Evaluation, Real-Time Polling, Chalk Talk, Collaborative Sense-Making and Outcome Harvesting.
Evaluation for the Entire World
In the collaborative spirit, Mallika Samaranayake of the Community of Evaluators (COE) of South Asia traced the contemporary history of evaluation, from a largely donor-driven field of practice to one of growing South-North partnerships and with increased attention to contextual relevance. In a separate session, her COE colleague, Sonal Zaveri, called on evaluation practitioners from the Global South and North to join them in building the partnered future of evaluation at their next annual conference to be held in June 2017 in Bhutan.
I had the pleasure of being one of several ‘conversation starters’ in a session on evaluating scaling-up hosted by the International Development Research Centre (IDRC). Colleagues from the Rockefeller Foundation, USAID, Oxfam and COE joined me, representing Universalia, in introducing a variety of themes related to evaluating scaling-up, with many more insights shared from the experiences of some 50 participants. A few key takeaways I continue to ponder in relation to my practice include:
- Evaluators are often not brought in early enough to most effectively support programs in planning for achievement.
- It is both challenging and important to balance doable M&E with deeper levels of analysis, and for all to appreciate the costs and trade-offs involved in doing so.
- In debates over attribution and contribution, it is also key to recognize what constitute reasonable project outcomes, as well as plausible contributions to longer-term societal outcomes.
- Not all projects, services or products should necessarily go to scale, given their contextualized value and specificity.
- There are many types and strategies for scaling up, and evaluating scaling up should reflect this diversity and contextuality.
Ultimately, it became clear that definitions of success, mechanisms of change, and framings of M&E systems ought to take considerations of scale into account, ideally as early as possible in the planning cycle.
An Evaluation of Intention
At the present historic moment, evaluators have important choices to make on the matter of evaluator independence and power. Typically, evaluators have been valued for their independence and neutrality, but is this an unnecessarily neutralizing vantage point from which to craft the evaluator’s invited interventions? Shawna Hoffman of the Rockefeller Foundation suggested the overall concern with evaluator independence might be restrictive. In a powerful plea for justice, Stafford Hood challenged us all to rethink our socio-political locations as evaluators. Should the evaluator’s role limit itself to providing assessments of projects and programs in their own terms, towards objectives, against indicators? Or might the evaluator play another, perhaps more poignant role, “to be of consequence” as social justice practitioners, providing “evidence in support of the cultural validity of different communities”? From this vantage point, evaluation is emancipatory practice.
This resonates for those of us who undertake M&E work in conflict environments. Speaking from my own experience, rather than being selected for neutrality and independence, I have been drawn in to support the transboundary peacebuilding work of organizations ostensibly on opposite sides of the political divide, specifically because of prior, contextual knowledge and a sensitivity to the many threats and triggers ever present in conflict environments. Interestingly, the “bias” is not towards one party or another but intentionally in favour of justice, equity, and partnership, which meaningfully informs the actual M&E advisory work that is requested and pursued. Such issues were also discussed in a session on Conflict Sensitive Evaluation led by Isabella Jean of CDA Collaborative Inc. Here, conflict sensitivity was (among other things) understood as the work undertaken by evaluators to cultivate relations of trust across the conflict. In another session, on Principles-Focused Evaluation, hosted by FSG Inc. with Michael Patton as discussant, the development of principles for guiding programming, action and further evaluation emerged as a means through which an evaluation of intention could be collaboratively pursued by diverse actors across multiple social, political, cultural and other divides.
Concluding with Inception
This year’s AEA was an incredibly rich intellectual and methodological shared space. With so many approaches to evaluation, it is abundantly clear that the field has been undergoing a period of creative expansion of late, with still plenty of room to grow. In practice, this points to the process of inception as a key methodological moment for operationalizing this diversity, and doing so in contextually appropriate ways. Beyond this, however, we are in a historically reflexive moment, where our work as evaluators can live in the functional shadows, or it can assume the epistemological implications of the moment and “be of consequence”. There is no single, right answer along this continuum, which is by no means Manichean. But this fundamental, underlying question faces us all, mindful that what is at stake is our professional reputation, the value of our diverse and multiplying methodologies, and also the relevance of our field of practice. In the process of inception lies the moment of creativity and the kernel of consequence.