Three weeks later, three lasting impressions from the CES 2017

Our colleague Lorenzo recently returned from the Canadian Evaluation Society annual conference in Vancouver, and relates his impressions of the event.

From May 1st to May 3rd, 2017, the Canadian Evaluation Society (CES) held its 2017 annual conference in beautiful Vancouver. It was a great learning opportunity, and I’ve been trying to reflect a little on which lessons will stay with me. Three themes come to mind that pull together various strands of my conference experience, and I elaborate briefly on each below: our position(s) as evaluators, failure to deliver as expected, and the importance of focusing on the meaningful, not the measurable.

Position

A prominent theme across many conversations at the conference was that of evaluators’ positions, in three senses: vis-à-vis the land they work on, the persons they work with, and the profession they practise.

The conference opened with a welcome and blessing by Elder Roberta Price, who immediately situated the conference on the traditional, unceded homelands of the Musqueam, Squamish, and Tsleil-Waututh First Nations. This anchored the conference in the reality and space of inequity, suffering, and renewal of Coast Salish peoples, and introduced key themes such as fracture, reconciliation, and reciprocity.[1] Elder Roberta thus challenged evaluators to reflect on who could or should be in a position to evaluate, and what positions evaluators take on relative to their clients and communities. Who evaluates whom, and for what? This was discussed throughout the conference, and I was particularly drawn to the following depictions of our roles:

  • Kim Van der Woerd, keynote speaker, suggested that evaluators are witnesses of what happens in their communities: they record, from a position of relative safety, who does what, how, and with/to whom. They have a responsibility to understand, inspire, and guard or share local knowledge.
  • Kerry MacKelvie suggested that evaluation is about being a good coach. Coaches support, direct and influence their athletes; they are trusted, experienced, long-term, and results-oriented; but they don’t pretend to be better than or even as good as their athletes: no coach runs as fast as the sprinter they train.
  • Third, various presentations focused on the duty of Monitoring, Evaluation and Learning (MEL) specialists to be facilitators of local learning, via exercises such as joint data analysis and restitution in Uganda (as presented by Dr Julian Bagyendera) or feedback meetings in Mozambique (as presented by Oxford Policy Management). This contrasts with ‘traditional’ approaches in which data is extracted from local communities for the benefit of donors with little local benefit.

How evaluators frame their roles influences how they interact with people – and vice versa. One presentation, by Natalie De Sole, unpacked, for instance, how the physical position of a ‘presenter’ in a room shapes and reflects who is included in the “learning zone”. A traditional monologue from the front of a room is a good tool for conveying high volumes of specific knowledge, but a poorer one for building skills and crafting ownership of new practices. A more appropriate ‘position’ for the latter would be one in which the presenter sits or moves within the same space as the audience and, as such, shares control over what is said, heard and learned.

At the close of the conference, Elder Roberta picked up again on the theme of relative position and relative distance, suggesting that “only the people who have gone through [certain] experiences can comfort those who are having these experiences.” The challenge for evaluators, in that respect, is perhaps that one of their core roles consists in interpreting and reconciling different stakeholders’ experiences, which the evaluator’s personal history can only partially reflect. How to get close to these various realities without being overwhelmed by any one of them remains a challenge,[2] and I found the conference helpful in at least making me more aware of the positions we start from, the positions we choose, and how these shape what learning and change we can foster.

Failure

Failure was a second theme that popped up throughout the conference, in the sense of things not working out the way they were expected to. From a smart and humorous panel about “Evaluation Bloopers, Missteps & F-ups: Reflecting on our Mistakes & Challenges” to insightful presentations about methodologies that “flopped”, evaluators from all walks of the profession were candid about what went wrong, what went differently, and what they learned. Evaluators are of course used to commenting on the blunders of projects and programs, but it was inspiring to hear so much self-reflection.

Participants also spent much time diagnosing societal failures to deliver on promises (not least in the context of the rising prominence of ‘deliverology’ in Canada, discussed on this blog before). Two conversations stood out in particular:

  • First, the observation that almost a hundred years of successive inquiries into the condition of Indigenous Peoples in Canada have failed to deliver change, leading to great pressure on the Truth and Reconciliation Commission’s Calls to Action to fare differently. One suggestion was that evaluators should reflect on the extent to which their work fosters ‘reconciliation’, and the CES has recently included reconciliation in its evaluation competency scheme.
  • Second, a vibrant debate about the mixed record of the Millennium Development Goals, and the likely failure of the current Sustainable Development Goals (SDGs), a (non-binding) pledge by 193 countries to build a better world. Part of this debate was about whether the SDGs could only be monitored on the basis of their (very many) indicators, or whether they could, in any reasonable sense of the term, be evaluated as a coherent agenda. This ties into the final theme: importance.

Importance

My third lasting impression from the conference was that evaluation ought to focus on capturing what is important (and worry less about tracking what is measurable). The idea is certainly not new, but, as Dr. Jane Davidson explained in her pre-conference workshop, it goes right to the heart of contemporary evaluation dilemmas. In a world of big data and of 230+ Sustainable Development Goal indicators, it is easy to fall prey to ‘indicator-itis’ – the disease of placing artificially inflated emphasis on indicators simply because they are readily measurable.

One of the exercises during the workshop, for instance, was to think about how one would evaluate a museum’s objective to ‘spark curiosity’ among visitors. Although we (workshop participants) were quick to come up with easy measures – number of comments left, number of repeat visitors, number of people who would recommend the museum to a friend – we struggled to pin down what ‘greater curiosity’ looked like. Perhaps a mix of excitement, wonder, and innovation, all equally difficult to assess? And is curiosity an individual characteristic, or one that can be fostered at the level of a community? The exercise did not generate specific answers, but it taught me to put the focus squarely on (collectively) imagining what success is, before the (also important, but second) step of thinking about how to measure it.

This idea, of making the important measurable rather than the measurable important, ran through many other conversations at the conference, including a talk by Plan Canada about “Capturing Vulnerability”, and presentations about the measurement of “innovation” (Dale Hill), “context” (Damien Contandriopoulos), “collaboration” (J. McDonald, H. McKee and P. Russell), and “what matters” (Dr. S. Schulman, J. Roh). Each in their own way, these presenters sought to tackle a key challenge we face as evaluators: to prefer vague answers to the right questions over precise answers to the wrong questions.

So much for my round-up of the conference. I would be curious to hear in the comments whether this resonates with other attendees or evaluators!

Lorenzo Daïeff is a Junior Consultant at Universalia Management Group. At the CES he presented instruments to assess inter-organizational relationships, and he would be keen to exchange with anyone thinking or working in the space of partnership evaluation. ldaieff@universalia.com.

[1] Interestingly, there was less talk about ‘reparations’, a term sometimes contrasted with reconciliation because it places the burden more explicitly on perpetrators.

[2] On this note, Natasha Van Borek’s presentation on ‘vicarious trauma’ discussed how evaluators who work with survivors of trauma can avoid becoming traumatized themselves through exposure to suffering.
