Many charitable foundations are grappling with how to evaluate advocacy. How do you figure out which strategies are truly making a difference? How do you know what works? Teasing out this cause and effect, evaluating the efficacy of advocacy strategies, is the work of monitoring, evaluation, and learning (MEL).

Traditional approaches to MEL in advocacy have relied heavily on the premise that it is possible to measure “what really happened,” and that this process is linear, predictable, attributable, and replicable (a dash of magical thinking).

In their recent report, No Royal Road, Jim Coe and Rhonda Schlangen roll up their sleeves and take a deep dive into bringing structure to the process of measuring the unmeasurable, aiming to “ . . . move advocacy MEL forward toward approaches that integrate uncertainty and accept the unpredictability of advocacy . . .”

The paper outlines six changes for an advocacy MEL reboot. Each change is explored in its own section, along with examples and tools:

  • Better factor in uncertainty (Accommodating uncertainty): What advocacy is, why it’s unpredictable, and strategies to align MEL with advocacy’s needs and realities.
  • Plan for the unpredictability of how change happens (Planning for unpredictability): The implications for theories of change and ways of refining them.
  • Refocus contribution to be more in line with the realities of how social change comes about (Understanding contribution): Teasing out particular contributions within “the wider mosaic of influences” and the typology of roles that any single actor or organization may play.
  • Parse outcomes and their significance (Making sense of outcomes): Results need to be contextualized and looked at across multiple dimensions of change.
  • Break down barriers to engaging advocates in MEL (Staying oriented on what is useful to practitioners): Re-grounding MEL in what’s useful for practitioners of the work—creating space for them to reflect and building strategic evaluative thinking in advocates.
  • Think differently about how we evaluate more transformational advocacy (Evaluating more transformational advocacy): Looking through a lens of advocacy readiness and using “fitness for purpose assessments” instead of assessments of outcomes and contributions.

In the words of Coe and Schlangen: “The challenges of getting to certainty in advocacy don’t invalidate the effort to review and learn. They make it more important.”

Other resources that you might find useful can be found on the Center for Evaluation Innovation website: http://www.evaluationinnovation.org/focus-areas/advocacy-public-policy