If you want to become a better chef, first you need to experiment with your recipes. Next, you need someone you trust to try your creation and tell you how it tastes. Inviting a friend “into the kitchen” to examine and critique what you do can be quite uncomfortable, but in that opening there is potential for growth and change. This is exactly what we did as part of the Learning Collaborative, a two-year initiative bringing together practitioner organizations to improve practice and contribute to knowledge about transparency, accountability, and participation programs.

One of the Learning Collaborative’s unique features was a peer-led support model, based on the idea that practitioners can help each other reflect and learn because they have shared experiences and interests. In my role as a “practitioner in residence” on behalf of MIT GOV/LAB, I supported learning assessments for the practitioner organizations in the Collaborative. Critically, these learning assessments were not limited to what are typically thought of as learning components: monitoring mechanisms, evaluation plans, key metrics, and so on. Nor did we speak only with each organization’s leadership and the people in charge of measurement (or monitoring and evaluation). Instead, the learning assessments were deep dives into the work and reflections of each team, from leadership to finance to field managers, relating their work to organizational goals and learning processes.

The practitioner organizations were at first somewhat reluctant to let us see what was “cooking” through these assessments. Unlike hiring consultants to deliver specific tasks, these exercises required leadership to cede some control. It takes courage to allow well-meaning but inquisitive peers to get behind the scenes of an organization, unpack its inner workings, and present back a synthesis of what the organization itself – its people – believed was working and what had to be improved. In the end, the organizations reported that these assessments were positive and transformative. Below is a quote from one of the practitioner organizations:

“Our organization had been developing a learning portfolio for some time, but it was a slow process that didn’t have a clear focus or mandate. The learning assessment was very useful in that it synthesized a lot of thinking about learning and a variety of small and disparate practices related to learning that had been going on within various teams and units. Building on what the teams were already doing, it helped to clarify for the leadership of the organization what direction we wanted to develop the learning work in a way that would be most useful to us at this point in time.” (Dejusticia)

Might you be tempted to invite a critical friend into your own kitchen? If you do, here are the key features that made our assessments successful:

  • Peer-led and in-person: Given the horizontal governance structure of the Collaborative, we were all peers in the process (albeit with different expertise). While my colleague and I prepared by reading as many foundational documents as we could before visiting each organization, the key to the assessments was the in-person meetings held across each organization. Drawing on design thinking and other human-centered methods, we facilitated participatory discussions in which teams defined their own learning needs.
  • Focus on content, then process: We began the assessments with a review of the organization’s strategy (and theories of change), then linked those to implementation, and only then examined current learning processes and thought about how to improve them. This grounded the analysis of the learning mechanisms in the core question: “learning about what, and for what purpose?”
  • Deep dive across organizational units: To understand not only the learning structures but also the learning culture of an organization, we conducted interviews and meetings with nearly all teams, including financial and other support teams.
  • No funding at stake: Focused on the needs of the organizations, these assessments were not tied to a donor’s performance review or to any specific grant. This mitigated the power dynamic between the assessors and each organization’s staff, and gave staff the freedom to engage without worrying about how the results might be interpreted by an external party, particularly one with funding power.
  • Candid and balanced feedback: We delivered initial feedback in person at the end of each visit, then followed up with a detailed written report and online conversations. It was the organization’s decision whether to share the outcomes of the assessment more publicly. The feedback always began by highlighting strengths in a given area, followed by an assessment of gaps and concrete recommendations for next steps. It included a range of voices – without forcing consensus on any issue – and noted examples of existing good practices.

The idea of peer learning is hardly new: we often say we want to learn from our peers, but usually this takes the shape of sanitized “sharing of lessons.” Real learning is not possible without honest reflection and the admission of mistakes and failures, which is tough but made easier with the help of a critical friend. For a more thorough look at the Learning Collaborative’s activities and results, see the Executive Summary.