This blog was originally posted by the Transparency and Accountability Initiative.
What happens when you put four civil society organizations and two supporting academic entities together and shake, adding a sprig of funding and a large pinch of learning? You get a new cocktail: The Learning Collaborative.
Joking aside, the Collaborative was conceived as an experiment among six organizations working on governance issues in the global south. The hypothesis was that a joint learning agenda could achieve more than separate ones: the organizations would further their internal learning, support each other, improve their practice, and contribute knowledge to the transparency, accountability, and participation field.
In the learning spirit, we highlight five key questions that have emerged from our effort to knit together a functional, meaningful learning network. To be honest, the Collaborative had a rocky start, and we think it's useful to share our current reflections on why that may be.
What is the right size for a learning collaborative?
The Collaborative was always intended to be small, building on a key lesson from its predecessor, the TALEARN initiative: large gatherings were helpful for networking and bilateral partnerships, but not for building sustained learning collaborations. But how small is too small? If one of the four practitioner partners experiences setbacks (whether within the organization or stemming from its context), the entire Collaborative is affected. A small group has a difficult time absorbing such risks.
What is a good mix of members?
We believe diversity is good for learning, but not having enough in common makes it challenging to craft a joint learning agenda, particularly in a small group. The members all share the key elements that brought them to the Collaborative: each is active in the transparency, accountability, and participation (TAP) field as a non-governmental practitioner (or academic), and each is committed to learning within its own organization. But we are also markedly different in terms of organizational goals, the specific problems those goals address, and concrete areas of work. For learning to be meaningful it has to be specific and applied, and as such the differences in our learning and action aims suddenly appeared larger than expected. For example, everyone in the Collaborative is interested in promoting citizen participation, but while CEGSS wants to learn how to better support grassroots indigenous networks of health monitors in Guatemala, Twaweza wants to galvanize young people to take part in local government decision-making in Uganda, Dejusticia is monitoring a legal requirement for districts in Colombia to hold community consultations, and Global Integrity is investigating what triggers citizen anti-corruption action in Tunisia. Finding commonalities at the conceptual level gave a sense of joint purpose, but building concrete learning agendas that speak to the direct needs and questions of implementers proved far more challenging.
How much trust do you need to learn together, and how long does it take to build the trust?
The Collaborative aims to align and facilitate the learning interests of partners and their ongoing initiatives. It's meant to be the added icing on the cake (i.e., ideas, funding, and time) that many implementers say they lack to really focus on the learning component of their work. We banked on the fact that the members of the Collaborative are active organizations eager to learn, and so could co-design and launch joint learning projects without much further ado. But we glossed over the fact that many of the organizations in the Collaborative had not worked together previously and needed time to build confidence and trust in each other. While trust is a key factor in any alliance, it turns out that the need for trust is even greater for a group whose premise is to learn together – an endeavour in which participants have to be open about success and failure. We realized mid-way through that these cross-organizational learning initiatives were not going to happen unless we built in moments where members could spend time together in person, building trust through ideation and reflection. In retrospect, we ought to have argued for a start-up phase of at least six months explicitly focused on trust-building and the design of learning initiatives.
How much structure and support does a Learning Collaborative need?
The Collaborative was designed horizontally to avoid a heavy management structure. The idea was that if the members were invested in the learning, they would make it happen. The Collaborative also intentionally positioned academic members in a co-creative (rather than leading) role in setting the learning agenda. On the positive side, this means that the core questions of interest in the Collaborative are driven by practitioner needs. On the downside, it means that the academics – for whom learning is in fact their core business – wait to be activated by practitioner requests. The members say that the structure and operation of the Collaborative have been effective in promoting an open, honest, and equal exchange among all partners involved. But in the purely horizontal model, no one felt empowered or responsible for shaping collaborative decisions. More leadership and more active facilitation in the second year helped to develop and manage the common learning agenda, while retaining the equilibrium of the Collaborative.
Does commitment to learning provide sufficient “glue”?
Cross-organizational learning is, for the implementing organizations, the icing, not the cake. As much as they all have robust monitoring and evaluation mechanisms and learning strategies, these organizations do not exist to learn. Rather, they exist to pursue their substantive missions, implementing their project portfolios and seeking transparency, accountability, and participation outcomes. The Learning Collaborative purposely avoided project implementation funding. But it stands to reason that for implementing organizations to really have skin in a collaborative learning game, they would likewise need to have skin in a collaborative implementation game. This would, after all, play to their core interests and strengths. One way to achieve this might be to engage from the start organizations that are more similar in terms of their programming. But if we continue to believe that diversity of approach and perspective is a strength, another option could be a larger pool of organizations in which matches could be found more easily. Or perhaps organizations could be given the freedom (as well as funding) to jointly design initiatives with enough programming in common to build a meaningful learning agenda across settings, yet adaptable enough to fit each organization's mandate and interests.
The last point gives us food for thought for the future: as one of our funders asked, if we were to re-write the proposal for the Collaborative today, what would we change? For the remainder of the year, we will be implementing a number of cross-organizational projects while percolating on the "how would we do it differently" question. We plan to share openly and honestly how this Learning Collaborative experiment turns out overall, and what it teaches us about mixing strange new learning cocktails.
Many thanks to Walter Flores, Jonathan Fox, Karen Hussmann, Michael Moses, and Baruani Mshale for their review of, and contributions to, this blog.