This brief was re-posted from the Transparency and Accountability Initiative website.

While conducting evidence reviews for the Transparency and Accountability Initiative (TAI), we developed a pilot tool that incorporates contextual factors (or “scope conditions”) into the interpretation of experimental and quasi-experimental results.

Say you are implementing or funding governance programs in Tanzania, and you are interested in evidence of how citizens can hold local bureaucrats accountable. You may reasonably assume you should prioritize learning from experiences in Uganda or Kenya; after all, these are East African countries, and a number of influential studies have been conducted there.

But how similar are these countries, really, in terms of the key contextual factors that you think might influence whether citizens are able to hold bureaucrats accountable? For example, are civil freedoms equally protected in these countries? Are there similar degrees of bureaucratic professionalization? What would you say if the recommendation was not to look for evidence from Kenya and Uganda at all, but from Peru and Pakistan instead, based on the factors that you defined? We all know that “context matters,” but how should we incorporate contextual factors when parsing the literature for “what works” to inform programmatic decisions?

Going beyond “context matters”

Social science has long struggled with the generalizability question – that is, whether and how findings in one context translate to another. Policy and program implementers will always consider a range of factors and information in making decisions; when it comes to research evidence, we would like them to use the “best” evidence possible. In our view, the “best” evidence is not only rigorous but also contextually relevant, yet common approaches to reviewing literature do not systematically include contextual factors.

We developed a method to do so while conducting a literature review for TAI, a donor collaborative focused on governance. The request was to parse the last ten years of evidence and update our common understanding of whether certain initiatives are effective in promoting citizen engagement and government responsiveness. While compiling the report, How to Learn from Evidence: A Solutions in Context Approach, we developed an Evidence Tool to illustrate how contextual factors could be incorporated into a review of experimental and quasi-experimental findings.

Be specific about accountability relationships and context

In our Evidence Tool, you first identify the main accountability actors and the relationships between them. It sounds overly simplistic, but we often fail to specify the pathway and actors through which a change is meant to occur. It’s better to phrase requests for evidence reviews as a series of very specific questions in which you clearly identify the actors exercising accountability and the actors that are meant to respond. In the How to Learn from Evidence memo we outline some useful categories of actors in the governance context: for example, while many initiatives focus on citizen action leading to greater responsiveness by government, whether it is feasible for citizens to hold government accountable depends on whether the targeted officials are elected politicians, technocrats, or front-line service providers.
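To make this concrete, here is a minimal sketch of what such a structured evidence question might look like in code. The field names and example values are our own illustration, not part of the tool or the memo.

```python
from dataclasses import dataclass

# A hypothetical structure for a specific evidence question: name the actor
# exercising accountability, the actor meant to respond, and the mechanism
# connecting them. (Field names are illustrative, not from the tool.)
@dataclass(frozen=True)
class EvidenceQuestion:
    accountability_actor: str  # who acts, e.g., "citizens"
    target_actor: str          # who is meant to respond, e.g., "local bureaucrats"
    mechanism: str             # e.g., "information about government budgets"

question = EvidenceQuestion(
    accountability_actor="citizens",
    target_actor="local bureaucrats",
    mechanism="information about government budgets",
)
```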

Second, our tool allows you to search for evidence from contexts similar to yours. Most evidence reviews make no claims about context, focusing instead on the similarities of the intervention and the measured effects. Our three reviews also don’t make many claims about context; it’s through the interactive Evidence Tool that we are testing the feasibility of a format that incorporates these factors and therefore allows us to query evidence in a more thoughtful way.

This sounds good, you may say, but there is an infinite number of contextual factors; how do we know which ones are important when we want to identify contexts similar to ours? We suggest focusing on the factors that are most likely to affect the specific accountability relationship between the actors you have identified. Take, for example, a transparency initiative that makes information about government budgets widely available to citizens. Among other factors, we would want to know: do the citizens live in a competitive democracy where they can vote out politicians unable to account for the misuse of public funds? Or do they live in a clientelist system where they are so dependent on goodies distributed at election time that they don’t feel they can afford to vote their patrons out of office, even after learning how much their politicians are stealing?

An initial selection of factors is presented in the How to Learn from Evidence memo; these were chosen based on cumulative theoretical understanding of which kinds of factors are salient for which types of accountability relationships. We then looked for datasets, available for a large number of countries, that measure these contextual characteristics (e.g., degree of economic development, level of corruption/clientelism, etc.).
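As a rough illustration of what this looks like in practice, the sketch below assembles a small country-by-factor table and standardizes each factor. The countries, factor names, and values are invented for illustration; the memo’s actual data sources and scales may differ.

```python
import pandas as pd

# Invented indicator values for five countries (for illustration only;
# the actual datasets, factors, and scales behind the tool may differ).
raw = pd.DataFrame(
    {
        "economic_development": [1100, 850, 2100, 6700, 1500],  # e.g., GDP per capita
        "rule_of_law": [-0.6, -0.4, -0.3, -0.2, -0.7],          # e.g., a governance index
        "corruption_clientelism": [0.7, 0.8, 0.6, 0.5, 0.75],   # e.g., an expert survey score
    },
    index=["Tanzania", "Uganda", "Kenya", "Peru", "Pakistan"],
)

# Standardize each factor (z-scores) so that no single indicator dominates
# similarity comparisons simply because of its units.
standardized = (raw - raw.mean()) / raw.std()
print(standardized.round(2))
```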

A tool that brings context into the evidence base

The Evidence Tool is built so that you specify the country in which you are interested, and then select how many “most similar” countries you want to see results from. Returning to our opening example: say you are working in Tanzania and are interested in evidence of how citizens can hold local bureaucrats accountable. If you plug “citizens” and “bureaucrats” into the Evidence Tool, specify “Tanzania” as the referent country, and ask for the three most similar contexts, you get Peru, Pakistan, and India. Looking at the section on “comparability factors,” you can see that four (of the currently available ten) factors were applied to come up with the matching country contexts: level of economic development, strength of rule of law, regime type, and degree of corruption/clientelism.[1] The search returns the eight most relevant studies (hovering over the result boxes shows that there aren’t any studies from Peru, but there are studies from India and Pakistan). This is where the real work starts: once the studies are suggested, the user still has to read through them (the tool includes abstracts, but not full papers) to determine the applicability and relevance of the intervention, the study, and its findings.
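We do not describe the tool’s actual matching algorithm here, but one simple way to implement “most similar contexts” is nearest-neighbor matching on the standardized factors, as sketched below. The Euclidean-distance metric and the `most_similar` helper are our assumptions for illustration, not necessarily what the Evidence Tool uses.

```python
import numpy as np
import pandas as pd

def most_similar(standardized, referent, k=3, factors=None):
    """Return the k countries closest to `referent` by Euclidean distance
    over the selected standardized contextual factors.

    This is a guess at the matching logic; the actual Evidence Tool may
    select, weight, or combine factors differently.
    """
    cols = list(factors) if factors is not None else list(standardized.columns)
    diffs = standardized[cols] - standardized.loc[referent, cols]
    distances = np.sqrt((diffs ** 2).sum(axis=1)).drop(referent)
    return distances.nsmallest(k)

# Usage, with the `standardized` table from the previous sketch:
#   most_similar(standardized, "Tanzania", k=3)
# returns the three closest country contexts and their distances.
```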

In sum, the experience of generating the TAI evidence reviews prompted us to think more about how we engage with evidence – how we ask questions, how we consider context, how we apply the knowledge to our own decisions and designs. We think that incorporating these factors into the search and interpretation of evidence would be a significant improvement for practitioners and funders alike. What do you think?

The pilot evidence tool is available online at MIT GOV/LAB; feedback is welcome (mitgovlab@mit.edu).

The Learning from Evidence series documents a learning process undertaken by the Transparency and Accountability Initiative to engage with and utilize the evolving evidence base in support of our members’ transparency and accountable governance goals. We are pleased to have partnered with MIT’s Governance Lab and Twaweza on this initiative. This series comprises a variety of practice- and policy-relevant learning products for funders and practitioners alike, from evidence briefs, to more detailed evidence syntheses, to tools to support the navigation of evidence in context.

[1] Note that if you do not specify how many comparison countries/contexts you want, the search will return only the studies from your original country of interest. In this case, there are zero studies from Tanzania that look at the effect of information on whether citizens are able to hold bureaucrats accountable.