Learning Note 8: “Understanding Citizen Preferences for Political Candidates | An Experiment in Rural Tanzania” originally appeared as a guest post on Twaweza’s website. The full paper on the methods can be found here.
This post follows from previous blogs documenting the governance research undertaken jointly by Twaweza and the MIT GOV/LAB. See the first and second posts describing the qualitative study that preceded the evaluation and field experiment. In this post, MIT colleagues describe the rationale and method applied; a forthcoming post will describe the results.
Why is the method big news? Because it has not been done in the Tanzanian context, because it is almost never done offline, and because it had to be customized to the population (rural, often with limited literacy). The GOV/LAB team has not only written new code and a manual for this type of experiment; they have also open-sourced the code and the manuals, as well as an app to create some of the essential materials.
How often do respondents in a survey spend an hour answering questions, see their answers deleted accidentally, and enjoy the process so much that they ask to restart the survey and do it again? After more than two decades of experience conducting field research across the globe, in countries such as China, Nigeria, and the Philippines, our answer was: never, until we collaborated with Twaweza on research in Tanzania immediately before and after the October 2015 general election.
The aim of this research was to answer the question: what candidate attributes most influence citizens' voting decisions? We designed a conjoint experiment in which we presented respondents with profiles of hypothetical MP candidates and asked them to vote for one of two candidates (or abstain) and to rate both candidates. Participants debated these hypothetical candidates, often laughing and joking with each other.
Conjoint analysis is an experimental method that presents respondents with hypothetical alternatives (for example, pairs of candidates) that randomly vary across several attributes. Respondents are then asked to select the alternative they prefer. In Tanzania we asked respondents to choose which of two hypothetical candidates they would prefer to vote for, where the candidates varied on six key attributes: religion, tribe, party, past performance in the community, past performance for individuals, and credibility of promises. Each attribute could take on one of two levels, or values, which were randomly assigned in each profile. This random assignment of attribute-levels allows for causal identification of how much each attribute-level influences respondents' voting decisions.(1)
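To make the randomization concrete, here is a minimal sketch in Python of how such profiles can be generated. The attribute names and levels are illustrative placeholders, not the exact wording used in the study:

```python
import random

# Six attributes with two levels each, mirroring the design described
# above. The level labels are illustrative placeholders only.
ATTRIBUTES = {
    "religion": ["Christian", "Muslim"],
    "tribe": ["coethnic", "non-coethnic"],
    "party": ["ruling party", "opposition"],
    "community_performance": ["good", "poor"],
    "individual_performance": ["good", "poor"],
    "promise_credibility": ["credible", "not credible"],
}

def draw_profile():
    """Independently assign a random level for each attribute."""
    return {attr: random.choice(levels) for attr, levels in ATTRIBUTES.items()}

def draw_choice_task():
    """One choice task: a pair of independently randomized candidates."""
    return {"A": draw_profile(), "B": draw_profile()}

print(draw_choice_task())
```

Because every level is assigned independently and with equal probability, any systematic difference in how often a level is chosen can be attributed to that level itself.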
We selected these six attributes by thinking about the hard tradeoffs that citizens must make when voting in real elections (for example, between a coethnic who did not perform well and a non-coethnic who offers promises and a plan). We also selected these attributes on the basis of what kinds of information citizens are likely to have about candidates, including both immutable and mutable characteristics. We implemented the survey in two ways: first, with groups of respondents who sat together and discussed the candidates before voting; and second, one-on-one with respondents at home, in private.
Conjoint analysis, originally developed for marketing research, has recently become popular in political science as a useful tool for understanding preferences over multidimensional alternatives. One useful aspect of conjoint analysis is that it is easy to turn this type of survey experiment into a game, which makes it more engaging for respondents. In addition, conjoint analysis does not rely on explicitly stated motivations or preferences; instead, it uses respondents' actions (for example, which candidate they voted for in the game) to identify how much each candidate attribute influences citizen behavior. This is useful because we know that people are bad at articulating their underlying cognitive processes or explaining why they behave a certain way (Nisbett and Wilson 1977). The method also reduces concerns about social desirability bias (the tendency of respondents to answer questions the way they think researchers want them to) because we never directly ask respondents how they evaluate candidates. We do not ask them, for example, how much a candidate's religion matters in their voting decision; instead, we let the data speak for itself.
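As a rough illustration of how such choice data are typically analyzed (a generic sketch, not necessarily the authors' exact specification), the effect of each attribute-level can be estimated by regressing the vote indicator on dummy-coded attributes, with standard errors clustered by respondent. The file and column names below are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per candidate profile shown,
# with `chose` = 1 if the respondent voted for that profile.
df = pd.read_csv("conjoint_responses.csv")  # assumed file name

# Linear probability model: because levels are randomly assigned, each
# coefficient estimates the average marginal component effect (AMCE)
# of that level relative to its baseline.
model = smf.ols(
    "chose ~ C(religion) + C(tribe) + C(party)"
    " + C(community_performance) + C(individual_performance)"
    " + C(promise_credibility)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})

print(model.summary())
```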
There are also several challenges to be aware of when implementing a conjoint experiment in the field. Most conjoint experiments take place online, with respondents who are recruited online and complete the survey themselves over their own Internet connections. We, however, wanted to run a conjoint experiment in person and offline, which is necessary in places like rural Tanzania where Internet connectivity is unreliable. Although tools exist to easily conduct a conjoint experiment online (for example, the survey software Qualtrics), implementing one offline proved more challenging. Randomly assigning candidate attributes on our tablets with the Qualtrics offline application required writing our own code, which we did in collaboration with Alexander Meyer. We have open-sourced that code; it and an instruction manual are available through this link.
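Their open-sourced code targets the Qualtrics offline application; as a hypothetical alternative illustration of the same underlying logic, one could also pre-generate all randomized choice tasks and load them onto tablets as a flat file, so that no randomization (and no connectivity) is needed at interview time:

```python
import csv
import random

# Same illustrative six-attribute, two-level design as in the earlier sketch.
ATTRIBUTES = {
    "religion": ["Christian", "Muslim"],
    "tribe": ["coethnic", "non-coethnic"],
    "party": ["ruling party", "opposition"],
    "community_performance": ["good", "poor"],
    "individual_performance": ["good", "poor"],
    "promise_credibility": ["credible", "not credible"],
}

N_RESPONDENTS = 500        # assumed sample size, for illustration only
TASKS_PER_RESPONDENT = 5   # assumed number of tasks, for illustration only

# One row per candidate profile; tablets only need to read this file.
with open("preassigned_tasks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["respondent", "task", "candidate", *ATTRIBUTES])
    for r in range(1, N_RESPONDENTS + 1):
        for t in range(1, TASKS_PER_RESPONDENT + 1):
            for cand in ("A", "B"):
                profile = {a: random.choice(lv) for a, lv in ATTRIBUTES.items()}
                writer.writerow([r, t, cand, *profile.values()])
```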
Another challenge of using conjoint experiments in developing-country contexts is that we often work with populations with limited literacy. In Tanzania we therefore presented the candidate attributes as images rather than following the typical practice of using text. We have also created an app that lets you use images for randomly assigned candidate attributes and produces printable conjoint profiles in PDF form. Figure 1 provides an example of the candidate profiles that we presented to respondents.
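The app handles this image-based layout for you; as a rough sketch of the idea (using the reportlab library, which is our assumption rather than the app's known implementation, with hypothetical image file names), a printable two-candidate profile page might be composed like this:

```python
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

# Hypothetical image files, one per randomly assigned attribute level.
candidate_a_images = ["church.png", "coethnic.png", "ruling_party.png"]
candidate_b_images = ["mosque.png", "noncoethnic.png", "opposition.png"]

c = canvas.Canvas("profile_pair.pdf", pagesize=A4)
width, height = A4
row_height = 120

c.setFont("Helvetica-Bold", 14)
c.drawString(100, height - 50, "Candidate A")
c.drawString(350, height - 50, "Candidate B")

# Stack one image per attribute in each candidate's column.
for i, (img_a, img_b) in enumerate(zip(candidate_a_images, candidate_b_images)):
    y = height - 100 - (i + 1) * row_height
    c.drawImage(img_a, 100, y, width=150, height=100)
    c.drawImage(img_b, 350, y, width=150, height=100)

c.showPage()
c.save()
```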
In our next guest blog post we present initial results from this conjoint experiment in Tanzania. Stay tuned!
At the bottom of the profile, respondents indicated which candidate they would like to vote for, or whether they would abstain. To vote for candidate A, a respondent would circle the "A" inside the circle; to vote for candidate B, the "B" inside the rectangle; and to vote for neither candidate, the "X" inside the triangle.
We also asked respondents to rate both candidates, regardless of which candidate they voted for, or whether they voted at all. Enumerators explained the bucket to respondents by saying: "Say you are going to give this bucket to the candidate, and the amount of water in the bucket represents how good you think this candidate will be at getting things done once in office—please draw a line to fill the bucket with water."