We worked with the Busara Center for Behavioral Economics to assess what types of messaging and messengers are the most effective in increasing Covid-19 vaccine confidence and uptake. This project is part of the Vaccine Confidence Fund and the main geography of study is Kenya, with smaller studies in Nepal and the Philippines. The case study is part of the larger ‘Vaccine Confidence Fund Insights Report’ published by the Alliance for Advancing Health Online. The full report can be accessed here. The case study is as follows:

Project Summary

We use a novel approach to crowdsourcing original content from social media users in Kenya, Nepal, and the Philippines to build and test locally driven COVID-19 vaccine campaigns. First, we crowdsource social media content for different types of vaccine messaging. Second, we use survey experiments to test which types of messaging are most effective across country contexts. Third, we use news feed experiments to evaluate the extent to which top messaging strategies are effective within a competitive information environment in each country. We use survey measures to track vaccine confidence and behavioral measures to track vaccine uptake.
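To make the design concrete, here is a minimal sketch of how the second stage (the survey experiments) could be set up and summarized in code. This is an illustrative outline under assumed names: the condition labels, the vaccine_confidence outcome, and the simulated data are hypothetical and not taken from the study.

```python
# Minimal sketch of a survey-experiment stage: randomly assign respondents to a
# messaging condition and compare a vaccine-confidence outcome across conditions.
# All names (conditions, columns) are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

def assign_conditions(respondents: pd.DataFrame, conditions: list) -> pd.DataFrame:
    """Randomly assign each survey respondent to one messaging condition."""
    out = respondents.copy()
    out["condition"] = rng.choice(conditions, size=len(out))
    return out

def average_confidence_by_condition(responses: pd.DataFrame) -> pd.Series:
    """Mean vaccine-confidence score per messaging condition."""
    return responses.groupby("condition")["vaccine_confidence"].mean()

# Illustrative usage with simulated data standing in for real survey responses.
respondents = pd.DataFrame({"respondent_id": range(1000)})
assigned = assign_conditions(respondents, ["local_video", "who_video", "control"])
assigned["vaccine_confidence"] = rng.integers(1, 6, size=len(assigned))  # placeholder 1-5 scale
print(average_confidence_by_condition(assigned))
```

In the actual study, the outcome columns would come from the survey measures of vaccine confidence and the behavioral measures of uptake described above, rather than simulated values.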

Key Findings

The study had two main research objectives: 1) to assess the feasibility of crowdsourcing COVID-19 video content from social media users; and 2) to test whether local or international COVID-19 video content is more effective across contexts at increasing vaccine willingness and reducing misinformation.

The first objective was tested rigorously in Kenya, our primary country context, with the following high-level results:

  • Social media users watch videos and can answer questions about their content (a proxy for whether people watch videos and pay attention; see the filtering sketch after this list). In a survey of 12,000 users, most video watchers (90%) correctly answered a content question about the video.
  • It is possible to run targeted social media campaigns that ask randomly reached users to create a video on a specific issue. Of 4,400 social media users who submitted a test video, around 450 uploaded a persuasive video aimed at getting people vaccinated.
  • Crowdsourced video content improves when instructional infographics are provided about how to film a good video, including tips on lighting, sound, framing, and eye-contact.
  • A survey of people who produce videos suggests that this group has strong beliefs about what messaging will be persuasive. For instance, they think positive videos are more convincing than negative videos (75%), and that videos of women are better at convincing men while videos of men are better at convincing women.
  • Perhaps unsurprisingly, people who submit videos come up with creative and intuitive strategies to persuade people that are consistent with behavioral theory. For example, highlighting social norms, downplaying risks, and emphasizing social benefits, economic effects, peace of mind, and stress reduction were all strategies used. Storytelling (e.g., featuring people who were affected by COVID-19) was also a common motivator.
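As a rough illustration of the attention-check approach mentioned in the first bullet, the sketch below filters survey responses to watchers who answered the content question correctly before computing a pass rate. The column names are hypothetical; the study's actual survey instrument and variables may differ.

```python
# Hypothetical attention-check filter: column names are illustrative assumptions.
import pandas as pd

def attention_pass_rate(responses: pd.DataFrame) -> float:
    """Share of video watchers who answered the content question correctly."""
    watchers = responses[responses["watched_video"]]
    return float((watchers["content_answer"] == watchers["correct_answer"]).mean())

def keep_attentive(responses: pd.DataFrame) -> pd.DataFrame:
    """Keep only watchers who passed the content check, for downstream analysis."""
    watchers = responses[responses["watched_video"]]
    return watchers[watchers["content_answer"] == watchers["correct_answer"]]
```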

Image: Ensuring high-quality videos. Providing infographics on capturing videos helped to ensure high-quality results.

Sourcing local content videos across contexts:

Replicating a smaller version of an online contest to source local content videos across Kenya, Nepal, and the Philippines proved more challenging. For example, spending the same amount on advertising over a set period (2-3 weeks) produced a few videos in Kenya and the Philippines but none in Nepal. Doubling the contest reward in Nepal also failed to produce more video content. As a result, we had to recruit locally produced videos in Nepal through individual messages rather than randomly through social media advertising. More on this in the key process insights below.

For the second research objective on the effectiveness of local content against international content from the World Health Organization (WHO), we include below initial pilot baseline measures from Kenya. These measures are based on a small sample size and may change with additional data collection.

  • With the current pilot sample, there is no treatment effect, so we can't compare across videos. We will need to complete the full study to see if these results hold or if the sample size is too small.
  • We do have some findings about specific groups and individual videos. For example, initial results suggest that WHO content can be the most effective for some groups: 1) individuals who trust science are more likely to pledge to get a COVID-19 vaccine after seeing WHO content; 2) individuals who know where to get a vaccine are more likely to agree with statements that they would get vaccinated for their friends and family after seeing WHO content (a hypothetical analysis sketch follows this list).
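One way to probe subgroup patterns like the two above is to compare conditions within subgroups and fit a simple interaction model. The sketch below is a hypothetical illustration of that kind of analysis, not the study's actual code; the variable names (condition, trusts_science, pledged_vaccination) are assumptions.

```python
# Hypothetical subgroup analysis: do WHO videos move the pledge outcome more among
# respondents who report trusting science? Variable names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

def subgroup_means(df: pd.DataFrame) -> pd.DataFrame:
    """Mean pledge rate by messaging condition within each trust-in-science subgroup."""
    return df.groupby(["trusts_science", "condition"])["pledged_vaccination"].mean().unstack()

def interaction_model(df: pd.DataFrame):
    """OLS with a condition-by-trust interaction to test whether effects differ by subgroup."""
    return smf.ols("pledged_vaccination ~ C(condition) * trusts_science", data=df).fit()

# df would hold one row per respondent with columns such as:
#   condition            e.g. "who_video", "local_video", "control"
#   trusts_science       0/1 indicator from the baseline survey
#   pledged_vaccination  0/1 behavioral pledge outcome
```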

Additional follow-ups and analyses are underway to further unpack these initial results, and we are also gathering input from stakeholder organizations to interpret these findings.

Key Process Insights

Below we include key process lessons learned on both the technical and substantive sides of running online video contests and surveys in our selected geographies:

  • Invest in the technical infrastructure necessary to run complex campaigns and multi-stage surveys in developing country contexts. The rationale for our target geographies was the paucity of rigorous online experiments in these contexts. As a result, we also had to build the online campaign and experiment infrastructure from scratch in parallel with implementing an ambitious multi-country study. A key hurdle was figuring out how to process thousands of small survey payments and incentives across geographies with different policies in a cost-effective way. This infrastructure investment will continue to serve and expand for future studies, but it was a big challenge to build out and required dedicated engineering, technical resources, time, and changes to our study protocol.
  • Balancing the timeliness of crowd-sourced treatments with research timelines and processes. The benefit of crowd-sourced content is its relevance and timeliness, but time passed between receiving the videos and implementing the surveys. This lag, combined with the rapidly changing COVID-19 information ecosystem, made us question whether the content was still appropriate and relevant (for example, the WHO treatment video on "herd immunity" may have lost persuasiveness over time). Our process here may have been too methodical or slow to capture the spirit and usefulness of crowdsourced content.
  • Adapting across contexts took experimentation and time, and did not always yield results. Though social media usage numbers are comparable across Kenya and Nepal, with the Philippines leading in numbers, in practice it was much more challenging to replicate our process for obtaining crowdsourced videos and building survey pools in each country. For example, despite contextualizing online advertisements in local languages, spending the same amount on ads as in Kenya and the Philippines resulted in very little uptake in Nepal. Even doubling the compensation for video submissions did not improve engagement numbers. These contextual differences were important learnings and required iterative experimentation and adaptation of the study protocol. They also served as a justification for the country selection and replication, to build knowledge in and across these understudied contexts.

Recommendations

We are currently getting feedback from various in-country stakeholders and partners on our initial results to further refine specific recommendations and messaging for health implementers. These early recommendations are preliminary and may undergo additional iterations. We welcome input or suggestions on their utility and applicability across contexts.

High-level:

  • To gather better insights into attitudes and behavior and to close the research gap, we need to invest heavily in online research infrastructure. Online social media and information campaigns will be an increasingly important medium for public health outreach. To gain insights on effective public health messaging and compliance with health measures and policies, we need to invest in online infrastructure to better test messaging and behavioral outcomes. Technically, this requires better integration across social media, pixel tracking, and survey platforms (e.g., Facebook and Qualtrics), and cost-efficient infrastructure for incentives and payments to respondents (a hypothetical plumbing sketch follows this list).
  • Sourcing local content is possible but challenging, and varies significantly by context. Further testing the utility of crowdsourcing requires resources, time, and patience for experimentation and adaptation. We had to experiment with ad spend, incentives, and treatment instructions before seeing outputs in terms of videos and surveys. This exploratory process was repeated in each country context with varying results. A comparable small-scale campaign run across countries was more successful in Kenya and the Philippines, and less so in Nepal. We were limited here because we could not find a cost-effective way to make survey incentive payments across countries.
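The sketch below gestures at the kind of plumbing the first recommendation refers to: carrying a shared identifier from the ad, through the survey, to the incentive payment so the three data sources can be joined. It deliberately omits platform-specific APIs (Facebook, Qualtrics, payment providers); the URL, field names, and identifiers are hypothetical.

```python
# Hypothetical pipeline plumbing: pass a respondent ID from ad to survey to payment,
# then join the three record sets. Names and the URL are illustrative assumptions.
from urllib.parse import urlencode
import pandas as pd

def survey_link(base_url: str, campaign_id: str, respondent_id: str) -> str:
    """Embed campaign and respondent identifiers in the survey URL as query parameters."""
    return f"{base_url}?{urlencode({'campaign': campaign_id, 'rid': respondent_id})}"

def join_pipeline(ads: pd.DataFrame, surveys: pd.DataFrame, payments: pd.DataFrame) -> pd.DataFrame:
    """Join ad exposure, survey responses, and incentive payments on a shared respondent ID."""
    return surveys.merge(ads, on="rid", how="left").merge(payments, on="rid", how="left")

# Example of constructing one tracked link (placeholder URL):
print(survey_link("https://example.org/survey", "kenya_wave1", "r-00042"))
```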

Substantive:

  • Build capacity to enable rapid testing for public health campaigns on social media with timeliness and relevance as priorities. Social media and information ecosystems change rapidly, especially in the context of COVID-19 variants and pandemic developments, requiring flexibility and creativity. What we thought would be relevant and useful in the proposal stage changed significantly throughout the study period as international and local movements and variants led to rapid policy change around mask-wearing, lockdowns, curfews and travel, and resources (as well as attitudes) for vaccination. Crowdsourced content, which is timely and perhaps short-lived by nature, requires a shorter test horizon.
  • In Kenya, social media users watch and pay attention to videos online, and they have strong beliefs about what makes for persuasive content. In our primary study context of Kenya, people who produce videos for Facebook reported in a survey that they felt strongly that video content can persuade, that positive videos are more persuasive than negative ones, and that people of the opposite gender are more convincing messengers (i.e., women are better at convincing men).
  • To effectively target content-specific messages, we need to understand people's baseline beliefs. Initial baseline pilot results show that some video campaigns might be more effective for certain subsets of the population. We need to know more about people's baseline beliefs to effectively target content-specific messaging, which requires additional infrastructure and data.
