Twelve students from MIT, Bowie State, Delaware State University, and Voorhees University participated in this year’s Social and Ethical Responsibilities of Computing (SERC) reading group on Generative AI and Democracy. Following the reading group, students worked together on a group project to study how generative AI can be used to improve democracy in a socially and ethically responsible way. The students gave presentations on their research to a packed dinner audience at MIT’s Schwarzman College of Computing on May 12, 2025. Read on to learn more about the projects and the students.
Title: Deliberate Self-Questioning: Socratic Dialogue, Perspective Defense, and Ideological Polarization
Group: Andrew Bowns, Jenna Saykhamphone, Peggy Yin, Siddhu Pachipala, Sydney Brown
Description: How effective are interventions involving Socratic dialogue and forced defense of an opposing perspective at reducing ideological polarization around issues of DEI in the workplace? Specifically, can these interventions shift participants’ ideological stances or improve their perceptions of the reasonableness of those holding opposing views? This study implemented an online experiment using deliberation.io to assess the effects of three experimental conditions: a control condition, in which participants respond to prompts without any intervention; a Socratic Dialogue condition, in which participants are asked to reflect on their assumptions and reasoning; and a Perspective Defense condition, in which participants are asked to think through the perspective of someone holding the opposite view.
The study’s results show that Socratic Dialogue significantly reduced support for comments opposing DEI compared to the control condition, while Perspective Defense did not differ significantly from control. However, neither Socratic Dialogue nor Perspective Defense significantly changed people’s ideological position on the issue, suggesting that one-time chatbot interventions may not be sufficient for deep ideological change. Overall, structured self-questioning shows promise for softening intense opposition, but broader opinion shifts remain difficult.
Title: Comparing Human and AI-generated Political and News Content
Group: Hannah Han, Lila Chen, Sanoe Lester
Description: In politics, persuasive communication plays a critical role in shaping public opinion and influencing voter behavior. Using a survey experiment, the study explores whether people are persuaded differently by AI-generated versus human-written news content on tariffs, reproductive rights, immigration, and gun control policies. It also examines whether people place more trust in content from AI or from humans.
The results suggest that political news content, whether human-written or AI-generated, may not be persuasive enough for people to significantly change their views on the four topics. However, trust in the source of a passage changed significantly after it was revealed whether the content was generated by AI or by humans, showing that overall trust in AI participation in politics remains low across the public. These results suggest that AI is capable of mimicking human-written political news content, but the public remains skeptical of AI as an accurate, credible source of information. As AI tools become more prevalent in public communication, understanding this gap will be essential for policymakers.
Title: Social Media, AI, and Polarization: How Demographics and Politics Shape Digital Experiences
Group: Cassidy Jennings, Daniel Xu, Kameron Garland, Vacavia Mckenzie
Description: How do demographics and politics shape a person’s social media experience? What implications does this have for polarization? Using survey research, this project explores people’s social media experiences and studies how demographic differences change those experiences.
Key patterns discovered in the research include: 1) most respondents, regardless of age or politics, were concerned about AI’s role in political discourse; 2) younger users (18-34) used TikTok and Twitter/X and trusted AI-generated content more than older users did; 3) conservatives on TikTok and liberals on Twitter/X each found their platform to amplify extreme views; and 4) heavy users of these platforms were more likely to perceive rising political polarization. These findings suggest that polarization is driven not only by platform or AI design, but also by who the users are, how they engage, and what they trust. Addressing polarization on social media will require strategies that go beyond technical fixes to account for users’ trust, perceptions, and vulnerabilities in digital spaces.
Photos by Sasha Rollinger.