MIT GOV/LAB is excited to introduce our new postdoctoral associate, Virgile Rennard! Virgile joined MIT in September 2025. In addition to working with GOV/LAB, he runs the AI and Political Science reading group for the Social and Ethical Responsibilities of Computing (SERC) program and teaches the subject Applications of AI for Democracy. Virgile received his PhD in data science from École Polytechnique, where he was a CIFRE PhD student in the Data Science and Mining Team of the Computer Science Laboratory, supervised by Professor Michalis Vazirgiannis. He also worked as a research engineer at LINAGORA under the supervision of Dr. Julie Hunter.

What drew you to working at the intersection of AI and politics?

Virgile: I’ve always been fascinated by how collective decisions get made, especially in messy, real-world contexts like meetings and public debate. Early in my research, I worked on dialogue and meeting summarization, trying to understand how we can extract structure and meaning from complex, multiparty conversations.

As large language models became more powerful, something became obvious: these systems aren’t just summarizing or generating text; they’re shaping discourse. On one hand, these tools can help citizens engage with complex political topics and lower barriers to civic participation. On the other hand, if we rely on them too much, we risk narrowing the space of ideas in discussions and creating synthetic consensus.

LLMs influence which arguments are amplified, which perspectives are framed as reasonable, and how information is interpreted. That realization is what led me to shift toward studying political and moral bias in these systems.

Your earlier work focused on meeting understanding and summarization. How does that connect to your current research?

Virgile: At first glance, they do seem quite different. However, both are fundamentally about representation.

In meetings, summarization systems implicitly decide what is important, whose voice gets condensed into the final decision, and what gets left out. In political contexts, LLMs do something similar: they structure arguments, compress perspectives, and sometimes smooth over disagreement.

I worked on structured dialogue modeling, including graph neural network approaches to capture the dynamics and relative importance of different arguments within a meeting. That gave me a deep appreciation for how representation choices shape downstream understanding.

We’ve all seen public deliberations that get derailed or drift off-topic. Being able to model argumentative structure and identify why discussions shift (who influences whom, which claims anchor the debate) is crucial. That same lens now guides how I think about AI-mediated political discourse.

You’ll be joining MIT’s SERC initiative. What excites you most about being there?

Virgile: SERC’s multidisciplinary structure is very exciting. Questions about how AI will benefit humanity can’t be answered from a purely technical perspective. We need political scientists, computer scientists, philosophers, all kinds of people at the same table.

SERC’s mission is to understand how computational technologies can serve the broader public good. And right now, democracy is under real pressure from technological change, whether through misinformation, the negative epistemic effects of generative AI, or the concentration of power among a small number of AI developers.

What excites me is working in a space where the hard conversations have to happen, where the goal isn’t just to build systems, but to ask what kind of future they’re enabling. Having an opportunity to see this through the lens of other specialties is enlightening.

Can you share a bit about the course you are teaching this semester?

Virgile: This semester, I’m teaching a course on the Applications of AI for Democracy, which aims to look past the surface-level headlines. AI currently has a bit of a bad reputation in the public eye; it’s often viewed solely as a tool for disinformation or automated surveillance. I want my students to examine those preconceived notions and see that some of the fears about AI are exaggerated. We’ll be deconstructing these common preconceptions to understand the technology’s dual nature.

The curriculum spans the entire democratic lifecycle: we look at AI in elections, examining both deceptive deepfakes and the helpful mechanics of voter outreach; we dive into text-as-data to see how policy is actually shaped; and we explore the role of AI in governance and surveillance. My goal isn’t to tell students that AI is “good” or “bad,” but to give them the technical and analytical tools to see how these systems can either erode democratic institutions or be harnessed to make them more transparent and inclusive.

What motivates you personally in this work?

Virgile: I care deeply about the health of democratic systems. Technology is increasingly embedded in how citizens access information, how institutions make decisions, and how public discourse unfolds.

Left entirely to market incentives or scale dynamics, LLMs could have destabilizing effects on democratic deliberation. But at the same time, they hold enormous potential. My own research is grounded in ideals of deliberative democracy — systems where disagreement is visible, arguments are evaluated on their merits, and diverse perspectives are represented fairly.

Being able to study this moment, and to teach students how to engage critically with it, feels both intellectually rewarding and normatively important. It’s rare to work on something that is simultaneously technical, political, and deeply consequential.

Banner photo by Alina Grubnyak on Unsplash