AI Strikes Back Against Vaccine Hesitancy

Vaccine hesitancy has a long history, with the widespread scepticism surrounding the COVID-19 vaccines among the most recent examples. This hesitancy has hindered public health efforts, prompting researchers to explore generative AI tools to help curb vaccine misinformation and its consequences.

A new study by Hang Lu, an assistant professor at the University of Michigan, has investigated how AI-generated messages specifically tailored to individuals’ personality traits can enhance the effectiveness of vaccine communication. 

Rather than producing more generic fact checks, Lu used OpenAI’s ChatGPT to craft targeted messages addressing vaccine-related pseudoscientific beliefs, tailored to personality traits such as extraversion. The core information remained the same, but the messages were rephrased to feel more emotionally aligned with the recipient’s personality.
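The study’s exact prompts are not reproduced in the article, but the underlying pattern, asking an LLM to rewrite a fixed set of corrective facts in a trait-congruent tone, is easy to sketch. The snippet below is a minimal illustration using OpenAI’s Python SDK; the model name, prompt wording, sample facts and the extraversion_level parameter are assumptions made for illustration, not materials from Lu’s study.

```python
# Minimal sketch: rephrasing the same corrective facts so the tone matches a
# reader's extraversion level. Prompt wording, model choice and the facts below
# are illustrative assumptions, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CORE_FACTS = (
    "COVID-19 vaccines were tested in large clinical trials, do not alter human "
    "DNA, and substantially reduce the risk of severe illness."
)

def tailor_message(extraversion_level: str) -> str:
    """Rewrite the same core facts for a high- or low-extraversion reader."""
    style = (
        "an upbeat, sociable tone that emphasises shared activities and community"
        if extraversion_level == "high"
        else "a calm, reflective tone that emphasises quiet, personal benefits"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do for this sketch
        messages=[
            {"role": "system", "content": "You write short public health messages."},
            {
                "role": "user",
                "content": (
                    f"Rewrite the following facts as a three-sentence vaccine message "
                    f"in {style}. Do not add or change any factual claims.\n\n{CORE_FACTS}"
                ),
            },
        ],
    )
    return response.choices[0].message.content

print(tailor_message("high"))
print(tailor_message("low"))
```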

“Extraversion was a logical starting point because it’s a well-researched, stable trait with clear behavioural cues. But many other characteristics could influence how people respond to messages, both psychological and demographic,” Lu told AIM.

Considering External Factors 

However, the research also highlights significant risks: AI may inadvertently reinforce harmful beliefs, particularly in cases where pseudoscientific ideas are deeply entrenched. While the study primarily focuses on personality traits, it does not examine other psychological or demographic characteristics that may influence the effectiveness of AI-generated messages.

“Traits like openness to experience, need for cognition, or even risk tolerance could affect how individuals process health information. On the demographic side, factors like age, education, and cultural background often shape trust in science and institutions,” the author added. 

According to the study, the extraversion-targeted messages significantly reduced vaccine misbeliefs, outperforming high-quality generic messages, especially among participants with high levels of extraversion. However, these AI-generated messages may not have a lasting impact: the study measured only short-term belief change, assessed immediately after exposure to the message.

“While the findings are promising, we know that misbeliefs—especially those tied to identity or ideology—can be remarkably persistent. It’s likely that a single message isn’t enough. Long-lasting effects may depend on repeated exposure, reinforcement from trusted sources, and integration into broader communication campaigns,” Lu explained further. 

Lu also believes that AI can play a role in generating such messages at scale, but notes that sustaining the change in belief will require more thoughtful strategies and engagement. Further research on longer-term effects would be valuable to establish whether the improvements produced by customised messages persist or fade over time.

Barriers in AI Communication Systems

There are also psychological barriers that AI communication systems do not account for, because such factors are not part of what the models learn from. The health sector must also recognise that while AI has opened up possibilities for more effective communication strategies, its potential is not unlimited, and messages tailored to an individual’s personality are not enough on their own.

Lu said that he’s also “exploring other forms of customisation, such as tone, visual design, or narrative framing. AI offers a flexible platform to test many of these variations quickly, and my goal is to better understand not just what works, but for whom and under what conditions. That kind of precision could make public health messaging more effective and more inclusive at the same time”.

The intricacies of human belief systems demand a deeper understanding, especially when those beliefs shape how individuals are treated on the basis of race, colour, caste, and other factors rooted in outdated thinking. According to Lu’s analysis, misbeliefs tied to personal motivations or identity are more resistant to correction, as contradictory information from an AI system can trigger defensiveness or cynicism.

Once such barriers are entrenched in a person’s belief system, it becomes difficult to predict how they will react to AI-generated messages and respond to corrective information. For all these reasons, it is crucial to keep humans involved wherever AI is deployed for this kind of messaging.

“The ideal model is one where AI acts as a creative assistant—not a replacement—for public health professionals. AI is great at quickly generating message drafts or tailoring content to different audiences, but it lacks the contextual awareness and ethical judgment of human communicators,” Lu said. 

The Future of AI-Assisted Messaging in Public Health 

According to the study, the use of LLMs has transformed the landscape of targeted messaging by enabling automated, scalable customisation. It also highlights ChatGPT’s consistent success in producing persuasive, targeted messages in various formats, even when given only brief prompts.

“Especially during a fast-moving health crisis, this can accelerate response time while maintaining quality. Importantly, public health teams should have workflows in place for prompt engineering, content review, and message validation to ensure accuracy and alignment with local needs,” Lu explained. 
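The study does not spell out what such a workflow looks like, but its general shape is easy to sketch: generated drafts are checked against the core facts they must retain and then queued for human sign-off rather than published automatically. The required-facts check and review queue below are hypothetical, offered only to illustrate the idea.

```python
# Hypothetical review-and-validation step for AI-drafted health messages; the
# required-facts check and human sign-off queue are illustrative assumptions,
# not a workflow described in the study.
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    audience: str
    issues: list[str] = field(default_factory=list)

# Core claims every released message must still contain after rephrasing.
REQUIRED_FACTS = ["clinical trials", "severe illness"]

def validate(draft: Draft) -> Draft:
    """Flag drafts that drop required facts; all drafts still need human approval."""
    for fact in REQUIRED_FACTS:
        if fact not in draft.text.lower():
            draft.issues.append(f"missing required fact: '{fact}'")
    return draft

drafts = [
    "Vaccines were tested in large clinical trials and cut the risk of severe illness.",
    "Get vaccinated with your friends this weekend!",  # drops the core claims
]
review_queue = [validate(Draft(text=d, audience="high extraversion")) for d in drafts]
for item in review_queue:
    print(item.audience, "->", item.issues or "ready for human review")
```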

Lu believes that AI could become a crucial tool in combating misinformation within healthcare systems. However, the use of generative AI models to correct vaccine-related misbeliefs remains largely unexplored: earlier successes with AI-generated content in healthcare have relied on extensive, interactive exchanges to address vaccine misinformation.

“We might soon see real-time AI systems that [support] public health teams’ response to emerging rumours or disinformation in a matter of hours rather than days. But again, this potential is only realised if AI is used responsibly—with human oversight, ongoing testing, and clear ethical guidelines. Done right, AI could help public health communicators keep pace with the speed and scale of misinformation,” he added. 

As AI takes on a more central role in public health, particularly in messaging, the ethical implications of using AI-generated content to counter misinformation must be closely scrutinised.

Even with precautions in place, AI messaging can unintentionally reinforce biases, discriminate against certain communities, or marginalise specific groups. As Lu pointed out, “These tools are only as unbiased as the data and prompts that shape them. There’s a real risk of inadvertently reinforcing stereotypes or excluding vulnerable communities if we’re not careful.”

Collaboration between AI and humans, approached with attention to bias, can optimise public health communication. Addressing these limitations therefore requires diverse datasets, inclusive prompt design, and clear review protocols.

“Community involvement is also key—partnering with those most affected by health disparities can help ensure that AI-generated messages are culturally appropriate and equitable. Researchers should also develop standards for transparency, fairness, and accountability when deploying AI-generated content,” he said.

While GenAI messaging tools have enormous potential in the public health sector, the research underscores the need for further investigation. The evolving landscape of AI-assisted communication in public health could also encourage researchers to explore the future of AI and misinformation management. 
