Some public health experts are turning to an unlikely partner in the effort to combat vaccine misinformation: artificial intelligence. A new study by Hang Lu, an assistant professor at the University of Michigan, explores how AI-generated messages tailored to personality traits could make vaccine communication more effective.
Lu, who specializes in science, environmental health and risk communication, studied whether AI-generated messages could be used to effectively correct vaccine misinformation.
Rather than relying on generic fact-checks, Lu’s approach used ChatGPT, a generative artificial intelligence model developed by OpenAI, to help craft targeted messages about vaccines tailored to specific personality traits, such as extraversion. These customized messages kept the same core information but were rephrased to feel more emotionally and stylistically aligned with the recipient’s personality.
In an interview with The Michigan Daily, Lu said the results surprised him. The extraversion-targeted messages performed as well as, and sometimes better than, professionally developed generic corrections. But messages tailored to address pseudoscientific beliefs about vaccines not only failed to help but sometimes backfired. For Lu, the implications are clear: AI can be a valuable assistant in generating targeted corrections, but it is not a perfect solution.
“(AI’s) value lies in supporting human communicators by generating drafts that can be refined for different audiences,” Lu said. “Public health communication is too important to be left entirely to AI.”
In an email to The Daily, Neha Bhomia, an application systems programmer analyst at Michigan Medicine and adjunct lecturer at the School of Information, wrote she sees promise in the hybrid approach of combining generative AI with health care professionals to improve patient care and outcomes.
“The concept of using AI to personalize public health messaging based on personality traits is both innovative and promising,” Bhomia wrote. “Vaccine hesitancy isn’t just about lack of information — it’s about trust, emotion, and how people relate to messaging. AI-driven personalization could help make communication more engaging and effective by aligning messages with how different individuals process information.”
Bhomia wrote she believes AI tools can complement existing strategies by helping health professionals more effectively communicate with patients.
“AI could help fill some big gaps in healthcare, especially where there aren’t enough providers. It can improve communication across language or literacy barriers, support patients in remote or underserved areas, and help personalize care using data insights,” Bhomia wrote. “AI should build trust and improve care, not manipulate or exclude. Ethical design and thoughtful oversight are key to making sure it does that.”
However, Bhomia also wrote she was concerned about relying too heavily on AI for medical information.
“Clinicians and public health professionals should absolutely be part of the process,” Bhomia wrote. “They bring essential context, clinical expertise, cultural competence, and ethical judgment that AI alone cannot replicate.”
Rackham student Mengdi Ji, whose studies focus on epidemiology, wrote in an email to The Daily that she saw promise in the use of AI to support personalized public health communication.
“I see generative AI and other tech tools as part of a broader shift toward personalization and precision public health,” Ji wrote. “These tools have the potential to craft messages that address individual concerns and cultural contexts at scale — something that’s difficult to achieve with traditional public health messaging.”
While Lu, Bhomia and Ji highlight AI’s potential to support health communication, other experts are more skeptical. Jon Zelner, associate professor of epidemiology, wrote in an email to The Daily that he was concerned the use of AI in health care could overlook structural issues within the health care system.
“I don’t think this kind of AI-based approach is solving the right problem,” Zelner wrote. “When people don’t have health insurance, or they’ve historically been oppressed by the systems that now ask them to trust public health advice, that’s where the real mistrust comes from.”
Zelner wrote that he worries the rush to adopt AI in public health risks cutting out human connection, a crucial aspect of health care.
“I think that employing a ton of public health nurses and other professionals to go out into communities with high levels of hesitancy and figure out and address their concerns and other health needs would go much further to increasing vaccination coverage than targeting people with messages via social media etc,” Zelner wrote. “It would cost more, but the return on that investment would be enormous.”
Source: The Michigan Daily