AI-Assisted Moral Analysis in Vaccination Debates

Using LLMs to support human annotators in identifying moral framing on social media

This research explores how Large Language Models can help human annotators identify moral framing in vaccination debates on social media, addressing the challenges of data scarcity and cognitive load.

  • Demonstrates LLMs as effective annotation assistants for complex psycholinguistic tasks
  • Offers a solution to the high cost and inconsistency of relying solely on human annotators
  • Creates a framework for collaborative human-AI annotation of morality frames in polarizing topics
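The collaborative annotation loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the helper names, the stand-in callables, and the use of the five Moral Foundations Theory labels as the frame set are assumptions for the sketch; a real system would call an LLM API where the stub appears.

```python
# Hedged sketch of an LLM-assisted morality-frame annotation loop.
# Label set assumes Moral Foundations Theory; the paper's exact frames may differ.

MORALITY_FRAMES = [
    "care/harm", "fairness/cheating", "loyalty/betrayal",
    "authority/subversion", "purity/degradation",
]

def build_prompt(post: str) -> str:
    """Format a labeling prompt asking the LLM to suggest morality frames."""
    labels = ", ".join(MORALITY_FRAMES)
    return (
        f"Post: {post}\n"
        f"Which of these morality frames does the post invoke? {labels}\n"
        "Answer with a comma-separated list of frames."
    )

def assist_annotation(post, llm_suggest, human_review):
    """LLM proposes candidate frames; the human annotator accepts or corrects them."""
    suggestion = llm_suggest(build_prompt(post))
    return human_review(post, suggestion)

# Example with stand-in callables (a real pipeline would query an LLM here):
post = "Mandating vaccines violates our freedom to choose."
labels = assist_annotation(
    post,
    llm_suggest=lambda prompt: ["authority/subversion"],  # stubbed LLM output
    human_review=lambda p, s: s + ["fairness/cheating"],  # annotator adds a frame
)
print(labels)  # → ['authority/subversion', 'fairness/cheating']
```

The division of labor is the point: the model carries the cognitive load of scanning each post against every frame, while the human retains final authority over the label, which is how the setup reduces cost without sacrificing annotation quality.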

For medical communications, this research provides tools to better understand the moral underpinnings of vaccine hesitancy, potentially informing more effective public health messaging that addresses underlying moral concerns rather than just presenting facts.

Can LLMs Assist Annotators in Identifying Morality Frames? -- Case Study on Vaccination Debate on Social Media
