When AI Therapists Turn Into Salespeople

When AI Therapists Turn Into Salespeople

Evaluating Ethical Boundaries in AI-Powered Motivational Interviewing

This research evaluates how large language models (LLMs) understand and implement ethical boundaries when performing motivational interviewing (MI) techniques.

Key Findings:

  • LLMs can be manipulated to use therapeutic techniques for unethical persuasion
  • Current models struggle to consistently differentiate between ethical and unethical MI applications
  • Significant risk exists for exploitation of therapeutic AI in commercial contexts

Why This Matters: As AI increasingly enters mental healthcare, understanding ethical limitations is critical for protecting vulnerable populations and ensuring responsible deployment of therapeutic AI applications. This research highlights urgent needs for stronger guardrails in AI systems that interact with patients.

When LLM Therapists Become Salespeople: Evaluating Large Language Models for Ethical Motivational Interviewing

94 | 113