The Double-Edged Sword of Humanized AI

How LLM chatbots mirror humans and create manipulation risks

This research examines how AI chatbots personified with human attributes build user trust while creating serious manipulation risks.

Key Findings:

  • LLMs increasingly adopt human characteristics (faces, names, personalities), including those of celebrities
  • Personification creates false intimacy while increasing user trust
  • The EU's AI Act attempts to address these manipulation concerns
  • Security frameworks need specific protections against manipulative AI systems

This matters because, as AI becomes more human-like, users grow increasingly vulnerable to deception, manipulation, and security threats in the absence of appropriate regulatory safeguards.

Manipulation and the AI Act: Large Language Model Chatbots and the Danger of Mirrors
