
The Phantom Menace: LLM Package Hallucinations
Uncovering security vulnerabilities in AI-assisted coding
This research reveals how Large Language Models (LLMs) can hallucinate non-existent code packages, creating security vulnerabilities in the software supply chain.
- LLMs frequently suggest dependencies that do not exist in public package repositories
- Attackers can exploit this by registering packages under the hallucinated names and publishing malicious code in them
- Cross-language analysis shows varying hallucination rates across programming ecosystems
- Researchers recommend defensive strategies, including package verification (a minimal sketch follows below) and enhanced developer awareness
This work highlights a critical security gap in AI-assisted development that could enable widespread supply chain attacks through seemingly innocent code suggestions.
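As one concrete form of the package-verification defense mentioned above, the sketch below checks whether an LLM-suggested dependency is actually published on PyPI before it is installed. This is an illustrative example, not tooling from the paper: it relies only on PyPI's public JSON API (which returns 404 for unknown project names), and the script name and command-line usage are hypothetical.

```python
# Hypothetical helper: verify that LLM-suggested dependencies actually exist
# on PyPI before adding them to a project. Uses PyPI's public JSON API
# (https://pypi.org/pypi/<name>/json); all names here are illustrative.
import sys
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI project, False otherwise."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # likely a hallucinated package name
        raise  # other HTTP errors (rate limits, outages) need human review


if __name__ == "__main__":
    # Example (hypothetical): python check_deps.py flask some-hallucinated-pkg
    for candidate in sys.argv[1:]:
        status = "found" if package_exists_on_pypi(candidate) else "NOT FOUND -- verify before installing"
        print(f"{candidate}: {status}")
```

Note that existence alone is not proof of safety: as the findings above describe, an attacker may have already registered a hallucinated name, so verification should be combined with reputation checks and manual review of unfamiliar packages.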
Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities