Package Hallucinations: A New Security Threat

How LLMs create software supply chain vulnerabilities

This research shows how code-generating LLMs hallucinate references to non-existent packages, creating an exploitable weakness in the software supply chain.

  • LLMs frequently reference non-existent packages that attackers can register
  • These hallucinations represent a novel form of package confusion attack
  • The study reveals widespread vulnerability across popular ecosystems, including Python (PyPI) and JavaScript (npm)
  • Researchers propose mitigation strategies to protect software supply chains; one simple defensive check is sketched after this list
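As a concrete illustration of one such defense (a minimal sketch, not the paper's specific proposal), the snippet below checks whether LLM-suggested dependency names are actually registered on PyPI before anything is installed. The package name `flask-gpt-utils` is a hypothetical hallucination used only for the example.

```python
import urllib.error
import urllib.request

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI project."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:   # unknown project: possible hallucination
            return False
        raise                 # other errors (e.g. rate limits) need human review

if __name__ == "__main__":
    # Screen dependency names emitted by an LLM before running `pip install`.
    suggested = ["requests", "flask-gpt-utils"]  # second name is hypothetical
    for pkg in suggested:
        verdict = "exists" if exists_on_pypi(pkg) else "NOT on PyPI: likely hallucinated"
        print(f"{pkg}: {verdict}")
```

Note that existence alone is not proof of safety: an attacker may have already registered a commonly hallucinated name, so unknown or newly published packages still warrant manual review.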

This matters because organizations increasingly rely on AI-assisted coding. Security teams must understand this new attack vector and vet AI-suggested dependencies before attackers can exploit them.

We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs
