Securing AI-Generated Code

Identifying & Mitigating API Misuse in LLM Code Generation

This research provides the first comprehensive analysis of API misuse patterns in LLM-generated code, offering systematic methods to detect these misuses and mitigate the vulnerabilities they introduce.

  • Identifies common API misuse patterns in code generated by large language models
  • Analyzes both method selection errors and parameter usage mistakes that lead to security risks (illustrated in the sketch after this list)
  • Proposes effective mitigation strategies for developers and LLM providers
  • Demonstrates improved security outcomes when the recommended mitigations are applied in practice
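
To make the second point concrete, below is a minimal hypothetical Python sketch (not drawn from the paper's dataset) contrasting the two misuse classes: a method selection error, where an unsuitable API is chosen outright, and a parameter usage mistake, where the right API is called with an insecure argument.

import hashlib
import requests

# Method selection error: a weak hashing API is chosen for password storage.
def hash_password_insecure(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()  # MD5 is unsuitable here

# Safer choice: a dedicated key-derivation function.
def hash_password_safer(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Parameter usage mistake: the right method, but verify=False disables
# TLS certificate checking and exposes the request to interception.
def fetch_insecure(url: str) -> bytes:
    return requests.get(url, verify=False).content

# Safer call: certificate verification stays enabled (the default).
def fetch_safer(url: str) -> bytes:
    return requests.get(url, timeout=10).content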

This work addresses critical security gaps in AI-assisted software development, helping organizations prevent potential vulnerabilities while still leveraging the productivity benefits of LLMs for coding tasks.
