
Optimizing LLM Code Generation
A Comprehensive Taxonomy of Inefficiencies in AI-Generated Code
This research develops a systematic framework for identifying and categorizing quality issues in code produced by Large Language Models (LLMs).
- Identifies key inefficiencies in LLM-generated code, including redundancy, maintainability problems, and performance shortcomings
- Creates a taxonomy to help practitioners recognize and address these issues systematically
- Bridges the gap between theoretical capabilities and practical application of AI coding assistants
- Enables optimization of LLM-generated code for real-world engineering applications
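To make the inefficiency categories above concrete, here is a hypothetical illustration (not drawn from the paper's own examples) of the redundancy pattern the taxonomy covers: an invariant expression recomputed on every loop iteration, and the hoisted version that avoids the waste.

```python
# Hypothetical illustration of a redundancy inefficiency often seen in
# LLM-generated code: an invariant value recomputed on every iteration.

def normalize_redundant(values):
    # Inefficient: max(values) is recomputed for every element -> O(n^2).
    return [v / max(values) for v in values]

def normalize_hoisted(values):
    # Efficient: the invariant maximum is computed once -> O(n).
    peak = max(values)
    return [v / peak for v in values]

data = [2.0, 4.0, 8.0]
# Both produce identical results; only the cost differs.
assert normalize_redundant(data) == normalize_hoisted(data) == [0.25, 0.5, 1.0]
```

Both functions are behaviorally equivalent, which is what makes such inefficiencies easy to miss in review: tests pass, but the generated code scales poorly.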
For engineering teams, this research offers practical guidance on leveraging LLMs for code generation while avoiding the common pitfalls that limit production adoption.
Unveiling Inefficiencies in LLM-Generated Code: Toward a Comprehensive Taxonomy