
CodeQUEST: AI-Powered Code Quality Enhancement
Using LLMs to automatically evaluate and improve code across multiple dimensions
CodeQUEST is a novel framework that leverages GPT-4o to systematically evaluate and enhance code quality through an iterative feedback loop.
- Employs a dual-component system with an Evaluator that assesses code across 10 quality dimensions and an Optimizer that iteratively improves code
- Provides both quantitative scores and qualitative summaries for comprehensive code assessment
- Focuses on critical aspects including readability, maintainability, efficiency, and security
- Demonstrates how LLMs can serve as automated code quality engineers
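The dual-component loop described above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the function names (`quest_loop`, `evaluate`, `optimize`), the stopping criteria, and the exact list of ten dimensions are assumptions; in the real framework the evaluate/optimize steps would be GPT-4o calls, stubbed out here so the sketch is self-contained.

```python
from typing import Callable, Dict, Tuple

# Ten quality dimensions; the summary names the first four, the rest are
# plausible stand-ins (assumption, not the paper's exact list).
DIMENSIONS = [
    "readability", "maintainability", "efficiency", "security",
    "testability", "robustness", "modularity", "documentation",
    "consistency", "error handling",
]

def quest_loop(
    code: str,
    evaluate: Callable[[str], Dict[str, int]],
    optimize: Callable[[str, Dict[str, int]], str],
    max_iters: int = 3,
    target_mean: float = 4.5,
) -> Tuple[str, Dict[str, int]]:
    """Iteratively score `code` and rewrite it until the mean score
    reaches `target_mean` or `max_iters` rounds are exhausted."""
    scores = evaluate(code)
    for _ in range(max_iters):
        if sum(scores.values()) / len(scores) >= target_mean:
            break                       # quality target reached
        code = optimize(code, scores)   # Optimizer: rewrite using feedback
        scores = evaluate(code)         # Evaluator: re-score the new draft
    return code, scores

# Deterministic stubs standing in for GPT-4o calls (illustration only):
# each optimization pass appends a marker, and the evaluator rewards it.
def stub_evaluate(code: str) -> Dict[str, int]:
    bonus = code.count("# improved")
    return {d: min(5, 3 + bonus) for d in DIMENSIONS}

def stub_optimize(code: str, scores: Dict[str, int]) -> str:
    return code + "\n# improved"

final_code, final_scores = quest_loop("def f(x): return x * 2",
                                      stub_evaluate, stub_optimize)
```

With the stubs, the loop converges after two optimization passes, since every dimension climbs from 3 to 5 and crosses the 4.5 target. The same skeleton works when `evaluate` returns per-dimension scores parsed from an LLM's structured output and `optimize` feeds the qualitative summaries back as a rewrite prompt.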
For security teams, this research offers a promising approach to automatically identifying and fixing security vulnerabilities during development, potentially reducing security debt before deployment.
Paper: On Iterative Evaluation and Enhancement of Code Quality Using GPT-4o