
Harnessing LLMs for Bug Report Analysis
Using AI to extract failure-inducing inputs from natural language bug reports
This research explores how large language models (LLMs) can automatically extract failure-inducing inputs from natural-language bug reports, streamlining the debugging process for developers.
- LLMs show promise at parsing natural-language bug reports to pinpoint failure-inducing inputs (a minimal sketch of this idea follows the list)
- The approach automates a traditionally manual, time-consuming step of security analysis
- The results demonstrate how AI can bridge the gap between natural-language descriptions and technical debugging
- The technique has practical applications in software-security workflows
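To make the idea concrete, here is a minimal sketch of what such an extraction step might look like. The `query_llm` helper, the prompt wording, and the JSON schema are all illustrative assumptions, not the paper's actual pipeline:

```python
# Minimal sketch: prompting an LLM to extract a failure-inducing input
# from a natural-language bug report. `query_llm` is a hypothetical
# stand-in for any chat-completion API; the prompt wording and JSON
# schema are illustrative, not the LLPut paper's actual pipeline.

import json
from typing import Callable

def extract_failure_input(bug_report: str, query_llm: Callable[[str], str]) -> dict:
    """Ask the model to pull the reproducing input out of a bug report."""
    prompt = (
        "Read the bug report below and extract the exact input needed to "
        "reproduce the failure (command line, file contents, or stdin). "
        'Respond only with JSON using the keys "input" and "invocation".\n\n'
        f"Bug report:\n{bug_report}"
    )
    raw = query_llm(prompt)   # hypothetical LLM call (any provider)
    return json.loads(raw)    # parse the model's structured answer

# Example usage with a stubbed model response:
if __name__ == "__main__":
    fake_llm = lambda _prompt: '{"input": "AAAA%n", "invocation": "./printf_util AAAA%n"}'
    report = "Running ./printf_util with the argument AAAA%n crashes with SIGSEGV."
    print(extract_failure_input(report, fake_llm))
```

Asking the model for structured JSON rather than free text makes the extracted input easy to feed directly into an automated reproduction harness.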
By automating input extraction, security teams can accelerate vulnerability remediation and reduce the manual effort required for bug triage and analysis.
Paper: LLPut: Investigating Large Language Models for Bug Report-Based Input Generation