
LLMs in Code Security: A Double-Edged Sword
Analyzing vulnerabilities and remediation in AI-assisted coding
This systematic literature review investigates how Large Language Models (LLMs) impact code security, revealing both opportunities and risks.
- Dual nature of LLMs: They can detect and fix vulnerabilities in existing code, yet also inadvertently introduce new security flaws when generating it (see the sketch after this list)
- Risk awareness: LLMs may miss obvious vulnerabilities (false negatives) or flag non-existent ones (false positives)
- Data poisoning threats: Security risks extend to manipulation of LLM training data, which can steer models toward producing insecure code
- Remediation pathways: The reviewed research identifies approaches to mitigate these security concerns
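
To make the dual-nature finding concrete, here is a minimal, hypothetical Python sketch contrasting the kind of insecure code an assistant might suggest (string-built SQL, open to injection) with a remediated, parameterized version. The function names, schema, and payload are illustrative assumptions, not examples drawn from the review.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Pattern an assistant might plausibly suggest: building SQL by
    # string interpolation, which allows SQL injection via `username`.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_remediated(conn, username):
    # Remediated version: a parameterized query keeps user input
    # out of the SQL text entirely.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])

    payload = "nobody' OR '1'='1"               # classic injection payload
    print(find_user_insecure(conn, payload))    # returns every row
    print(find_user_remediated(conn, payload))  # returns no rows
```

The remediated variant is the kind of fix the review groups under remediation pathways: removing the vulnerability class rather than filtering individual malicious inputs.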
As organizations increasingly adopt AI for software development, understanding these security implications becomes crucial for maintaining code integrity and protecting systems from emerging vulnerabilities.
Source paper: From Vulnerabilities to Remediation: A Systematic Literature Review of LLMs in Code Security