Code Red: Security Risks in LLM-Assisted Programming

Code Red: Security Risks in LLM-Assisted Programming

Evaluating the potential harm of using off-the-shelf LLMs for coding tasks

This research introduces a comprehensive framework for assessing security risks when using Large Language Models for software development.

  • Develops a taxonomy of harmful scenarios in software engineering contexts
  • Creates a specialized dataset of prompts to test model vulnerabilities
  • Evaluates how effectively various LLMs are aligned for harmlessness
  • Provides guidance for safer implementation of AI coding assistants

As developers increasingly rely on LLM-powered solutions, understanding these security implications is critical for preventing malicious code generation and protecting systems against AI-enabled threats.

Code Red! On the Harmfulness of Applying Off-the-shelf Large Language Models to Programming Tasks

9 | 19