Defending Against Code Poisoning Attacks

A lightweight detection method to protect neural code models

KillBadCode is a lightweight approach to detecting and neutralizing poisoned samples in the training data of neural code models. Its core insight is that poisoned code disrupts code naturalness, leaving statistical traces that distinguish it from benign code.

  • Identifies malicious code by analyzing statistical patterns of code naturalness (see the sketch after this list)
  • Offers a lightweight detector that requires no poisoned samples for training
  • Achieves high detection accuracy against a range of code poisoning attacks
  • Provides a practical security layer for code model training pipelines
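
To make the idea concrete, here is a minimal sketch of naturalness-based screening: train a simple n-gram language model over the tokenized training corpus, score each sample by its per-token cross-entropy, and flag statistical outliers as suspect. The n-gram order, add-one smoothing, vocabulary size, and z-score cutoff below are illustrative assumptions, not values from the paper, whose actual scoring and trigger-localization procedure may differ.

```python
import math
import statistics
from collections import Counter

# Illustrative hyperparameters -- not values from the paper.
N = 4            # n-gram order
VOCAB = 10_000   # smoothing vocabulary size
Z_CUTOFF = 2.0   # outlier threshold in standard deviations

def train_ngram(corpus, n=N):
    """Count n-grams and their (n-1)-token contexts across all samples."""
    ngrams, contexts = Counter(), Counter()
    for tokens in corpus:
        padded = ["<s>"] * (n - 1) + tokens + ["</s>"]
        for i in range(len(padded) - n + 1):
            ngrams[tuple(padded[i:i + n])] += 1
            contexts[tuple(padded[i:i + n - 1])] += 1
    return ngrams, contexts

def naturalness(tokens, ngrams, contexts, n=N):
    """Per-token cross-entropy under an add-one-smoothed n-gram model.
    Higher scores mean less natural code."""
    padded = ["<s>"] * (n - 1) + tokens + ["</s>"]
    bits = 0.0
    for i in range(len(padded) - n + 1):
        gram = tuple(padded[i:i + n])
        p = (ngrams[gram] + 1) / (contexts[gram[:-1]] + VOCAB)
        bits -= math.log2(p)
    return bits / (len(padded) - n + 1)

def flag_poisoned(corpus):
    """Return indices of samples whose naturalness score is a corpus outlier."""
    ngrams, contexts = train_ngram(corpus)
    scores = [naturalness(t, ngrams, contexts) for t in corpus]
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores) or 1.0  # guard against a zero std
    return [i for i, s in enumerate(scores) if (s - mean) / std > Z_CUTOFF]

# Toy demo: 20 benign samples plus one carrying an out-of-place payload.
benign = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]
poisoned = benign[:8] + ["import", "os", ";", "os", ".", "system", "(", "cmd", ")"]
print(flag_poisoned([list(benign) for _ in range(20)] + [poisoned]))  # -> [20]
```

Flagged samples can then be dropped or reviewed before training, which is how a detector of this kind slots into a code model training pipeline as the security layer described above.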

This work addresses a critical security risk: neural code models are vulnerable to data poisoning, which can implant backdoors in the code generation and analysis tools used throughout the software development lifecycle.

Original Paper: Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness
