
Securing DNA Language Models Against Attacks
Testing robustness of AI models that interpret genetic code
This research evaluates how DNA language models respond to adversarial attacks, addressing a critical security gap in bioinformatics applications.
- Assesses the robustness of models such as GROVER, DNABERT-2, and the Nucleotide Transformer
- Examines vulnerability to a range of attack strategies, including nucleotide substitutions (see the sketch after this list)
- Draws practical lessons for hardening model security in biological applications
- Highlights the intersection of AI security and genomic research
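To make the substitution attacks concrete, a greedy character-level attack on a DNA sequence classifier might look like the sketch below. This is illustrative only: the `score_fn` interface, the `budget` parameter, and the toy GC-content scorer are assumptions for the example, not the paper's actual attack or models.

```python
NUCLEOTIDES = "ACGT"

def substitute(seq: str, pos: int, base: str) -> str:
    """Return a copy of seq with the base at position pos replaced."""
    return seq[:pos] + base + seq[pos + 1:]

def greedy_substitution_attack(seq: str, score_fn, budget: int = 3) -> str:
    """Flip up to `budget` nucleotides, each round picking the single
    substitution that most reduces the classifier's confidence in the
    original label (lower score_fn = less confident)."""
    current = seq
    for _ in range(budget):
        best_seq, best_score = current, score_fn(current)
        for pos in range(len(current)):
            for base in NUCLEOTIDES:
                if base == current[pos]:
                    continue
                candidate = substitute(current, pos, base)
                candidate_score = score_fn(candidate)
                if candidate_score < best_score:
                    best_seq, best_score = candidate, candidate_score
        if best_seq == current:  # no single substitution helps further
            break
        current = best_seq
    return current

# Hypothetical stand-in for a DNA LM's confidence in the true class;
# a real attack would instead query a model such as GROVER, DNABERT-2,
# or the Nucleotide Transformer.
def toy_score(seq: str) -> float:
    return sum(base in "GC" for base in seq) / len(seq)

adversarial = greedy_substitution_attack("ACGTACGTGGCC", toy_score, budget=2)
print(adversarial)  # original sequence with up to 2 bases flipped
```

Note that attacks on real DNA language models typically operate at the level of the model's tokens (k-mers or BPE pieces) rather than raw characters, but the greedy search structure is the same.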
The findings have significant implications for biology and medicine, where DNA language models are increasingly used for diagnostics, drug discovery, and genomic research. Understanding these vulnerabilities is essential for developing more robust biomedical AI systems that can withstand real-world noise and intentional tampering.
Source: Exploring Adversarial Robustness in Classification Tasks Using DNA Language Models