Prompt Injection and Input Manipulation Threats

Studies on how adversaries can manipulate LLM inputs through prompt injection and other techniques

This presentation surveys 43 research papers on prompt injection and input manipulation threats against large language models.
