Jailbreaking Large Language Models: Disguise and Reconstruction Attack (DRA)
Explores how the Disguise and Reconstruction Attack (DRA) exploits biases introduced during LLM fine-tuning to bypass safety alignment in only a few queries, achieving state-of-the-art jailbreak success rates.