Formalizing and Benchmarking Prompt Injection Attacks and Defenses

This paper provides a formal framework and benchmarking methodology for prompt injection attacks and their countermeasures in large language models.

January 31, 2025 · 1 min · Chengyu Zhang