Jailbreaking Large Language Models: Disguise and Reconstruction Attack (DRA)

Explores how the Disguise and Reconstruction Attack (DRA) exploits biases introduced during LLM fine-tuning to bypass safety measures with only a handful of queries, achieving state-of-the-art jailbreak success rates.

February 5, 2025 · 4 min · Chengyu Zhang

Demystifying RCE Vulnerabilities in LLM-Integrated Apps

This paper investigates remote code execution (RCE) vulnerabilities in applications that integrate large language models.

January 31, 2025 · 1 min · Chengyu Zhang