Demystifying RCE Vulnerabilities in LLM-Integrated Apps
This paper investigates remote code execution (RCE) vulnerabilities in applications that integrate large language models.
This paper investigates remote code execution (RCE) vulnerabilities in applications that integrate large language models.
This paper provides a formal framework and benchmarking methodology for prompt injection attacks and their countermeasures in large language models.
Explored methods to jailbreak LLMs using disguise and reconstruction techniques with minimal queries.
This paper explores text style transfer (TST) using non-parallel data, introducing new evaluation metrics for transfer strength and content preservation.
A comprehensive review of text style transfer (TST) techniques, their evaluation, and benchmarking results across various datasets.
Analyzes the effectiveness of label-DP in mitigating label inference attacks and provides insights on privacy settings and attack bounds.
A study introducing a black-box prompt optimization approach to uncover higher levels of memorization in instruction-tuned LLMs.
Explores vulnerabilities in VFL models to label inference and backdoor attacks and proposes effective defenses like CAE and DCAE.
Evaluates privacy risks of vertical federated learning (VFL) and proposes label inference attacks with outstanding performance, highlighting vulnerabilities and defense limitations.
This paper evaluates the effectiveness of membership inference attacks on large language models, revealing that such attacks often perform no better than random guessing.