Demystifying RCE Vulnerabilities in LLM-Integrated Apps

This paper investigates remote code execution (RCE) vulnerabilities in applications that integrate large language models.

January 31, 2025 · 1 min · Chengyu Zhang

Formalizing and Benchmarking Prompt Injection Attacks and Defenses

This paper provides a formal framework and benchmarking methodology for prompt injection attacks and their countermeasures in large language models.

January 31, 2025 · 1 min · Chengyu Zhang

Making Them Ask and Answer - Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction

Explores methods to jailbreak LLMs in only a few queries using disguise and reconstruction techniques.

January 31, 2025 · 1 min · Chengyu Zhang

Style Transfer in Text - Exploration and Evaluation

This paper explores text style transfer (TST) using non-parallel data, introducing new evaluation metrics for transfer strength and content preservation.

January 22, 2025 · 3 min · Chengyu Zhang

Text Style Transfer - A Review and Experimental Evaluation

A comprehensive review of text style transfer (TST) techniques, their evaluation, and benchmarking results across various datasets.

January 21, 2025 · 16 min · Chengyu Zhang

Does Label Differential Privacy Prevent Label Inference Attacks?

Analyzes the effectiveness of label-DP in mitigating label inference attacks and provides insights into privacy settings and attack bounds.

October 11, 2024 · 2 min · Chengyu Zhang

Using LLMs to Uncover Memorization in Instruction-Tuned Models

A study introducing a black-box prompt optimization approach to uncover higher levels of memorization in instruction-tuned LLMs.

October 11, 2024 · 2 min · Chengyu Zhang

Defending Batch-Level Label Inference and Replacement Attacks in Vertical Federated Learning

Explores the vulnerability of VFL models to batch-level label inference and label replacement (backdoor) attacks and proposes defenses such as CAE and DCAE.

October 7, 2024 · 2 min · Chengyu Zhang

Label Inference Attacks Against Vertical Federated Learning

Evaluates the privacy risks of vertical federated learning (VFL) and proposes highly effective label inference attacks, highlighting VFL's vulnerabilities and the limitations of existing defenses.

September 16, 2024 · 2 min · Chengyu Zhang

Do Membership Inference Attacks Work on Large Language Models?

This paper evaluates the effectiveness of membership inference attacks on large language models, revealing that such attacks often perform no better than random guessing.

June 14, 2024 · 2 min · Chengyu Zhang