Membership Inference Attacks Against Fine-tuned Large Language Models via Self-prompt Calibration

A study introducing self-prompt calibration for membership inference attacks (MIAs) against fine-tuned large language models, improving the reliability and practicality of privacy assessments.

January 18, 2024 · 2 min · Chengyu Zhang

Membership Inference Attacks Against NLP Classification Models

A comprehensive analysis of privacy risks in NLP classification models, focusing on membership inference attacks (MIAs) at both sample and user levels.

September 19, 2023 · 2 min · Chengyu Zhang

Attention Is All You Need

Proposed the Transformer, a novel architecture for sequence transduction that relies entirely on self-attention, dispensing with recurrence and convolutions.

September 7, 2023 · 2 min · Chengyu Zhang