Do Membership Inference Attacks Work on Large Language Models?

This paper evaluates the effectiveness of membership inference attacks (MIAs) on large language models, finding that such attacks often perform no better than random guessing.

June 14, 2024 · 2 min · Chengyu Zhang

Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study

An investigation of membership inference attacks (MIAs) targeting large-scale multi-modal models such as CLIP, introducing practical attack strategies that require no shadow training.

January 24, 2024 · 2 min · Chengyu Zhang

Membership Inference Attacks Against Fine-tuned Large Language Models via Self-prompt Calibration

A study introducing self-prompt calibration for membership inference attacks (MIAs) against fine-tuned large language models, improving the reliability and practicality of privacy assessments.

January 18, 2024 · 2 min · Chengyu Zhang

Membership Inference Attacks Against NLP Classification Models

A comprehensive analysis of privacy risks in NLP classification models, focusing on membership inference attacks (MIAs) at both the sample and user levels.

September 19, 2023 · 2 min · Chengyu Zhang