Description
- Extended prior research on privacy vulnerabilities to the setting of large language models (LLMs).
- Contributed to this research from September 2024 to January 2025, focusing on implementing attack algorithms and building APIs that made it easier to experiment with different target models and attack methods, keeping evaluations consistent and efficient.
Tech Stack
- Python, PyTorch, NumPy, Scikit-learn.
Contributions
- Implemented attack algorithms and adapted them to new experimental setups.
- Extended the evaluation APIs with additional functionality, streamlining the evaluation process.
- Designed follow-up experiments to analyze and validate findings, yielding deeper insight into privacy risks in LLMs.
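The API work above can be pictured as a small harness where any attack runs against any target model through a shared interface. This is a purely illustrative sketch, not the project's actual code: every name is hypothetical, the target model is abstracted as a loss-scoring function, and a toy loss-threshold membership-inference attack stands in for the real attack algorithms.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical abstraction: a target model is any function that maps a
# text sample to a loss value (lower loss suggests the sample was seen
# in training).
TargetModel = Callable[[str], float]


@dataclass
class AttackResult:
    sample: str
    score: float          # attack score; higher means "more likely a member"
    is_member_pred: bool  # prediction under the attack's decision rule


class LossThresholdAttack:
    """Toy membership-inference attack: flag samples whose model loss
    falls below a threshold as likely training-set members."""

    def __init__(self, threshold: float):
        self.threshold = threshold

    def run(self, model: TargetModel, samples: List[str]) -> List[AttackResult]:
        results = []
        for s in samples:
            loss = model(s)
            # Negate the loss so that higher scores mean "more member-like".
            results.append(AttackResult(s, -loss, loss < self.threshold))
        return results


# Registry pattern: new attacks plug in here without changing the harness.
ATTACKS: Dict[str, Callable[..., LossThresholdAttack]] = {
    "loss_threshold": LossThresholdAttack,
}


def evaluate(attack_name: str, model: TargetModel,
             samples: List[str], **attack_kwargs) -> List[AttackResult]:
    """Run a registered attack against a target model on a sample set."""
    attack = ATTACKS[attack_name](**attack_kwargs)
    return attack.run(model, samples)
```

With this shape, swapping the target model or the attack is a one-line change at the call site, which is the kind of consistency the evaluation APIs aimed for:

```python
fake_model = lambda s: 0.5 if "memorized" in s else 3.0
results = evaluate("loss_threshold", fake_model,
                   ["memorized text", "novel text"], threshold=1.0)
```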
Supervisor: Professor Lei Yu