👥: Equal Contribution; 📩: Corresponding Author
- Test-time Poisoning Attacks Against Test-time Adaptation Models
Tianshuo Cong, Xinlei He, Yun Shen, and Yang Zhang.
IEEE S&P 2024 [pdf] [arxiv] [code] [slides]
- Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging
Tianshuo Cong, Delong Ran, Zesen Liu, Xinlei He, Jinyuan Liu, Yichen Gong, Qi Li, Anyu Wang, and Xiaoyun Wang.
ACM CCS-LAMPS 2024 (Best Paper Award) [pdf] [arxiv] [code] [slides]
- SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
Tianshuo Cong, Xinlei He, and Yang Zhang.
ACM CCS 2022 [pdf] [arxiv] [code] [slides]
- FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts
Yichen Gong, Delong Ran, Jinyuan Liu, Conglei Wang, Tianshuo Cong (📩), Anyu Wang (📩), Sisi Duan, and Xiaoyun Wang.
In Submission [arxiv] [code]
- Jailbreak Attacks and Defenses Against Large Language Models: A Survey
Sibo Yi, Yule Liu, Zhen Sun, Tianshuo Cong, Xinlei He, Jiaxing Song, Ke Xu, and Qi Li.
In Submission [arxiv]
- JailbreakEval: An Integrated Safety Evaluator Toolkit for Assessing Jailbreaks Against Large Language Models
Delong Ran, Jinyuan Liu, Yichen Gong, Jingyi Zheng, Xinlei He, Tianshuo Cong (📩), and Anyu Wang.
In Submission [arxiv] [code]
- White Paper on General Security Grading of Privacy-Preserving Computation Products (隐私计算产品通用安全分级白皮书), 2024
Led by Ant Group. [pdf]
- On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks
Zesen Liu, Tianshuo Cong, Xinlei He, and Qi Li.
In Submission [arxiv]
- Robustness Over Time: Understanding Adversarial Examples’ Effectiveness on Longitudinal Versions of Large Language Models
Yugeng Liu (👥), Tianshuo Cong (👥), Zhengyu Zhao, Michael Backes, Yun Shen, and Yang Zhang.
In Submission [arxiv]
- On the Design of Block Cipher FESH
Keting Jia, Xiaoyang Dong, Congming Wei, Zheng Li, Haibo Zhou, and Tianshuo Cong.
Journal of Cryptologic Research [pdf]