My name is Tianshuo Cong (丛天硕). I am currently a postdoctoral researcher (Shuimu Scholar, 水木学者) at the Institute for Advanced Study, Tsinghua University (IASTU) (清华大学高等研究院), hosted by Prof. Xiaoyun Wang. I received my Ph.D. degree from the Institute for Advanced Study, Tsinghua University in 2023, advised by Prof. Xiaoyun Wang. Before that, I received my B.Eng. degree from the Department of Electronic Engineering, Tsinghua University in 2017. From August 2021 to December 2023, I was also a visiting Ph.D. student at the CISPA Helmholtz Center for Information Security in Saarbrücken, Germany, advised by Dr. Yang Zhang. My research interests include the safety, security, and privacy of machine learning (e.g., large foundation models) and lightweight cipher design.

  • Office: Room 401, Science Building, Tsinghua University, Beijing 100084, China
  • E-mail: congtianshuo at tsinghua dot edu dot cn, congtianshuo at gmail dot com

🔥 News

📝 Publications

$^\star$: Equal contribution; $^\dagger$: Corresponding author

Conference

  • [NDSS’25] Safety Misalignment Against Large Language Models
    Yichen Gong, Delong Ran, Xinlei He, Tianshuo Cong$^\dagger$, Anyu Wang$^\dagger$, and Xiaoyun Wang.
    In Proceedings of the Network and Distributed System Security (NDSS) Symposium, Feb. 24-28, 2025, San Diego, CA, U.S.A.

  • [CCS-LAMPS’24] Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging
    Tianshuo Cong, Delong Ran, Zesen Liu, Xinlei He, Jinyuan Liu, Yichen Gong, Qi Li, Anyu Wang, and Xiaoyun Wang.
    In Proceedings of the 1st ACM CCS Workshop on Large AI Systems and Models with Privacy and Safety Analysis (LAMPS), Oct. 14, 2024, Salt Lake City, U.S.A.
    [official] [pdf] [arxiv] [code] [slides] 🏆 Best Paper Award

  • [Oakland’24] Test-time Poisoning Attacks Against Test-time Adaptation Models
    Tianshuo Cong, Xinlei He, Yun Shen, and Yang Zhang.
    In Proceedings of the 45th IEEE Symposium on Security and Privacy, May 20-22, 2024, San Francisco, CA, U.S.A.
    [official] [pdf] [arxiv] [code] [slides]

  • [CCS’22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
    Tianshuo Cong, Xinlei He, and Yang Zhang.
    In Proceedings of the 29th ACM SIGSAC Conference on Computer and Communications Security (CCS), Nov. 7-11, 2022, Los Angeles, U.S.A.
    [official] [pdf] [arxiv] [code] [slides]

Journal

  • [密码学报] On the Design of Block Cipher FESH
    Keting Jia, Xiaoyang Dong, Congming Wei, Zheng Li, Haibo Zhou, and Tianshuo Cong.
    In Journal of Cryptologic Research (密码学报).
    [pdf] 🏆 2nd Prize in Block Cipher Track, National Cryptographic Algorithm Design Competition

Under Review & Manuscript

  • FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts
    Yichen Gong, Delong Ran, Jinyuan Liu, Conglei Wang, Tianshuo Cong$^\dagger$, Anyu Wang$^\dagger$, Sisi Duan, and Xiaoyun Wang.
    [arxiv] [code]

  • Jailbreak Attacks and Defenses Against Large Language Models: A Survey
    Sibo Yi, Yule Liu, Zhen Sun, Tianshuo Cong, Xinlei He, Jiaxing Song, Ke Xu, and Qi Li.
    [arxiv]

  • JailbreakEval: An Integrated Safety Evaluator Toolkit for Assessing Jailbreaks Against Large Language Models
    Delong Ran, Jinyuan Liu, Yichen Gong, Jingyi Zheng, Xinlei He, Tianshuo Cong$^\dagger$, and Anyu Wang.
    [arxiv] [code]

  • PEFTGuard: Detecting Backdoor Attacks Against Parameter-Efficient Fine-Tuning
    Zhen Sun, Tianshuo Cong, Yule Liu, Chenhao Lin, Xinlei He, Rongmao Chen, Xingshuo Han, and Xinyi Huang.
    [arxiv]

  • On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks
    Zesen Liu, Tianshuo Cong, Xinlei He, and Qi Li.
    [arxiv]

  • Robustness Over Time: Understanding Adversarial Examples’ Effectiveness on Longitudinal Versions of Large Language Models
    Yugeng Liu$^\star$, Tianshuo Cong$^\star$, Zhengyu Zhao, Michael Backes, Yun Shen, and Yang Zhang.
    [arxiv]

Others

  • 隐私计算产品通用安全分级白皮书 (White Paper on General Security Grading of Privacy-Preserving Computing Products), 2024
    Led by Ant Group.
    [pdf]

🐤 Services

PC Member or Journal & Conference Reviewer

  • IEEE European Symposium on Security and Privacy (EuroS&P), 2025
  • The Annual Computer Security Applications Conference (ACSAC), 2024
  • The Annual Privacy Enhancing Technologies Symposium (PETS), 2025
  • IEEE Secure and Trustworthy Machine Learning (SaTML), 2025
  • The 1st ACM Workshop on Large AI Systems and Models with Privacy and Safety Analysis (LAMPS), 2024
  • IEEE Transactions on Information Forensics and Security (TIFS)
  • IEEE Transactions on Dependable and Secure Computing (TDSC)
  • ACM Transactions on Privacy and Security (TOPS)
  • ACM Transactions on Knowledge Discovery from Data (TKDD)
  • The International Conference on Learning Representations (ICLR), 2025
  • The Annual Conference on Neural Information Processing Systems (NeurIPS), 2024
  • IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, 2025
  • The Annual AAAI Conference on Artificial Intelligence (AAAI), 2025
  • Proceedings of the ACM International Conference on Multimedia (MM), 2024
  • The European Conference on Computer Vision (ECCV), 2024
  • The International Conference on Artificial Intelligence and Statistics (AISTATS), 2025

Organizer

  • Tutorial on “Safety, Security, and Privacy of Foundation Models” @ WIFS 2024,
    Rome, Italy, Dec. 2-5, 2024 (co-organized with Prof. Xinlei He)

  • A curated reading list on safety, security, and privacy of large models: Awesome-LM-SSP

Ph.D. Thesis Defense Committee Secretary

  • Tairong Huang (Tsinghua University, 2024/05)
  • Shiduo Zhang (Tsinghua University, 2024/05)
  • Xiao Sui (Shandong University, 2024/05)
  • Han Wu (Shandong University, 2024/05)

📖 Teaching

  • Lecturer of the tutorial on “Safety, Security, and Privacy of Foundation Models” at IEEE WIFS 2024, Rome, Italy.
  • Teaching Assistant for the course “Advanced Numerical Analysis”, Fall 2019, Tsinghua University.
  • Teaching Assistant for the course “Introduction to Information Science and Technology”, Spring 2018, Tsinghua University.