⭐ This repository hosts a curated collection of papers on gradient inversion attacks in federated learning. Feel free to star and fork it. For further details, please refer to the following paper:
Exploring the Vulnerabilities of Federated Learning:
A Deep Dive into Gradient Inversion Attacks
Pengxin Guo*, Runxi Wang*, Shuang Zeng, Jinjing Zhu, Haoning Jiang, Yanran Wang, Yuyin Zhou, Feifei Wang, Hui Xiong, and Liangqiong Qu
The existing Gradient Inversion Attacks (GIA) methods can be divided into three types: optimization-based GIA (OP-GIA), which works by minimizing the distance between received gradients and gradients computed from dummy data; generation-based GIA (GEN-GIA), which utilizes a generator to reconstruct input data; and analytics-based GIA (ANA-GIA), which aims to recover input data in closed form. Moreover, GEN-GIA can be further divided into three categories: optimizing the latent vector z, optimizing the generator’s parameters W, and training an inversion generation model. ANA-GIA can be further divided into two categories: manipulating model architecture and manipulating model parameters.
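The OP-GIA recipe can be sketched in a few lines of PyTorch. This is a minimal DLG-style illustration, not any particular paper's implementation: the toy model, tensor sizes, optimizer choice (Adam here, where DLG originally used L-BFGS), and step count are all assumptions, and the "received" gradient is computed locally from a stand-in secret batch so the example is self-contained.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, 10))  # toy attacked model
loss_fn = nn.CrossEntropyLoss()

# Gradient the server would receive from a client (stand-in secret batch).
x_true = torch.randn(1, 1, 8, 8)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# The attacker jointly optimizes dummy data and a soft dummy label so that
# the gradients they induce match the received ones.
x_dummy = torch.randn(1, 1, 8, 8, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    dummy_loss = loss_fn(model(x_dummy), y_dummy.softmax(dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # L2 distance between dummy gradients and received gradients.
    grad_dist = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_dist.backward()
    opt.step()
```

After optimization, `x_dummy` approximates the client's input; real attacks add priors (e.g., total variation) and handle larger batches.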
- Survey Papers
- Optimization-based GIA (OP-GIA)
- Generation-based GIA (GEN-GIA)
- Analytics-based GIA (ANA-GIA)
- Empirical Works

## Survey Papers

- Dealing Doubt: Unveiling Threat Models in Gradient Inversion Attacks under Federated Learning, A Survey and Taxonomy [Paper]
  Yichuan Shi, Olivera Kotevska, Viktor Reshniak, Abhishek Singh, and Ramesh Raskar
  arXiv:2405.10376, 2024.
- The Impact of Adversarial Attacks on Federated Learning: A Survey [Paper]
  Kummari Naveen Kumar, Chalavadi Krishna Mohan, and Linga Reddy Cenkeramaddi
  IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023.
- A Survey on Gradient Inversion: Attacks, Defenses and Future Directions [Paper]
  Rui Zhang, Song Guo, Junxiao Wang, Xin Xie, and Dacheng Tao
  International Joint Conference on Artificial Intelligence (IJCAI), 2022.
- A Survey on Security and Privacy of Federated Learning [Paper]
  Viraaji Mothukuri, Reza M. Parizi, Seyedamin Pouriyeh, Yan Huang, Ali Dehghantanha, and Gautam Srivastava
  Future Generation Computer Systems (FGCS), 2021.
## Optimization-based GIA (OP-GIA)

- Temporal Gradient Inversion Attacks with Robust Optimization [Paper]
  Bowen Li, Hanlin Gu, Ruoxin Chen, Jie Li, Chentao Wu, Na Ruan, Xueming Si, and Lixin Fan
  IEEE Transactions on Dependable and Secure Computing (TDSC), 2025.
- TS-Inverse: A Gradient Inversion Attack Tailored for Federated Time Series Forecasting Models [Paper]
  Caspar Meijer, Jiyue Huang, Shreshtha Sharma, Elena Lazovik, and Lydia Y. Chen
  arXiv:2503.20952, 2025.
- Gradient Inversion Attacks: Impact Factors Analyses and Privacy Enhancement [Paper] [Code]
  Zipeng Ye, Wenjian Luo, Qi Zhou, Zhenqian Zhu, Yuhui Shi, and Yan Jia
  IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024.
- Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning [Paper] [Code]
  Kostadin Garov, Dimitar Iliev Dimitrov, Nikola Jovanović, and Martin Vechev
  International Conference on Learning Representations (ICLR), 2024.
- Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks [Paper] [Code]
  Yanbo Wang, Jian Liang, and Ran He
  International Conference on Learning Representations (ICLR), 2024.
- GI-SMN: Gradient Inversion Attack Against Federated Learning Without Prior Knowledge [Paper]
  Jin Qian, Kaimin Wei, Yongdong Wu, Jilian Zhang, Jinpeng Chen, and Huan Bao
  International Conference on Intelligent Computing (ICIC), 2024.
- Federated Learning under Attack: Improving Gradient Inversion for Batch of Images [Paper]
  Luiz Leite, Yuri Santo, Bruno L. Dalmazo, and André Riker
  arXiv:2409.17767, 2024.
- GI-NAS: Boosting Gradient Inversion Attacks through Adaptive Neural Architecture Search [Paper]
  Wenbo Yu, Hao Fang, Bin Chen, Xiaohang Sui, Chuan Chen, Hao Wu, Shu-Tao Xia, and Ke Xu
  arXiv:2405.20725, 2024.
- AFGI: Towards Accurate and Fast-convergent Gradient Inversion Attack in Federated Learning [Paper]
  Can Liu, Jin Wang, Yipeng Zhou, Yachao Yuan, Quanzheng Sheng, and Kejie Lu
  arXiv:2403.08383, 2024.
- MGIC: A Multi-Label Gradient Inversion Attack based on Canny Edge Detection on Federated Learning [Paper]
  Can Liu and Jin Wang
  arXiv:2403.08284, 2024.
- Instance-wise Batch Label Restoration via Gradients in Federated Learning [Paper] [Code]
  Kailang Ma, Yu Sun, Jian Cui, Dawei Li, Zhenyu Guan, and Jianwei Liu
  International Conference on Learning Representations (ICLR), 2023.
- Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning Using Independent Component Analysis [Paper] [Code]
  Sanjay Kariyappa, Chuan Guo, Kiwan Maeng, Wenjie Xiong, G. Edward Suh, Moinuddin K Qureshi, and Hsien-Hsin S. Lee
  International Conference on Machine Learning (ICML), 2023.
- Gradient Obfuscation Gives a False Sense of Security in Federated Learning [Paper] [Code]
  Kai Yue, Richeng Jin, Chau-Wai Wong, Dror Baron, and Huaiyu Dai
  USENIX Security Symposium (USENIX Security), 2023.
- Data Leakage in Federated Averaging [Paper] [Code]
  Dimitar Iliev Dimitrov, Mislav Balunovic, Nikola Konstantinov, and Martin Vechev
  Transactions on Machine Learning Research (TMLR), 2022.
- GradViT: Gradient Inversion of Vision Transformers [Paper]
  Ali Hatamizadeh, Hongxu Yin, Holger R. Roth, Wenqi Li, Jan Kautz, Daguang Xu, and Pavlo Molchanov
  IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR), 2022.
- APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers [Paper]
  Jiahao Lu, Xi Sheryl Zhang, Tianli Zhao, Xiangyu He, and Jian Cheng
  IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR), 2022.
- CAFE: Catastrophic Data Leakage in Vertical Federated Learning [Paper] [Code]
  Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, and Tianyi Chen
  Conference on Neural Information Processing Systems (NeurIPS), 2021.
- See Through Gradients: Image Batch Recovery via GradInversion [Paper] [Code]
  Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M. Alvarez, Jan Kautz, and Pavlo Molchanov
  IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR), 2021.
- Towards General Deep Leakage in Federated Learning [Paper]
  Jiahui Geng, Yongli Mou, Feifei Li, Qing Li, Oya Beyan, Stefan Decker, and Chunming Rong
  arXiv:2110.09074, 2021.
- Inverting Gradients - How easy is it to break privacy in federated learning? [Paper] [Code]
  Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller
  Conference on Neural Information Processing Systems (NeurIPS), 2020.
- SAPAG: A Self-Adaptive Privacy Attack From Gradients [Paper]
  Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, and Sanguthevar Rajasekaran
  arXiv:2009.06228, 2020.
- iDLG: Improved Deep Leakage from Gradients [Paper] [Code]
  Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen
  arXiv:2001.02610, 2020.
- Deep Leakage from Gradients [Paper] [Code]
  Ligeng Zhu, Zhijian Liu, and Song Han
  Conference on Neural Information Processing Systems (NeurIPS), 2019.
## Generation-based GIA (GEN-GIA)

### Optimizing the Latent Vector z

- GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization [Paper] [Code]
  Hao Fang, Bin Chen, Xuan Wang, Zhi Wang, and Shu-Tao Xia
  International Conference on Computer Vision (ICCV), 2023.
- Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage [Paper] [Code]
  Zhuohang Li, Jiaxin Zhang, Luyang Liu, and Jian Liu
  IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR), 2022.
- Gradient Inversion with Generative Image Prior [Paper] [Code]
  Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh, and Jungseul Ok
  Conference on Neural Information Processing Systems (NeurIPS), 2021.
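The "optimize z" variant of GEN-GIA searches the latent space of a pretrained generator instead of pixel space, so reconstructions stay on the generator's image manifold. A minimal sketch, in which the tiny untrained "generator", all sizes, and the known-label assumption are illustrative stand-ins for a real pretrained GAN setup:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Stand-in "generator"; a real attack would load a pretrained GAN here.
G = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 8 * 8))
for p in G.parameters():          # the generator stays frozen
    p.requires_grad_(False)

model = nn.Linear(8 * 8, 10)      # toy attacked model
loss_fn = nn.CrossEntropyLoss()

# Gradient the server received (computed here from a stand-in secret input).
x_true = torch.randn(1, 8 * 8)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Only the low-dimensional latent vector z is optimized.
z = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(G(z)), y_true),
                                      model.parameters(), create_graph=True)
    grad_dist = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_dist.backward()
    opt.step()

x_rec = G(z).detach()             # reconstruction lies on G's output manifold
```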
### Optimizing the Generator's Parameters W

- Generative Image Reconstruction From Gradients [Paper]
  Ekanut Sotthiwat, Liangli Zhen, Chi Zhang, Zengxiang Li, and Rick Siow Mong Goh
  IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024.
- Generative Gradient Inversion via Over-Parameterized Networks in Federated Learning [Paper] [Code]
  Chi Zhang, Xiaoman Zhang, Ekanut Sotthiwat, Yanyu Xu, Ping Liu, Liangli Zhen, and Yong Liu
  International Conference on Computer Vision (ICCV), 2023.
- GRNN: Generative Regression Neural Network—A Data Leakage Attack for Federated Learning [Paper] [Code]
  Hanchi Ren, Jingjing Deng, and Xianghua Xie
  ACM Transactions on Intelligent Systems and Technology (TIST), 2022.
### Training an Inversion Generation Model

- Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients [Paper] [Code]
  Dongyun Xue, Haomiao Yang, Mengyu Ge, Jingwei Li, Guowen Xu, and Hongwei Li
  IEEE International Conference on Computer Communications (INFOCOM), 2023.
- Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning [Paper] [Code]
  Ruihan Wu, Xiangyu Chen, Chuan Guo, and Kilian Q. Weinberger
  Conference on Uncertainty in Artificial Intelligence (UAI), 2023.
## Analytics-based GIA (ANA-GIA)

### Manipulating Model Architecture

- Loki: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation [Paper] [Code]
  Joshua C. Zhao, Atul Sharma, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Salman Avestimehr, and Saurabh Bagchi
  IEEE Symposium on Security and Privacy (S&P), 2024.
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [Paper] [Code]
  Liam H Fowl, Jonas Geiping, Wojciech Czaja, Micah Goldblum, and Tom Goldstein
  International Conference on Learning Representations (ICLR), 2022.
### Manipulating Model Parameters

- Maximum Knowledge Orthogonality Reconstruction With Gradients in Federated Learning [Paper] [Code]
  Feng Wang, Senem Velipasalar, and M. Cenk Gursoy
  Winter Conference on Applications of Computer Vision (WACV), 2024.
- When the Curious Abandon Honesty: Federated Learning Is Not Private [Paper]
  Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, and Nicolas Papernot
  IEEE European Symposium on Security and Privacy (EuroS&P), 2023.
- Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification [Paper]
  Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, and Tom Goldstein
  International Conference on Machine Learning (ICML), 2022.
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency [Paper] [Code]
  Dario Pasquini, Danilo Francati, and Giuseppe Ateniese
  ACM SIGSAC Conference on Computer and Communications Security (CCS), 2022.
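The analytics-based attacks rest on a simple closed-form fact about fully connected layers: for a single sample, each row of the weight gradient equals the bias gradient times the input, so the input leaks exactly (this is the observation that imprint-style architecture manipulations amplify). A small self-contained NumPy illustration, with all sizes and values arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 16, 10
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)
x = rng.normal(size=d_in)          # the client's private input
y = 3                              # its private label

# Forward pass: softmax cross-entropy on logits W @ x + b.
logits = W @ x + b
p = np.exp(logits - logits.max())
p /= p.sum()
delta = p.copy()
delta[y] -= 1.0                    # dL/dlogits

# Gradients the server receives.
grad_W = np.outer(delta, x)        # dL/dW = delta x^T
grad_b = delta                     # dL/db = delta

# Recovery: every row of grad_W is grad_b[i] * x, so one division
# reconstructs the input exactly, in closed form.
i = int(np.argmax(np.abs(grad_b)))
x_rec = grad_W[i] / grad_b[i]
print(np.allclose(x_rec, x))       # True
```

With batching or aggregation the rows mix multiple inputs, which is exactly what the malicious architecture/parameter modifications in the papers above are designed to disentangle.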
## Empirical Works

- SoK: On Gradient Leakage in Federated Learning [Paper]
  Jiacheng Du, Jiahui Hu, Zhibo Wang, Peng Sun, Neil Zhenqiang Gong, Kui Ren, and Chun Chen
  USENIX Security Symposium (USENIX Security), 2025.
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [Paper]
  Isaac Baglin, Xiatian Zhu, and Simon Hadfield
  arXiv:2411.03019, 2024.
- Evaluating Gradient Inversion Attacks and Defenses in Federated Learning [Paper] [Code]
  Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, and Sanjeev Arora
  Conference on Neural Information Processing Systems (NeurIPS), 2021.