Double Momentum Backdoor Attack in Federated Learning
DOI:
https://doi.org/10.47611/jsrhs.v12i1.3644

Keywords:
federated learning, backdoor attacks, double momentum, defense evasion

Abstract
Federated learning was conceived as a privacy-preserving framework that trains deep neural networks from decentralized data. However, its decentralized nature exposes new attack surfaces. The privacy guarantees of federated learning prevent the server from inspecting clients' local data and training pipelines. These restrictions rule out many common defenses against poisoning attacks, such as data sanitization and traditional anomaly detection methods. The most devastating attacks are usually those that corrupt the model without degrading performance on the main task. Backdoor attacks are prominent examples of adversarial attacks that often go unnoticed in the absence of sophisticated defenses. This paper sheds light on backdoor attacks in federated learning, where we aim to manipulate the global model to misclassify samples belonging to a particular task while maintaining high accuracy on the main objective. Unlike existing works, we adopt a novel approach that directly manipulates the gradients' momentums to introduce the backdoor. Specifically, the double momentum backdoor attack computes two momentums separately, one over malicious inputs and one over the original inputs, and uses them to update the model. Through experimental evaluation, we demonstrate that our attack introduces the backdoor while successfully evading detection.
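Since the abstract only outlines the mechanism, the following is a minimal NumPy sketch of how a malicious client might keep two separate momentum buffers, one accumulated over clean batches and one over trigger-stamped batches, and blend them into its local update. The linear model, the trigger feature, the mixing coefficient alpha, and all hyperparameters below are illustrative assumptions, not the paper's exact formulation.

    # Minimal sketch of a "double momentum" local update (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def grad(w, X, y):
        # Gradient of mean squared error for a linear model; stands in for
        # the gradient of the client's real training loss.
        return 2.0 * X.T @ (X @ w - y) / len(y)

    # Toy clean data and a "poisoned" copy: a trigger pattern is stamped onto
    # the inputs and the targets are set to the attacker's chosen output.
    X_clean = rng.normal(size=(64, 10))
    w_true = rng.normal(size=10)
    y_clean = X_clean @ w_true
    X_poison = X_clean.copy()
    X_poison[:, 0] = 3.0          # hypothetical trigger feature
    y_poison = np.full(64, 5.0)   # attacker-chosen target

    w = np.zeros(10)              # local copy of the global model
    m_clean = np.zeros(10)        # momentum accumulated on clean batches
    m_poison = np.zeros(10)       # momentum accumulated on backdoored batches
    lr, beta, alpha = 0.01, 0.9, 0.5   # alpha balances main task vs. backdoor

    for _ in range(100):
        m_clean = beta * m_clean + (1 - beta) * grad(w, X_clean, y_clean)
        m_poison = beta * m_poison + (1 - beta) * grad(w, X_poison, y_poison)
        # The two momentums are maintained separately and only blended in the
        # final step, so the submitted update stays close to an honest client's.
        w -= lr * (alpha * m_clean + (1 - alpha) * m_poison)

The key design choice this sketch illustrates is that the clean-data momentum anchors the update to the main task while the poisoned-data momentum steadily injects the backdoor, which is what allows the attack to preserve main-task accuracy.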
Copyright (c) 2023 Satwik Panigrahi, Nader Bouacida, Prasant Mohapatra
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute and display this article.