THREATS AND DEFENSES OF PRIVACY IN FEDERATED LEARNING
Abstract
As science and technology advance, people lead more modern and convenient lives and, in doing so, generate ever more data. This data is stored across different devices and application domains, and society is becoming increasingly aware of data privacy issues. Traditional centralized training and conventional artificial intelligence (AI) models face both efficiency and privacy challenges. In recent years, federated learning has emerged as an alternative solution and continues to thrive in the field of artificial intelligence, responding to the demands of everyday life. However, existing federated learning models have been shown to be vulnerable to attackers inside or outside the system, compromising data privacy and system security. Beyond training global models, it is therefore of paramount importance to design federated learning systems that provide privacy guarantees and are resistant to different types of attacks. This study presents a comprehensive survey of privacy in federated learning. After a brief introduction to the concept of federated learning, the survey is organized around: 1) threat models; and 2) privacy attacks and defenses. The key techniques and basic assumptions underlying the various attacks and defenses in federated learning are also introduced to help the reader better understand the nature and preconditions of each attack. Finally, future research directions for protecting privacy in federated learning models are discussed in detail.