Daole, M.; Ducange, P.; Herrera, F.; Marcelloni, F.; Renda, A.; Rodriguez-Barroso, N. (2025). Security Threats to Explainable Classifiers in Federated Learning. In: Proceedings of the 2025 International Joint Conference on Neural Networks (IJCNN 2025), Pontifical Gregorian University, Italy, pp. 1-8. DOI: 10.1109/IJCNN64981.2025.11227961

Security Threats to Explainable Classifiers in Federated Learning

Renda A.;
2025-01-01

Abstract

The decentralized nature of federated learning (FL) poses critical security challenges: clients participating in the process may not be trustworthy and could mount adversarial attacks, potentially undermining the integrity and reliability of the global machine learning model. Security concerns have been extensively investigated in traditional FL, where the collaboratively learned models are typically deep neural networks. However, this class of models does not meet the requirement of explainability, which is considered essential for the trustworthiness of AI systems. In this work, we present an analysis of security threats to the FL of explainable models, namely fuzzy rule-based classifiers (FRBCs). We outline the types of attacks a malicious client may implement and assess, through a preliminary experimental analysis, their impact on the FL of FRBCs in terms of global model performance. We also compare these findings with the effects of the same or similar well-established attacks in traditional FL of neural network models. Finally, we provide insights for improving the security of FRBCs learned in a federated fashion.
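The abstract does not enumerate the specific attacks studied, so the following is only an illustrative, hypothetical sketch of one well-established FL data-poisoning attack of the kind the abstract alludes to: label flipping by malicious clients. A simple logistic-regression learner stands in for the local model (the paper itself concerns FRBCs), and all function names, data, and parameters below are assumptions made for illustration, not the authors' code or experimental setup.

# Minimal sketch (assumption, not from the paper): label-flipping attack
# against FedAvg-style federated learning on synthetic binary data.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n=200):
    # Synthetic 2-feature, 2-class data: class given by the sign of a linear score.
    X = rng.normal(size=(n, 2))
    y = (X @ np.array([1.5, -2.0]) + 0.3 * rng.normal(size=n) > 0).astype(float)
    return X, y

def local_update(X, y, w, lr=0.5, epochs=20):
    # Honest client step: a few epochs of gradient descent on the logistic loss.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return float((((X @ w) > 0).astype(float) == y).mean())

clients = [make_client_data() for _ in range(10)]
X_test, y_test = make_client_data(2000)

for n_malicious in (0, 3):
    w = np.zeros(2)
    for _ in range(30):  # FL rounds
        updates = []
        for i, (X, y) in enumerate(clients):
            # Malicious clients train on flipped labels before sharing their model.
            y_local = 1.0 - y if i < n_malicious else y
            updates.append(local_update(X, y_local, w.copy()))
        w = np.mean(updates, axis=0)  # server-side FedAvg aggregation
    print(f"{n_malicious} malicious clients -> test accuracy {accuracy(X_test, y_test, w):.3f}")

In this toy simulation the averaged global model's test accuracy degrades as the share of label-flipping clients grows, which is the kind of impact on global model performance that the paper assesses experimentally for FRBCs.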
Use this identifier to cite or link to this document: https://hdl.handle.net/11368/3124580