Fed-HKD: Federated Heterogeneous Model Aggregation Using Modified Knowledge Distillation

Gupta, Harshit; Puliafito, Antonio
2025-01-01

Abstract

Federated Learning (FL) is a decentralized machine learning (ML) approach that trains a model while preserving the privacy of the training data. FL plays a vital role in various application domains, including healthcare, object detection, and biometric security. Conventional FL focuses primarily on aggregating homogeneous models, assuming that all participating clients share an identical model architecture. In practice, however, this assumption often does not hold, as participating clients may use different architectures. The proposed approach therefore introduces a solution to deal with model heterogeneity. It extends the existing heterogeneous model aggregation technique MHAT by applying a modified form of knowledge distillation (KD) in conjunction with it. This combination aggregates the knowledge acquired by all participating clients, so that a global predictive model can be built from the information of all clients without directly accessing any individual client's data. The proposed approach yields a significant improvement in FL performance across varying participating client specifications. The evaluation results show that it effectively addresses the issue of model heterogeneity, achieving an accuracy of 88.12% compared to 84.7% for the existing approach.
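
To make the aggregation step concrete, below is a minimal, hypothetical sketch of server-side KD over heterogeneous clients, in the spirit of MHAT-style aggregation: each client reports logits on a shared unlabeled transfer set, the server averages those logits, and a global model is distilled from the averaged soft targets, so no raw client data is ever exchanged. All class and function names (SmallNet, WideNet, distill_global), the dummy transfer set, and the hyperparameters are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of KD-based heterogeneous FL aggregation (illustrative,
# not the paper's code). Clients own different architectures; the server
# distills their averaged soft predictions on a shared unlabeled transfer set
# into a single global model, without accessing client data.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):  # one possible client architecture (assumed)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(784, 64),
                                 nn.ReLU(), nn.Linear(64, 10))
    def forward(self, x):
        return self.net(x)

class WideNet(nn.Module):  # a different client architecture (assumed)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256),
                                 nn.ReLU(), nn.Linear(256, 10))
    def forward(self, x):
        return self.net(x)

def client_logits(model, transfer_x):
    # Each client reports only logits on the shared transfer set;
    # raw training data never leaves the client.
    model.eval()
    with torch.no_grad():
        return model(transfer_x)

def distill_global(global_model, transfer_x, teacher_logits,
                   T=2.0, epochs=5, lr=1e-3):
    # Server-side KD: fit the global model to the softened ensemble
    # predictions using the standard temperature-scaled KL loss.
    opt = torch.optim.Adam(global_model.parameters(), lr=lr)
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    for _ in range(epochs):
        opt.zero_grad()
        log_probs = F.log_softmax(global_model(transfer_x) / T, dim=1)
        loss = F.kl_div(log_probs, soft_targets,
                        reduction="batchmean") * (T * T)
        loss.backward()
        opt.step()
    return global_model

# One communication round on dummy data.
transfer_x = torch.randn(128, 1, 28, 28)      # shared unlabeled transfer set
clients = [SmallNet(), WideNet()]             # heterogeneous client models
logits = torch.stack([client_logits(m, transfer_x) for m in clients])
avg_logits = logits.mean(dim=0)               # aggregate knowledge, not weights
global_model = distill_global(SmallNet(), transfer_x, avg_logits)

Averaging logits rather than model weights is what allows the clients' architectures to differ; the choice of the global model's architecture is independent of the clients', which is the property the abstract's heterogeneous setting relies on.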
Use this identifier to cite or link to this document: https://hdl.handle.net/11570/3339670