Federated learning across the compute continuum: A hierarchical approach with splitNNs and personalized layers
Gupta H.;Merlino G.;Longo F.;Puliafito A.
2025-01-01
Abstract
Federated Learning (FL) allows a Machine Learning (ML) model to be trained collaboratively across distributed devices while preserving the privacy of the training data. Hierarchical Federated Learning (HFL) extends the FL architecture with additional edge servers that perform partial aggregation. FL is very useful for privacy-preserving machine learning, but it suffers from several drawbacks, such as statistical heterogeneity, multiple expensive global iterations, performance degradation due to insufficient data, and slow convergence. To address these drawbacks, this work proposes three HFL-based approaches. The first combines Transfer Learning with HFL, the second uses personalized layers in HFL through 2-tier and 3-tier architectures, and the third combines Split Learning (SL) with HFL in an extended 3-tier architecture. The proposed approaches distribute computation across multiple levels, i.e., client, edge, and cloud, exploiting the hybrid IoT-Edge-Cloud infrastructure, i.e., the compute continuum. The results show that the proposed work improves the accuracy of complex models from 18.10% to 76.91% with faster convergence, and that it outperforms state-of-the-art models. A significant performance improvement was achieved with personalized layers in the HFL-SplitNN architecture, and the proposed 3-tier architecture especially shines when the data per client is less homogeneous. SL played a vital role alongside HFL in enhancing performance, providing a maximum accuracy of 82.38% with Independent and Identically Distributed (IID) data and 52.16% with non-IID data.
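The hierarchical aggregation described above (edge servers performing partial aggregation before a cloud-level global step, with personalized layers kept local) can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: the function name `fedavg`, the layer names, and the two-clients-per-edge grouping are all assumptions made for the example.

```python
# Hedged sketch of 3-tier hierarchical FedAvg with personalized layers.
# Shared layers are averaged at the edge tier, then at the cloud tier;
# the "head" layer is personalized and never leaves its client.
# All names and the toy weights below are illustrative assumptions.

def fedavg(models, sizes, shared_keys):
    """Weighted average of the shared parameters of several models."""
    total = sum(sizes)
    return {
        k: sum(m[k] * n for m, n in zip(models, sizes)) / total
        for k in shared_keys
    }

# Toy client models: two shared layers ("conv", "fc") and a personalized "head".
clients = [
    {"conv": 1.0, "fc": 2.0, "head": 9.0},
    {"conv": 3.0, "fc": 4.0, "head": 7.0},
    {"conv": 5.0, "fc": 6.0, "head": 5.0},
    {"conv": 7.0, "fc": 8.0, "head": 3.0},
]
sizes = [100, 100, 100, 100]   # local dataset sizes (aggregation weights)
SHARED = ["conv", "fc"]        # only these keys are aggregated

# Edge tier: each edge server partially aggregates its own pair of clients.
edge_models = [
    fedavg(clients[0:2], sizes[0:2], SHARED),
    fedavg(clients[2:4], sizes[2:4], SHARED),
]
edge_sizes = [sum(sizes[0:2]), sum(sizes[2:4])]

# Cloud tier: global aggregation over the edge-level models.
global_model = fedavg(edge_models, edge_sizes, SHARED)
print(global_model)  # {'conv': 4.0, 'fc': 5.0} — no 'head' key
```

Because the personalized `head` parameters are excluded from both aggregation tiers, each client retains its own head after a global round, which is the mechanism the abstract credits for handling less homogeneous (non-IID) data per client.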


