
Beyond the Client–Server Paradigm: Modernizing Distributed Architectures Across the Computing Continuum

COLOSI, Mario
2026-04-20

Abstract

The rise of edge computing, ubiquitous client devices, and AI-enabled services is challenging the traditional client–server paradigm underlying web-based distributed applications. Conventional architectures essentially treat clients as passive endpoints, while computation and orchestration remain centralized in cloud infrastructures, with limited support for edge resources. At the same time, modern applications increasingly rely on heterogeneous and data-intensive AI workloads that require flexible placement, strong privacy guarantees, and efficient use of distributed resources. This dissertation argues that these requirements can be addressed by treating the computing continuum, spanning client devices, edge nodes, and cloud data centers, as a unified execution substrate beyond the traditional client–server model.

The first contribution is architectural and extends the computing continuum by promoting client devices to first-class computation and deployment nodes. Using web-centric and portable runtimes such as WebAssembly and ONNX, the dissertation introduces abstractions and middleware that enable microservices and serverless functions to be executed on client devices without explicit installation or configuration. This approach improves resource utilization, reduces reliance on centralized infrastructures, and enables local processing of sensitive data. Experimental evaluations in representative application scenarios demonstrate the feasibility of dynamically forming continuum-wide execution environments using existing client and edge resources.

The second contribution focuses on integrating federated and distributed AI across the computing continuum. The dissertation presents systems that support plug-and-play participation of heterogeneous client devices in federated learning processes, including browser-based environments. Novel aggregation strategies and learning objectives are introduced to mitigate the impact of unreliable or malicious clients, improving the robustness and fairness of collaboratively trained models.

The third contribution addresses continuum-aware orchestration of distributed applications. The dissertation investigates the execution of microservices and serverless workflows across client, edge, and cloud environments, and proposes orchestration mechanisms that operate under end-to-end quality-of-service constraints. These mechanisms leverage predictive models and distributed, self-supervised context information to guide function placement and resource allocation across heterogeneous deployments.

Finally, the proposed architectures, learning mechanisms, and orchestration strategies are evaluated on realistic computing continuum testbeds that combine client devices, edge platforms, and cloud infrastructures. The results show that treating the computing continuum as a unified execution substrate improves resource utilization and cost efficiency, reduces latency, and enhances privacy guarantees, while increasing robustness and fairness in collaborative AI workloads. Overall, this work demonstrates that modernizing distributed application architectures across the computing continuum is both feasible and beneficial, and outlines a concrete path beyond the client–server paradigm for next-generation AI-enabled distributed systems.
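The abstract does not detail the dissertation's aggregation strategies, but a minimal sketch can illustrate the general class of robust aggregation rules it alludes to. The example below uses a coordinate-wise trimmed mean, a well-known technique in which the most extreme client updates are discarded per coordinate before averaging, limiting the influence of unreliable or malicious participants. All function and variable names are hypothetical and not taken from the dissertation.

```python
def trimmed_mean(updates, trim):
    """Coordinate-wise trimmed mean over client model updates.

    updates: list of equal-length parameter vectors, one per client.
    trim: number of extreme values dropped at each end, per coordinate.
    """
    n = len(updates)
    assert n > 2 * trim, "need more clients than trimmed values"
    dim = len(updates[0])
    aggregated = []
    for j in range(dim):
        column = sorted(u[j] for u in updates)      # j-th coordinate across clients
        kept = column[trim:n - trim]                # drop `trim` lowest and highest
        aggregated.append(sum(kept) / len(kept))
    return aggregated

# Four honest clients report values near 1.0; one malicious client reports 100.0.
updates = [[1.0], [1.1], [0.9], [1.0], [100.0]]
print(trimmed_mean(updates, trim=1))  # the outlier is discarded before averaging
```

With `trim=1`, the single malicious value is removed from each coordinate before the mean is taken, so the aggregate stays close to the honest clients' reports; a plain average would instead be pulled toward 100.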
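Similarly, the orchestration mechanisms themselves are not specified in the abstract; as a generic illustration of placement under end-to-end quality-of-service constraints, the sketch below greedily assigns each function to the cheapest node (client, edge, or cloud) that satisfies its latency bound and still has capacity. The node attributes, costs, and the greedy policy are illustrative assumptions, not the dissertation's actual predictive mechanisms.

```python
def place(functions, nodes):
    """Greedy placement: assign each function to the cheapest node that
    satisfies its end-to-end latency bound and has a free execution slot.
    Tighter latency bounds are placed first."""
    placement = {}
    for fn in sorted(functions, key=lambda f: f["max_latency_ms"]):
        candidates = [n for n in nodes
                      if n["latency_ms"] <= fn["max_latency_ms"]
                      and n["free_slots"] > 0]
        if not candidates:
            raise RuntimeError(f"no feasible node for {fn['name']}")
        best = min(candidates, key=lambda n: n["cost"])  # cheapest feasible node
        best["free_slots"] -= 1
        placement[fn["name"]] = best["name"]
    return placement

# Hypothetical continuum: a client device, an edge node, and a cloud region.
nodes = [
    {"name": "client", "latency_ms": 1,  "cost": 0, "free_slots": 1},
    {"name": "edge",   "latency_ms": 10, "cost": 2, "free_slots": 2},
    {"name": "cloud",  "latency_ms": 80, "cost": 1, "free_slots": 8},
]
functions = [
    {"name": "inference",   "max_latency_ms": 5},    # latency-critical
    {"name": "batch-train", "max_latency_ms": 500},  # latency-tolerant
]
result = place(functions, nodes)
print(result)  # latency-critical work lands on the client, tolerant work in the cloud
```

The point of the sketch is the constraint structure rather than the policy: once client devices are admitted as deployment nodes, the placement problem spans the whole continuum, and latency-critical functions can legitimately land on the client itself.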
Keywords: distributed systems; computing continuum; edge computing; federated learning; serverless architectures; AI-oriented orchestration

Use this identifier to cite or link to this document: https://hdl.handle.net/11570/3352018