FEDERATED LEARNING IN THE CLOUD: A STUDY OF PRIVACY-PRESERVING AI ARCHITECTURES AND RECENT IMPLEMENTATIONS ACROSS THE THREE MAJOR CLOUD SERVICE PROVIDERS
DOI: https://doi.org/10.52152/801099

Keywords: Federated Learning, Cloud Computing, Privacy-Preserving AI, Secure Aggregation, Differential Privacy

Abstract
Federated Learning (FL) represents a paradigm shift in privacy-preserving artificial intelligence, particularly in cloud environments where regulatory and ethical constraints prevent sensitive data from being centralized. This paper examines how FL can be designed and deployed on the three major cloud service providers, AWS, Google Cloud, and Microsoft Azure, with a focus on four fundamental algorithms: Federated Averaging (FedAvg), Differentially Private Stochastic Gradient Descent (DP-SGD), Secure Aggregation, and Trusted Execution Environment-based FL (TEE-FL). Experiments on synthetic healthcare and financial data measured model accuracy, communication overhead, and privacy preservation. The findings indicate that FedAvg achieved the highest accuracy (92 percent) but offered no formal privacy guarantees; DP-SGD provided the strongest differential privacy at the cost of a moderate accuracy drop (88 percent); Secure Aggregation delivered the strongest protection against the aggregating server; and TEE-FL offered the best overall trade-off, reaching 90 percent accuracy with only 8 percent overhead. Compared with related work, this study confirms that combining secure aggregation with hardware-assisted trust yields hybrid approaches that scale in practice without materially degrading performance. Overall, the results underscore the promise of federated learning in cloud ecosystems as a foundation for ethically sound and regulation-compliant AI deployment.
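To make the aggregation step referenced above concrete, the following is a minimal, illustrative sketch of FedAvg: the server forms a weighted average of client parameter updates, with weights proportional to each client's local dataset size. The function and variable names and the toy data are assumptions for illustration only and are not taken from the paper's implementation; a DP-SGD variant would additionally clip per-example gradients and add calibrated Gaussian noise on each client before aggregation.

# Minimal FedAvg sketch (illustrative only; names and shapes are assumed, not from the paper).
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters, proportional to local dataset size."""
    total = sum(client_sizes)
    # Start from zeroed parameters shaped like the first client's model.
    aggregated = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n_k in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += (n_k / total) * w
    return aggregated

# Example: three hypothetical clients, each holding a single 2x2 parameter matrix.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [100, 200, 700]  # local dataset sizes
global_update = fed_avg(clients, sizes)
print(global_update[0])  # result is weighted toward the largest client (2.6)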
License
Copyright (c) 2025 Lex localis - Journal of Local Self-Government

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.