Enhancing federated learning efficiency through dynamic model adaptation and optimization
- UNCG Author/Contributor (non-UNCG co-authors, if any, appear on the document)
- Bhuvana Korrapati (Creator)
- Institution
- The University of North Carolina at Greensboro (UNCG)
- Web Site: http://library.uncg.edu/
- Advisor
- Jing Deng
Abstract: Federated learning (FL) in cloud computing has emerged as a groundbreaking paradigm, revolutionizing data processing and machine learning through decentralized and scalable systems. However, the integration of these technologies faces challenges in communication efficiency and data privacy preservation, which are crucial for their widespread adoption and effectiveness. This research presents a dynamic federated learning approach that incorporates model compression, PCA-based dimensionality reduction, and fine-tuning to address these challenges. The proposed method dynamically determines the optimal number of PCA components based on client data variability, effectively reducing data dimensionality. By applying model compression techniques, including pruning and quantization, the approach enhances communication efficiency without compromising performance. Furthermore, the integration of fine-tuning as a knowledge distillation step allows the compressed models to adapt to client-specific data patterns, thereby tackling issues of skewness and overfitting. To address the bandwidth and latency challenges of FL, the evaluation uses Average Communication Cost, Average Bandwidth Utilization, and Average Latency as metrics, demonstrating the approach's effectiveness on these key performance indicators. Moreover, the framework incorporates dynamic model adaptation on both the client side and the server side, enabling personalized adjustments based on local data characteristics and client resources while optimizing the global model's performance. Validated on both the MNIST and CIFAR-10 datasets, the approach maintains accuracy under data reduction across various levels of data skewness and complexity. The proposed federated learning framework follows a comprehensive workflow that encompasses server initialization, model distribution, client-side processing (including data dimensionality reduction, model compression, local training, fine-tuning, and dynamic adaptation), server-side aggregation, global model update, model evaluation, and iterative refinement. This research contributes to the advancing field of cloud-based FL by presenting an efficient, privacy-preserving, and scalable approach to distributed machine learning, setting a new standard for communication efficiency in decentralized data environments and paving the way for a next generation of federated learning systems that prioritize efficiency, privacy, and scalability.
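To make the dynamic PCA step concrete, the sketch below selects the number of components per client from the cumulative explained-variance spectrum of that client's local data, so more variable clients keep more components. This is a minimal illustration assuming a scikit-learn-style API; the 95% threshold and the `select_pca_components` helper are hypothetical, not details taken from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA

def select_pca_components(client_data, variance_threshold=0.95):
    """Pick the smallest component count whose cumulative explained
    variance reaches the threshold, then refit PCA at that size."""
    full = PCA().fit(client_data)                       # full variance spectrum
    cumulative = np.cumsum(full.explained_variance_ratio_)
    n_components = int(np.searchsorted(cumulative, variance_threshold) + 1)
    return PCA(n_components=n_components).fit(client_data)

rng = np.random.default_rng(0)
client_data = rng.normal(size=(500, 64))                # stand-in for local data
pca = select_pca_components(client_data)
reduced = pca.transform(client_data)                    # features used for local training
```

scikit-learn's PCA also accepts a float `n_components` (e.g. `PCA(n_components=0.95)`) that performs this selection internally; the explicit version above simply makes the per-client variability criterion visible.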
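For the compression step, pruning and post-training quantization can be combined along the following lines, here using standard PyTorch utilities; the 30% pruning ratio and the toy model are assumptions for illustration, not the thesis's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# Magnitude (L1) pruning: zero out the 30% smallest weights per Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # bake the pruning mask into the weights

# Dynamic quantization: store Linear weights as 8-bit integers, shrinking
# the update each client sends back to the server.
compressed = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```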
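The fine-tuning-as-knowledge-distillation step can be expressed with the standard distillation loss, where the uncompressed model acts as teacher for the compressed student on the client's local batches; the temperature and mixing weight below are hypothetical hyperparameters.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft term matches the teacher's temperature-softened distribution;
    hard term fits the client's own labels, adapting the compressed model
    to local (possibly skewed) data."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```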
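Finally, server-side aggregation and the reported communication metrics can be approximated as below: plain FedAvg over the client updates, with per-round cost measured as serialized payload size and latency estimated as transfer time over an assumed uplink. These simplified definitions and the 1 Mbps bandwidth are illustrative, not the thesis's exact formulas.

```python
import io
import torch

def payload_bytes(state_dict):
    """Serialized size of one client's model update, in bytes."""
    buf = io.BytesIO()
    torch.save(state_dict, buf)
    return buf.getbuffer().nbytes

def fedavg(client_updates):
    """Element-wise mean of client state dicts -> new global state."""
    return {k: torch.stack([u[k].float() for u in client_updates]).mean(dim=0)
            for k in client_updates[0]}

def round_metrics(client_updates, uplink_bps=1e6):
    """Average Communication Cost (bytes) and Average Latency (seconds,
    transfer time only) for one round; uplink_bps is an assumed bandwidth."""
    costs = [payload_bytes(u) for u in client_updates]
    avg_cost = sum(costs) / len(costs)
    avg_latency = (avg_cost * 8) / uplink_bps
    return avg_cost, avg_latency
```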
Additional Information
- Publication
- Thesis
- Language: English
- Date: 2024
- Keywords
- Communication Efficiency, Data Privacy, Federated Learning, Model Compression, Principal Component Analysis (PCA), Scalable Machine Learning
- Subjects
- Federated learning (Machine learning)
- Computer security