BiPruneFL: Computation and Communication Efficient Federated Learning With Binary Quantization and Pruning


Federated learning (FL) is a decentralized learning framework that allows a central server and multiple devices, referred to as clients, to collaboratively train a shared model without transmitting their private data to the server. This approach helps to preserve data privacy and reduce the risk of information leakage. However, FL systems often face significant communication and computational overhead due to frequent exchanges of model parameters and the intensive local training required on resource-constrained clients.
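To make the FL setup concrete, below is a minimal sketch of one FedAvg-style training round: each client runs local training on data that never leaves the device, and the server only aggregates the resulting parameter updates. The toy least-squares objective, the function names, and the dataset-size weighting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    """One client's local update on a toy least-squares objective.
    The raw data (X, y) never leaves the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server side of one FedAvg-style round: broadcast the global model,
    collect locally trained models, and average them weighted by local data size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_train(global_w, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, dtype=float))

# Toy usage: three clients, a 5-parameter linear model, ten rounds.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)
```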

Existing solutions typically apply compression techniques such as quantization or pruning, but only to a limited extent, constrained by the trade-off between model accuracy and compression efficiency. To address these challenges, we propose BiPruneFL, a communication- and computation-efficient FL framework that combines quantization and pruning while maintaining competitive accuracy. By leveraging recent advances in neural network pruning, BiPruneFL identifies subnetworks within binary neural networks without significantly compromising accuracy.
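The abstract does not spell out the exact binarization or pruning rules, so the following sketch only illustrates the two ingredients BiPruneFL combines: sign-based weight binarization with a per-tensor scaling factor (a common binary-network formulation) and a magnitude-based mask that keeps a subnetwork of the binary weights. The helper names and the 70% sparsity level are assumptions for illustration.

```python
import numpy as np

def binarize(w):
    """Binarize weights to {-alpha, +alpha}, with alpha the mean absolute value
    (a common choice in binary neural networks; not necessarily the paper's rule)."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def magnitude_prune_mask(w, sparsity=0.7):
    """Zero out the smallest-magnitude `sparsity` fraction of weights,
    keeping a subnetwork of the remaining positions."""
    k = int(np.ceil(sparsity * w.size))
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return (np.abs(w) > threshold).astype(w.dtype)

rng = np.random.default_rng(1)
w = rng.normal(size=(256, 128))
mask = magnitude_prune_mask(w, sparsity=0.7)
w_compressed = mask * binarize(w)  # binary values on the surviving subnetwork only
print(f"nonzero fraction: {np.count_nonzero(w_compressed) / w.size:.2f}")
```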

Additionally, we employ communication compression strategies to enable efficient model updates and computationally lightweight local training. Through experiments, we demonstrate that BiPruneFL significantly outperforms other baselines, achieving up to $88.1\times$ and $80.8\times$ more efficient communication costs during upstream and downstream phases, respectively, and reducing computation costs by $3.9\times$ to $34.9\times$ depending on the degree of quantization.
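As a rough intuition for where such communication savings can come from (not the paper's exact accounting, which involves additional techniques), the sketch below compares a dense 32-bit update against a hypothetical compressed update consisting of a 1-bit pruning mask over all positions, one sign bit per surviving weight, and a single float scaling factor.

```python
def payload_bytes_dense_fp32(n_weights):
    """Uncompressed update: one 32-bit float per weight."""
    return 4 * n_weights

def payload_bytes_binary_pruned(n_weights, sparsity):
    """Hypothetical compressed update: a 1-bit mask over all positions,
    one sign bit per kept weight, and a 4-byte scaling factor."""
    n_kept = int(round((1 - sparsity) * n_weights))
    total_bits = n_weights + n_kept
    return (total_bits + 7) // 8 + 4

n = 1_000_000
for s in (0.5, 0.9):
    ratio = payload_bytes_dense_fp32(n) / payload_bytes_binary_pruned(n, s)
    print(f"sparsity {s:.0%}: ~{ratio:.1f}x smaller upstream payload")
```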

Despite these efficiency gains, BiPruneFL achieves accuracy comparable to, and in some cases surpassing, that of uncompressed federated learning models.
