One paper has been accepted to ICLR 2022. ICLR is one of the major international conferences on machine learning and related areas.
Title: FedPara: Low-Rank Hadamard Product For Communication-Efficient Federated Learning
Authors: Nam Hyeon-Woo (POSTECH), Moon Ye-Bin (POSTECH), Tae-Hyun Oh (POSTECH)
[Abstract]
In this work, we propose a communication-efficient parameterization, FedPara, for federated learning (FL) to overcome the burden of frequent model uploads and downloads.
Our method re-parameterizes the weight parameters of layers using low-rank weights followed by the Hadamard product.
Unlike conventional low-rank parameterization, our FedPara method is not restricted to low-rank constraints and therefore has a far larger capacity.
This property allows FedPara to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable with traditional low-rank methods.
The efficiency of our method can be further improved by combining it with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones.
We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters.
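For readers curious how such a parameterization might look in code, below is a minimal PyTorch sketch of the low-rank Hadamard product idea described in the abstract. The class name HadamardLowRankLinear, the rank value, and the initialization are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class HadamardLowRankLinear(nn.Module):
    """Linear layer whose weight is the Hadamard (element-wise) product of
    two low-rank factorizations. Illustrative sketch only; names, shapes,
    and initialization are assumptions, not the authors' code."""

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        # Two independent low-rank factor pairs; only these small matrices
        # (plus the bias) would need to be exchanged between server and clients.
        self.X1 = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.Y1 = nn.Parameter(torch.randn(in_features, rank) * 0.02)
        self.X2 = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.Y2 = nn.Parameter(torch.randn(in_features, rank) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each factorization X @ Y.T has rank at most `rank`, but their
        # element-wise product can reach rank up to rank**2, which is why
        # the construction is not bound by the usual low-rank constraint.
        weight = (self.X1 @ self.Y1.T) * (self.X2 @ self.Y2.T)
        return x @ weight.T + self.bias
```

As a rough sense of scale, with in_features = out_features = 512 and rank = 16 this layer stores about 33K parameters instead of roughly 263K for a standard linear layer, while the Hadamard product lets the effective weight rank grow up to rank squared, consistent with the capacity argument in the abstract.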
[Results]