FedDyS: Enhancing Federated Learning Efficiency with Dynamic Sample Selection
Kiyamousavi SE., Kraychev B., Koychev I.
Federated learning (FL) trains models across multiple devices while keeping data localized, addressing privacy and efficiency concerns. It faces challenges such as data heterogeneity, where diverse local data can hinder the performance of a global model, and high computational costs on resource-limited devices such as smartphones or IoT sensors. These constraints can prolong training times, increase energy use, and hasten device wear. To address these issues, this paper introduces FedDyS, a dynamic sample selection technique that reduces computational demands and mitigates data heterogeneity by eliminating non-essential training samples. This not only shrinks the training set size on local devices but also enhances data diversity, enabling more efficient training while preserving data privacy. Additionally, FedDyS prevents catastrophic forgetting, a common challenge in FL. Our experiments show that FedDyS surpasses traditional FL methods in accuracy and convergence speed while using less than 15% of the usual number of samples, making it well suited to low-resource settings. The code for FedDyS is publicly available at: https://github.com/ensiyeKiya/FedDyS
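As a rough illustration of what client-side dynamic sample selection could look like, the sketch below ranks a client's local samples by their per-sample loss under the current global model and keeps only a small fraction for local training. This is a minimal sketch under assumed details, not the paper's actual scoring rule: the function name, the loss-based importance score, and the `keep_fraction` parameter are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def select_dynamic_subset(model, dataset, keep_fraction=0.15, device="cpu"):
    """Keep only the most informative local samples for this round.

    Illustrative sketch (not the authors' exact criterion): samples are
    scored by per-sample cross-entropy loss under the current global
    model, and only the top `keep_fraction` are retained, so the client
    trains on a much smaller subset while raw data never leaves the device.
    """
    model.eval()
    loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=False)
    scores = []
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            logits = model(x)
            # Hypothetical importance score: per-sample loss.
            scores.append(F.cross_entropy(logits, y, reduction="none").cpu())
    scores = torch.cat(scores)
    k = max(1, int(keep_fraction * len(dataset)))
    keep_idx = torch.topk(scores, k).indices.tolist()
    return torch.utils.data.Subset(dataset, keep_idx)
```

In such a setup, each client would call a routine like this before its local epochs in every round, so local training touches only a small fraction of the stored samples, in line with the sub-15% sample budget reported above.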