Federated Learning

Combating Data Imbalances in Federated Semi-supervised Learning with Dual Regulators
Federated learning has become a popular method to learn from decentralized heterogeneous data. Federated semi-supervised learning (FSSL) has emerged to train models from a small fraction of labeled data, owing to label scarcity on decentralized clients. Existing FSSL methods assume independent and identically distributed (IID) labeled data across clients and a consistent class distribution between labeled and unlabeled data within a client. This work studies a more practical and challenging FSSL scenario, where data distributions differ not only across clients but also within a client between labeled and unlabeled data. To address this challenge, we propose a novel FSSL framework with dual regulators, FedDure. FedDure lifts the previous assumption with a coarse-grained regulator (C-reg) and a fine-grained regulator (F-reg): C-reg regularizes the local model update by tracking the learning effect on the labeled data distribution; F-reg learns an adaptive weighting scheme tailored to the unlabeled instances in each client. We further formulate client model training as a bi-level optimization problem that adaptively optimizes the client model with the two regulators. Theoretically, we show a convergence guarantee for the dual regulators. Empirically, we demonstrate that FedDure is superior to existing methods across a wide range of settings, notably by more than 11% on the CIFAR-10 and CINIC-10 datasets.
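The fine-grained regulator above assigns each unlabeled instance its own loss weight so that a mismatched unlabeled distribution cannot dominate the client update. A minimal sketch of that idea, in plain Python; the function names and the way weights enter the loss are illustrative assumptions, not the paper's exact formulation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def weighted_pseudo_label_loss(unlabeled_logits, instance_weights):
    """Cross-entropy against hard pseudo-labels, scaled per instance.

    `instance_weights` stands in for the per-instance weights that a
    fine-grained regulator like F-reg would produce (hypothetical API).
    """
    total = 0.0
    for logits, w in zip(unlabeled_logits, instance_weights):
        probs = softmax(logits)
        # hard pseudo-label = the model's most confident class
        pseudo = max(range(len(probs)), key=probs.__getitem__)
        total += -w * math.log(probs[pseudo])
    return total / len(unlabeled_logits)
```

Setting a weight to zero effectively drops that instance from the update, which is how an adaptive weighting scheme can down-weight unlabeled samples that conflict with the labeled distribution.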
Optimizing Federated Unsupervised Person Re-identification via Camera-aware Clustering
Person re-identification (ReID) is a critical computer vision problem that identifies individuals across non-overlapping cameras. Many recent works on person ReID achieve remarkable performance by extracting features from large amounts of data using deep neural networks. However, growing awareness of privacy concerns limits the development of person ReID. Prior studies employ federated person ReID to learn from decentralized edges without sharing raw data, but they overlook the variation of identities across different camera views. To address this issue, we propose FedUCA, a federated unsupervised person ReID approach that leverages camera information to improve learning from decentralized unlabeled data. Specifically, FedUCA jointly learns person ReID models by transmitting training updates instead of raw data. We generate pseudo-labels for unlabeled local datasets on edges by clustering them into multiple groups according to different cameras. We then introduce contrastive learning with an intra-camera loss and an inter-camera loss to enhance discrimination ability. In extensive experiments on eight person ReID datasets, our proposed approach significantly outperforms the state-of-the-art federated learning method, improving performance by 6% to 32% on these datasets, and notably by over 25% on large datasets. We hope this paper will shed light on optimizing federated learning across a broader range of multimedia applications.
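The camera-aware pseudo-labelling step described above can be sketched as follows: unlabeled features are partitioned by camera ID and clustered independently within each camera, so per-camera appearance shifts do not merge distinct identities. This is a hedged illustration, not FedUCA's actual implementation; the `cluster_fn` parameter is a stand-in for whatever clustering algorithm is used per camera:

```python
from collections import defaultdict

def camera_aware_pseudo_labels(features, camera_ids, cluster_fn):
    """Return one pseudo-label per sample, clustering within each camera.

    Labels are made globally unique by offsetting each camera's cluster
    ids, so clusters from different cameras never share a label.
    (Illustrative sketch; `cluster_fn` maps a list of feature vectors
    to a list of integer cluster ids starting at 0.)
    """
    by_cam = defaultdict(list)
    for idx, cam in enumerate(camera_ids):
        by_cam[cam].append(idx)

    labels = [None] * len(features)
    offset = 0
    for cam, idxs in sorted(by_cam.items()):
        cam_feats = [features[i] for i in idxs]
        cam_labels = cluster_fn(cam_feats)  # cluster within one camera view
        for i, lab in zip(idxs, cam_labels):
            labels[i] = offset + lab
        offset += max(cam_labels) + 1       # keep labels disjoint across cameras
    return labels
```

With such per-camera pseudo-labels, an intra-camera contrastive loss can pull together samples sharing a label within one camera, while an inter-camera loss associates clusters across cameras.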