We present COALA, a vision-centric Federated Learning (FL) platform, together with a suite of benchmarks for practical FL scenarios, categorized at three levels: task, data, and model. At the task level, COALA extends support beyond simple classification to 15 computer vision tasks, including object detection, segmentation, and pose estimation. It also facilitates federated multiple-task learning, allowing clients to train on multiple tasks simultaneously. At the data level, COALA supports varying availability of data annotations, covering fully labeled, partially labeled, and unlabeled data. From the data distribution perspective, it supports both continual and test-time distribution shifts, encompassing label shift as well as domain shift. At the model level, COALA benchmarks FL with split models and with heterogeneous models across clients.
Weiming Zhuang,
Jian Xu,
Chen Chen,
Jingtao Li,
Lingjuan Lyu
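The abstract mentions label shift among clients. A common way FL benchmarks simulate label shift is Dirichlet-based partitioning, where each class is spread across clients with Dirichlet-distributed proportions; smaller concentration values produce stronger skew. The sketch below is a generic illustration of this technique, not COALA's actual API; the function name and parameters are assumptions.

```python
import numpy as np

def dirichlet_label_shift(labels, num_clients, alpha, seed=0):
    """Hypothetical helper: partition sample indices across clients with
    Dirichlet-distributed per-class proportions (smaller alpha -> stronger
    label shift). Illustrative only; not COALA's API."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of class c assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Example: 1000 samples over 10 classes, split across 5 clients.
labels = [i % 10 for i in range(1000)]
parts = dirichlet_label_shift(labels, num_clients=5, alpha=0.5)
```

With small `alpha` (e.g. 0.1) each client ends up dominated by a few classes; large `alpha` approaches a uniform (IID) split.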