Accelerating Federated Edge Learning via Topology Optimization
Shanfeng Huang, Zezhong Zhang, Shuai Wang, Rui Wang, Kaibin Huang
Federated edge learning (FEEL) is envisioned as a promising paradigm for
privacy-preserving distributed learning. However, it incurs excessive
learning time due to straggler devices. In this paper, a novel
topology-optimized federated edge learning (TOFEL) scheme is proposed to tackle
the heterogeneity issue in federated learning and to improve the
communication-and-computation efficiency. Specifically, a problem of jointly
optimizing the aggregation topology and computing speed is formulated to
minimize the weighted sum of energy consumption and latency. To solve the
mixed-integer nonlinear problem, we propose a novel solution method of
penalty-based successive convex approximation, which converges to a stationary
point of the primal problem under mild conditions. To facilitate real-time
decision making, an imitation-learning based method is developed, where deep
neural networks (DNNs) are trained offline to mimic the penalty-based method,
and the trained imitation DNNs are deployed at the edge devices for online
inference. Thereby, an efficient imitation-learning based approach is seamlessly
integrated into the TOFEL framework. Simulation results demonstrate that the
proposed TOFEL scheme accelerates the federated learning process and achieves
higher energy efficiency. Moreover, we apply the scheme to 3D object
detection with multi-vehicle point cloud datasets in the CARLA simulator. The
results confirm the superior learning performance of the TOFEL scheme over
conventional designs with the same resource and deadline constraints.