by PhyArith in Fig. 6 (55,479 erroneous summations across all 95,955 configurations). The experiment results show that none of these erroneous summations passes the verification.

Fig. 6. Outage ratio of obtaining the exact summation of all update vector chunks transmitted simultaneously in the first round of FedSGD, based on PhyArith and MUD, where FedSGD runs with configuration (a) 4 × 8; (b) 8 × 8; (c) 12 × 8; (d) 16 × 8, the learning rate is η = 0.01, and updates are compressed using 10-bit quantization. [Each panel plots the outage ratio versus SNR (dB) for PhyArith with ∆ = 2, PhyArith with ∆ = 3, LMMSE-IC, and GM-IC.]

C. Test Accuracy of LeNet-5 Trained by FedSGD

We then show the test accuracy of LeNet-5 trained by FedSGD with configuration 16 × 8, where the SNR of the uplink MU-MIMO channel is set to 23.33 dB and the client-side updates are aggregated based on PhyArith and on the MUD approach LMMSE-IC. Recall from the results in Fig. 6 that the MUD approach cannot handle the 16 × 8 configuration, in which the updates from all 16 clients are transmitted to the server simultaneously. To be fair to the MUD approach, we let the 16 updates be transmitted simultaneously in smaller groups over several slots, so that they can still be aggregated efficiently. The experiment results in Fig. 7 and Fig. 8 show that, to reach the same test accuracy, our proposal PhyArith further improves the communication efficiency by 1.5 to 3 times compared with the solutions that apply quantization/sparsification to compress the updates and aggregate them via MUD.
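For concreteness, the sketch below illustrates the two client-side compression schemes evaluated in Fig. 7 and Fig. 8: uniform b-bit quantization and top-k sparsification followed by quantization. It is a minimal NumPy illustration, not the implementation used in the experiments; the function names, the symmetric quantization range, and the random stand-in update vector are assumptions made purely for this example.

```python
# Minimal sketch (assumptions noted above) of uniform b-bit quantization and
# top-k sparsification of a model update, as in the Fig. 7 / Fig. 8 baselines.
import numpy as np

def quantize(update: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly quantize every entry to 2**bits levels over [-m, m]."""
    m = float(np.max(np.abs(update))) + 1e-12
    levels = 2 ** bits - 1
    q = np.round((update + m) / (2 * m) * levels)   # integer grid in [0, levels]
    return q / levels * 2 * m - m                   # dequantized values

def top_k_sparsify(update: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep only the largest-magnitude entries (e.g. top-3%), zero out the rest."""
    k = max(1, int(keep_ratio * update.size))
    threshold = np.partition(np.abs(update), -k)[-k]
    return np.where(np.abs(update) >= threshold, update, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = 0.01 * rng.standard_normal(10_000)               # stand-in model update
    g_2bit = quantize(g, bits=2)                         # cf. Fig. 7(c)
    g_top3 = quantize(top_k_sparsify(g, 0.03), bits=8)   # cf. Fig. 8(d)
    print("2-bit quantization MSE:   ", np.mean((g - g_2bit) ** 2))
    print("top-3% + 8-bit quant. MSE:", np.mean((g - g_top3) ** 2))
```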
Fig. 7. Test accuracy of LeNet-5 trained by FedSGD, where the learning rate is η = 0.01 and updates are compressed using (a) no compression; (b) 10-bit; (c) 2-bit; (d) 1-bit quantization, aggregated based on PhyArith and LMMSE-IC. [Each panel plots test accuracy versus normalized communication time for PhyArith with ∆ = 3 and LMMSE-IC.]

Fig. 8. Test accuracy of LeNet-5 trained by FedSGD, where the learning rate is η = 0.01 and updates are compressed using (a) no compression; (b) top-30%; (c) top-6%; (d) top-3% sparsification combined with 8-bit quantization, aggregated based on PhyArith and LMMSE-IC. [Each panel plots test accuracy versus normalized communication time for PhyArith with ∆ = 2 and LMMSE-IC.]

X. CONCLUSION

In this paper, we propose PhyArith, which improves the communication efficiency of federated learning in wireless networks featuring uplink MU-MIMO by directly aggregating the client-side updates from the superimposed RF signal, without affecting the convergence of the training procedure. In the future, we would like to make other server-side aggregation methods available in PhyArith and to further improve its decoding performance.

APPENDIX

A. Calculation of Entropy

When $|h_0| = |h_1| = 1$, the entropy $H(y)$ is given by
$$
H(y) = H\left\{ \frac{1}{4\sqrt{2\pi}\,\sigma} \left[ 2\exp\!\left(-\frac{y^2}{2\sigma^2}\right) + \exp\!\left(-\frac{(y+2)^2}{2\sigma^2}\right) + \exp\!\left(-\frac{(y-2)^2}{2\sigma^2}\right) \right] \right\},
$$
the entropy $H(y \mid s_0, s_1)$ is given by
$$
H(y \mid s_0, s_1) = H\left\{ \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{y^2}{2\sigma^2}\right) \right\},
$$
and the entropy $H(y \mid s_0)$ is given by
$$
H(y \mid s_0) = H\left\{ \frac{1}{2\sqrt{2\pi}\,\sigma} \left[ \exp\!\left(-\frac{(y+1)^2}{2\sigma^2}\right) + \exp\!\left(-\frac{(y-1)^2}{2\sigma^2}\right) \right] \right\}.
$$
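As a numerical sanity check of the expressions above, the following sketch evaluates the three differential entropies on a fine grid. It is a minimal illustration rather than part of the paper: it interprets H{·} as the differential entropy of the stated density, assumes the underlying model y = h0·s0 + h1·s1 + n with equiprobable BPSK symbols s0, s1 ∈ {−1, +1} and real Gaussian noise, and uses sigma = 0.5 as an arbitrary example value.

```python
# Numerical check (minimal sketch, assumptions in the lead-in) of the
# differential entropies H(y), H(y|s0), and H(y|s0,s1) given in Appendix A.
import numpy as np

def gauss(y, mu, sigma):
    """Gaussian density N(mu, sigma^2) evaluated at y."""
    return np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def diff_entropy(p, dy):
    """Differential entropy (in nats) of a density sampled on a uniform grid."""
    p = np.clip(p, 1e-300, None)        # avoid log(0)
    return -np.sum(p * np.log(p)) * dy

sigma = 0.5                              # example noise level (assumption)
y = np.linspace(-8.0, 8.0, 200_001)
dy = y[1] - y[0]

# Density of y: the four equiprobable symbol pairs give means {-2, 0, 0, +2}.
p_y = 0.25 * (2 * gauss(y, 0, sigma) + gauss(y, 2, sigma) + gauss(y, -2, sigma))
# Conditional density given s0: up to a shift, a two-component mixture at +-1,
# matching the H(y|s0) expression above (differential entropy is shift-invariant).
p_y_s0 = 0.5 * (gauss(y, 1, sigma) + gauss(y, -1, sigma))
# Conditional density given both symbols: pure Gaussian noise.
p_y_s0s1 = gauss(y, 0, sigma)

print(f"H(y)       = {diff_entropy(p_y, dy):.4f} nats")
print(f"H(y|s0)    = {diff_entropy(p_y_s0, dy):.4f} nats")
print(f"H(y|s0,s1) = {diff_entropy(p_y_s0s1, dy):.4f} nats")
# Cross-check for the pure-Gaussian case: 0.5*ln(2*pi*e*sigma^2).
print(f"closed form H(y|s0,s1) = {0.5 * np.log(2 * np.pi * np.e * sigma ** 2):.4f} nats")
```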