REFERENCES

[1] H. B. McMahan, E. Moore, D. Ramage, S. Hampson et al., "Federated learning: Collaborative machine learning without centralized training data," https://ai.googleblog.com/2017/04/federated-learning-collaborative.html, 2017.
[2] J. Konečný, H. B. McMahan, D. Ramage, and P. Richtárik, "Federated optimization: Distributed machine learning for on-device intelligence," CoRR, vol. abs/1610.02527, 2016.
[3] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, pp. 1273–1282.
[4] R. Shokri and V. Shmatikov, "Privacy-preserving deep learning," in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015, pp. 1310–1321.
[5] F. Seide, H. Fu, J. Droppo, G. Li, and D. Yu, "1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs," in INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association, pp. 1058–1062.
[6] N. Strom, "Scalable distributed DNN training using commodity GPU cloud computing," in INTERSPEECH 2015, 16th Annual Conference of the International Speech Communication Association, pp. 1488–1492.
[7] N. Dryden, T. Moon, S. A. Jacobs, and B. V. Essen, "Communication quantization for data-parallel training of deep neural networks," in 2nd Workshop on Machine Learning in HPC Environments, MLHPC 2016, pp. 1–8.
[8] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, "Federated learning: Strategies for improving communication efficiency," CoRR, vol. abs/1610.05492, 2016.
[9] D. Alistarh, D. Grubic, J. Li, R. Tomioka, and M. Vojnovic, "QSGD: Communication-efficient SGD via gradient quantization and encoding," in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, NIPS 2017, pp. 1709–1720.
[10] M. M. Amiri and D. Gündüz, "Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air," in IEEE International Symposium on Information Theory, ISIT 2019, pp. 1432–1436.
[11] G. Zhu, D. Liu, Y. Du, C. You, J. Zhang, and K. Huang, "Towards an intelligent edge: Wireless communication meets machine learning," CoRR, vol. abs/1809.00343, 2018.
[12] G. Zhu, Y. Wang, and K. Huang, "Low-latency broadband analog aggregation for federated edge learning," CoRR, vol. abs/1812.11494, 2018.
[13] K. Yang, T. Jiang, Y. Shi, and Z. Ding, "Federated learning via over-the-air computation," CoRR, vol. abs/1812.11750, 2018.
[14] E. M. Khorov, A. Kiryanov, A. I. Lyakhov, and G. Bianchi, "A tutorial on IEEE 802.11ax high efficiency WLANs," IEEE Communications Surveys and Tutorials, vol. 21, no. 1, pp. 197–216, 2019.
[15] P. Guide, "Intel 64 and IA-32 architectures software developer's manual," Volume 3B: System Programming Guide, vol. 2, 2011.
[16] R. G. Gallager, "Low-density parity-check codes," IRE Trans. Information Theory, vol. 8, no. 1, pp. 21–28, 1962.
[17] W. W. Peterson and D. T. Brown, "Cyclic codes for error detection," Proceedings of the IRE, vol. 49, no. 1, pp. 228–235, 1961.
[18] F. R. Kschischang, B. J. Frey, and H. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Information Theory, vol. 47, no. 2, pp. 498–519, 2001.
[19] B. Nazer and M. Gastpar, "Computation over multiple-access channels," IEEE Trans. Information Theory, vol. 53, no. 10, pp. 3498–3516, 2007.
[20] S. Wu, L. Kuang, Z. Ni, J. Lu, D. Huang, and Q. Guo, "Low-complexity iterative detection for large-scale multiuser MIMO-OFDM systems using approximate message passing," J. Sel. Topics Signal Processing, vol. 8, no. 5, pp. 902–915, 2014.
[21] L. Liu, C. Yuen, Y. L. Guan, Y. Li, and Y. Su, "Convergence analysis and assurance for Gaussian message passing iterative detector in massive MU-MIMO systems," IEEE Trans. Wireless Communications, vol. 15, no. 9, pp. 6487–6501, 2016.
[22] J. Boutros and G. Caire, "Iterative multiuser joint decoding: Unified framework and asymptotic analysis," IEEE Trans. Information Theory, vol. 48, no. 7, pp. 1772–1773, 2002.
[23] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005.
[24] E. Candes and J. Romberg, "l1-magic: Recovery of sparse signals via convex programming," 2005.
[25] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner et al., "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[26] A. M. Sayeed, "Deconstructing multiantenna fading channels," IEEE Trans. Signal Processing, vol. 50, no. 10, pp. 2563–2579, 2002.