[1] MCMAHAN H B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. Fort Lauderdale, USA, 2017: 1273−1282.
[2] WANG Hongyi, YUROCHKIN M, SUN Yuekai, et al. Federated learning with matched averaging[EB/OL]. (2020−02−25)[2021−03−09] https://arxiv.org/abs/2002.06440.
[3] KOPPARAPU K, LIN E, ZHAO J. FedCD: improving performance in non-IID federated learning[EB/OL]. (2020−07−27)[2021−03−09] https://arxiv.org/abs/2006.09637.
[4] YU Hao, YANG Sen, ZHU Shenghuo. Parallel restarted SGD with faster convergence and less communication: demystifying why model averaging works for deep learning[C]//Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence. Palo Alto, USA, 2019: 5693−5700.
[5] WANG Shiqiang, TUOR T, SALONIDIS T, et al. Adaptive federated learning in resource constrained edge computing systems[J]. IEEE journal on selected areas in communications, 2019, 37(6): 1205−1221.
[6] YU Hao, JIN Rong, YANG Sen. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization[C]//Proceedings of the 36th International Conference on Machine Learning. Long Beach, USA, 2019: 7184−7193.
[7] JEONG E, OH S, KIM H, et al. Communication-efficient on-device machine learning: federated distillation and augmentation under non-IID private data[EB/OL]. (2018−11−28)[2021−03−09] https://arxiv.org/abs/1811.11479.
[8] HUANG Li, YIN Yifeng, FU Zeng, et al. LoAdaBoost: loss-based AdaBoost federated machine learning with reduced computational complexity on IID and non-IID intensive care data[J]. PLoS one, 2020, 15(4): e0230706.
[9] REDDI S, CHARLES Z, ZAHEER M, et al. Adaptive federated optimization[EB/OL]. (2021−09−08)[2021−10−09] https://arxiv.org/abs/2003.00295.
[10] YANG Kai, FAN Tao, CHEN Tianjian, et al. A quasi-Newton method based vertical federated learning framework for logistic regression[EB/OL]. (2019−12−04)[2021−09−08] https://arxiv.org/abs/1912.00513.
[11] DHAKAL S, PRAKASH S, YONA Y, et al. Coded federated learning[C]//2019 IEEE Globecom Workshops (GC Wkshps). Waikoloa, USA, 2019: 1−6.
[12] WANG Cong, YANG Yuanyuan, ZHOU Pengzhan. Towards efficient scheduling of federated mobile devices under computational and statistical heterogeneity[J]. IEEE transactions on parallel and distributed systems, 2021, 32(2): 394−410.
[13] MALINOVSKIY G, KOVALEV D, GASANOV E, et al. From local SGD to local fixed-point methods for federated learning[C]//Proceedings of the 37th International Conference on Machine Learning. New York, USA, 2020: 6692−6701.
[14] HANZELY F, RICHTÁRIK P. Federated learning of a mixture of global and local models[EB/OL]. (2020−02−10)[2021−03−09] https://arxiv.org/abs/2002.05516.
[15] ROTHCHILD D, PANDA A, ULLAH E, et al. FetchSGD: communication-efficient federated learning with sketching[C]//Proceedings of the 37th International Conference on Machine Learning. New York, USA, 2020: 8253−8265.
[16] WANG Jialei, WANG Weiran, SREBRO N. Memory and communication efficient distributed stochastic optimization with minibatch-prox[C]//Proceedings of the 2017 Conference on Learning Theory. New York, USA, 2017: 1882−1919.
[17] LI Tian, HU Shengyuan, BEIRAMI A, et al. Federated multi-task learning for competing constraints[EB/OL]. [2021−03−09] https://openreview.net/forum?id=1ZN5y4yx6T1.
[18] LI Tian, SAHU A, ZAHEER M, et al. Federated optimization in heterogeneous networks[J]. Proceedings of machine learning and systems, 2020, 2: 429−450.
[19] ZHOU Pan, YUAN Xiaotong, XU Huan, et al. Efficient meta learning via minibatch proximal update[EB/OL]. (2019−12−08)[2021−03−09] https://openreview.net/forum?id=B1gSHVrx8S.
[20] PHONG L T, AONO Y, HAYASHI T, et al. Privacy-preserving deep learning via additively homomorphic encryption[J]. IEEE transactions on information forensics and security, 2018, 13(5): 1333−1345.
[21] GO A, BHAYANI R, HUANG Lei. Twitter sentiment classification using distant supervision[J]. CS224N project report, Stanford, 2009, 1(12): 2009.
[22] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278−2324.
[23] COHEN G, AFSHAR S, TAPSON J, et al. EMNIST: extending MNIST to handwritten letters[C]//2017 International Joint Conference on Neural Networks (IJCNN). Anchorage, USA, 2017: 2921−2926.
[24] TUNG K K. Topics in mathematical modeling[M]. Princeton: Princeton University Press, 2007.
[25] BALLES L, HENNIG P. Dissecting Adam: the sign, magnitude and variance of stochastic gradients[C]//Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden, 2018: 404−413.