正在加载图片...
李江昀等:深度神经网络模型压缩综述 ·1237· 521(7553):436 Netorks Learn Syst,2019:1. [2]Krizhevsky A,Sutskever I,Hinton G E.ImageNet classification [19]Guo Y W,Yao A B.Chen Y R.Dynamic network surgery for ef- with deep convolutional neural networks//Adrances in Neural In- ficient DNNs//Adrances in Neural Information Processing Sys- formation Processing Systems.Lake Tahoe,2012:1097 tems.Barcelona,2016:1379 [3]Simonyan K,Zisserman A.Very deep convolutional networks for [20]Jia H P,Xiang X S,Fan D,et al.DropPruning for model com- large-scale image recognition [J/OL].ArXir Preprint(2015-04- pression [J/OL].ArXir Preprint (2018-12-05)[2019-03- 10)[2019-03-22].https:/aniv.org/ahs/1409.1556 22].https://arxiv.org/abs/1812.02035 [4]Szegedy C.Liu W,Jia Y Q,et al.Going deeper with convolutions [21]Li H,Kadav A,Durdanovic I,et al.Pruning filters for efficient /Proceedings of the IEEE Conference on Computer Vision and convnets [J/OL].ArXiv Preprint (2017-03-10)[2019-03- Pattern Recognition.Boston,2015:1 22].https://arxiv.org/abs/1608.08710 [5]He K M,Zhang X Y,Ren S Q,et al.Deep residual learning for [22]Hu H Y.Peng R,Tai Y W,et al.Network trimming:a data- image recognition /Proceedings of the IEEE Conference on Com- driven neuron pruning approach towards efficient deep architec- puter Vision and Pattern Recognition.Washington DC,2016:770 tures[J/0L].arXin preprint(2016-07-12)[2019-03-22]. [6]Huang G,Liu Z,van der Maaten L,et al.Densely connected https://arxiv.org/abs/1607.03250 convolutional networks /Proceedings of the IEEE Conference on [23]Tian Q,Arbel T,Clark J J.Deep LDA-pruned nets for efficient Computer Vision and Pattern Recognition.Hawaii,2017:4700 facial gender classification//Proceedings of IEEE Conference on [7]Le Q V,Ngiam J,Coates A,et al.On optimization methods for Computer Vision and Pattern Recognition Workshops.Hawai, deep learning /Proceedings of the 28th International Conference 2017:10 on International Conference on Machine Learning.Omnipress, [24]Luo J H,Wu JX,Lin W Y.ThiNet:a filter level pruning meth- 2011:265 od for deep neural network compression//Proceedings of the [8]Han Y F,Jiang T H,Ma Y P,et al.Compression of deep neural IEEE International Conference on Computer Vision.Venice, networks.Comput Appl Res,2018,35(10):2894 2017:5058 (韩云飞,蒋同海,马玉鹏,等.深度神经网络的压缩研究。 [25]He Y,Kang G L,Dong X Y,et al.Soft filter pruning for accel- 计算机应用研究,2018,35(10):2894) erating deep convolutional neural networks [J/OL].ArXi Pre- [9]Setiono R,Liu H.Neural-network feature selector.IEEE Trans print(2018-08-21)[2019-03-22].htps:/aniv.org/ahs/ Neural Netcorks,1997,8(3):654 1808.06866 [10]LeCun Y,Denker JS,Solla S A.et al.Optimal brain damage [26]He Y H,Zhang X Y,Sun J.Channel pruning for accelerating /Adrances in Neural Information Processing Systems.Denver, very deep neural networks [J/OL].ArXir Preprint (2017-08- 1989:598 21)[2019-03-22].http5:/arxiv.og/ahs/1707.06168 [11]Hassibi B,Stork D G,Wolff G J.Optimal brain surgeon and [27]Hu Y M,Sun S Y,Li JQ,et al.Multi-loss-aware channel prun- general network pruning /IEEE International Conference on ing of deep networks [J/OL].ArXie Preprint (2019-02-27) Neural Netcorks.San Francisco,1993:293 [2019-03-22].htps://axiv.org/ahs/1902.10364 [12]Hassibi B.Stork DG.Second order derivatives for network prun- [28]Zhuang Z W,Tan M K,Zhuang B H,et al.Discrimination-a- ing:optimal brain surgeon /Adrances in Neural Information Processing Systems.Denver,1993:164 ware channel pruning for deep neural networks [J/OL].ArXir [13]Han S,Pool J,Tran J,et al.Learning both weights and connec- Preprint(2019-01-14)[2019-03-22].https:/axiv.og/ tions for efficient neural network//Advances in Neural Informa- ahs/1810.11809 tion Processing Systems.Montreal,2015:1135 [29]He Y H,Han S.ADC:automated deep compression and accel- [14]Han S,Mao H,Dally W J.Deep compression:compressing eration with reinforeement learning J/OL].ArXir Preprint deep neural networks with pruning,trained quantization and huff- (2019-01-16)[2019-03-22].https:/aniv.org/abs/1802. man coding [J/OL].ArXir Preprint (2016-02-15)[2019-03- 03494v1 22].https://arxiv.org/abs/1510.00149 [30]Appuswamy R,Nayak T,Arthur J,et al.Structured convolution [15]Srinivas S,Subramanya A,Venkatesh Babu R.Training sparse matrices for energy-efficient deep learning [J/OL].ArXir Pre- neural networks//Proceedings of IEEE Conference on Computer pint(2016-06-08)[2019-03-22].https:/aiw.og/abs/ Vision and Pattern Recognition Workshops.Hawaii,2017:138 1606.02407 [16]Anwar S.Hwang K.Sung W.Structured pruning of deep convo- [31]Sindhwani V,Sainath T N,Kumar S.Structured transforms for lutional neural networks.ACM J Emerg Technol Comput Syst, small-footprint deep learning [J/OL].ArXir Preprint (2015-10- 2017,13(3):32 06)[2019-03-22].htps:/aiw.org/abs/1510.01722 [17]Wen W,Wu C P,Wang Y D,et al.Learning structured sparsity [32]Cheng Y.Yu F X,Feris R S,et al.An exploration of parameter in deep neural networks //Adrances in Neural Information Pro- redundancy in deep networks with circulant projections [JOL]. cessing Systems.Barcelona,2016:2074 ArXir Preprint (2015-10-27)[2019-03-22].https://arxiv. [18]Lin S H,Ji RR,Li Y C.et al.Toward compact ConvNets via org/abhs/1502.03436 structure-sparsity regularized filter pruning.IEEE Trans Neural [33]Chen WL,Wilson JT,Tyree S,et al.Compressing neural net-李江昀等: 深度神经网络模型压缩综述 521(7553): 436 [2] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks / / Advances in Neural In鄄 formation Processing Systems. Lake Tahoe, 2012: 1097 [3] Simonyan K, Zisserman A. Very deep convolutional networks for large鄄scale image recognition [J/ OL]. ArXiv Preprint (2015鄄鄄04鄄鄄 10) [2019鄄鄄03鄄鄄22]. https: / / arxiv. org / abs/ 1409. 1556 [4] Szegedy C, Liu W, Jia Y Q, et al. Going deeper with convolutions / / Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston, 2015: 1 [5] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition / / Proceedings of the IEEE Conference on Com鄄 puter Vision and Pattern Recognition. Washington DC, 2016: 770 [6] Huang G, Liu Z, van der Maaten L, et al. Densely connected convolutional networks / / Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Hawaii, 2017: 4700 [7] Le Q V, Ngiam J, Coates A, et al. On optimization methods for deep learning / / Proceedings of the 28th International Conference on International Conference on Machine Learning. Omnipress, 2011: 265 [8] Han Y F, Jiang T H, Ma Y P, et al. Compression of deep neural networks. Comput Appl Res, 2018, 35(10): 2894 (韩云飞, 蒋同海, 马玉鹏, 等. 深度神经网络的压缩研究. 计算机应用研究, 2018, 35(10): 2894) [9] Setiono R, Liu H. Neural鄄network feature selector. IEEE Trans Neural Networks, 1997, 8(3): 654 [10] LeCun Y, Denker J S, Solla S A, et al. Optimal brain damage / / Advances in Neural Information Processing Systems. Denver, 1989: 598 [11] Hassibi B, Stork D G, Wolff G J. Optimal brain surgeon and general network pruning / / IEEE International Conference on Neural Networks. San Francisco, 1993: 293 [12] Hassibi B, Stork D G. Second order derivatives for network prun鄄 ing: optimal brain surgeon / / Advances in Neural Information Processing Systems. Denver, 1993: 164 [13] Han S, Pool J, Tran J, et al. Learning both weights and connec鄄 tions for efficient neural network / / Advances in Neural Informa鄄 tion Processing Systems. Montreal, 2015: 1135 [14] Han S, Mao H, Dally W J. Deep compression: compressing deep neural networks with pruning, trained quantization and huff鄄 man coding [J/ OL]. ArXiv Preprint (2016鄄鄄02鄄鄄15) [2019鄄鄄03鄄鄄 22]. https:/ / arxiv. org / abs/ 1510. 00149 [15] Srinivas S, Subramanya A, Venkatesh Babu R. Training sparse neural networks / / Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops. Hawaii, 2017: 138 [16] Anwar S, Hwang K, Sung W. Structured pruning of deep convo鄄 lutional neural networks. ACM J Emerg Technol Comput Syst, 2017, 13(3): 32 [17] Wen W, Wu C P, Wang Y D, et al. Learning structured sparsity in deep neural networks / / Advances in Neural Information Pro鄄 cessing Systems. Barcelona, 2016: 2074 [18] Lin S H, Ji R R, Li Y C, et al. Toward compact ConvNets via structure鄄sparsity regularized filter pruning. IEEE Trans Neural Networks Learn Syst, 2019: 1. [19] Guo Y W, Yao A B, Chen Y R. Dynamic network surgery for ef鄄 ficient DNNs / / Advances in Neural Information Processing Sys鄄 tems. Barcelona, 2016: 1379 [20] Jia H P, Xiang X S, Fan D, et al. DropPruning for model com鄄 pression [J/ OL]. ArXiv Preprint (2018鄄鄄 12鄄鄄 05 ) [2019鄄鄄 03鄄鄄 22]. https: / / arxiv. org / abs/ 1812. 02035 [21] Li H, Kadav A, Durdanovic I, et al. Pruning filters for efficient convnets [J/ OL]. ArXiv Preprint (2017鄄鄄 03鄄鄄 10) [2019鄄鄄 03鄄鄄 22]. https: / / arxiv. org / abs/ 1608. 08710 [22] Hu H Y, Peng R, Tai Y W, et al. Network trimming: a data鄄 driven neuron pruning approach towards efficient deep architec鄄 tures [J/ OL]. arXiv preprint (2016鄄鄄07鄄鄄12) [2019鄄鄄03鄄鄄22]. https: / / arxiv. org / abs/ 1607. 03250 [23] Tian Q, Arbel T, Clark J J. Deep LDA鄄pruned nets for efficient facial gender classification / / Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops. Hawaii, 2017: 10 [24] Luo J H, Wu J X, Lin W Y. ThiNet: a filter level pruning meth鄄 od for deep neural network compression / / Proceedings of the IEEE International Conference on Computer Vision. Venice, 2017: 5058 [25] He Y, Kang G L, Dong X Y, et al. Soft filter pruning for accel鄄 erating deep convolutional neural networks [ J/ OL]. ArXiv Pre鄄 print (2018鄄鄄08鄄鄄21) [2019鄄鄄03鄄鄄22]. https: / / arxiv. org / abs/ 1808. 06866 [26] He Y H, Zhang X Y, Sun J. Channel pruning for accelerating very deep neural networks [ J/ OL]. ArXiv Preprint (2017鄄鄄 08鄄鄄 21) [2019鄄鄄03鄄鄄22]. https: / / arxiv. org / abs/ 1707. 06168 [27] Hu Y M, Sun S Y, Li J Q, et al. Multi鄄loss鄄aware channel prun鄄 ing of deep networks [ J/ OL]. ArXiv Preprint (2019鄄鄄 02鄄鄄 27) [2019鄄鄄03鄄鄄22]. https: / / arxiv. org / abs/ 1902. 10364 [28] Zhuang Z W, Tan M K, Zhuang B H, et al. Discrimination鄄a鄄 ware channel pruning for deep neural networks [ J/ OL]. ArXiv Preprint (2019鄄鄄 01鄄鄄 14) [2019鄄鄄 03鄄鄄 22 ]. https: / / arxiv. org / abs/ 1810. 11809 [29] He Y H, Han S. ADC: automated deep compression and accel鄄 eration with reinforcement learning [ J/ OL ]. ArXiv Preprint (2019鄄鄄01鄄鄄16) [2019鄄鄄03鄄鄄22]. https: / / arxiv. org / abs/ 1802. 03494v1 [30] Appuswamy R, Nayak T, Arthur J, et al. Structured convolution matrices for energy鄄efficient deep learning [ J/ OL]. ArXiv Pre鄄 print (2016鄄鄄06鄄鄄08) [2019鄄鄄03鄄鄄22]. https: / / arxiv. org / abs/ 1606. 02407 [31] Sindhwani V, Sainath T N, Kumar S. Structured transforms for small鄄footprint deep learning [J/ OL]. ArXiv Preprint (2015鄄鄄10鄄鄄 06) [2019鄄鄄03鄄鄄22]. https: / / arxiv. org / abs/ 1510. 01722 [32] Cheng Y, Yu F X, Feris R S, et al. An exploration of parameter redundancy in deep networks with circulant projections [J/ OL]. ArXiv Preprint (2015鄄鄄10鄄鄄27) [2019鄄鄄03鄄鄄22]. https: / / arxiv. org / abs/ 1502. 03436 [33] Chen W L, Wilson J T, Tyree S, et al. Compressing neural net鄄 ·1237·
<<向上翻页向下翻页>>
©2008-现在 cucdc.com 高等教育资讯网 版权所有