This repository collects papers (mainly from arxiv.org) about model compression, organized into the following topics (a toy sketch of pruning and quantization follows the list):
- Structure;
- Distillation;
- Binarization;
- Quantization;
- Pruning;
- Low Rank.
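As a quick orientation, and not taken from any particular paper below, the sketch that follows illustrates two of these families on a plain NumPy weight matrix: magnitude pruning and uniform 8-bit weight quantization. All shapes, thresholds, and names in it are illustrative placeholders.

```python
# Toy sketch of two compression families listed above: magnitude pruning and
# uniform 8-bit weight quantization. Purely illustrative; not from any specific paper.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)   # stand-in for a dense layer's weights

# Pruning: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Quantization: store weights as int8 plus one float scale, dequantize at inference.
scale = np.abs(W).max() / 127.0
W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale

print("sparsity after pruning:", 1.0 - np.count_nonzero(W_pruned) / W.size)
print("max quantization error:", float(np.abs(W - W_dequant).max()))
```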
Some papers and links are also collected from the awesome resources listed below (a minimal sketch of the distillation loss appears after this list):
- [x][papers]sun254/awesome-model-compression-and-acceleration
- [x][papers&projects]dkozlov/awesome-knowledge-distillation
- [x][papers&reading list&blogs]memoiry/Awesome-model-compression-and-acceleration
- [x][papers]chester256/Model-Compression-Papers
- [x][papers&blogs]cedrickchee/awesome-ml-model-compression
- [x][papers]jnjaby/Model-Compression-Acceleration
- [x][papers&codes]htqin/model-quantization
- [x][papers]mrgloom/Network-Speed-and-Compression
- [others&papers&codes&projects&blogs]guan-yuan/awesome-AutoML-and-Lightweight-Models
- [papers&codes&projects&blogs]handong1587/cnn-compression-acceleration
- [papers&hardware]ZhishengWang/Embedded-Neural-Network
- [papers&hardware]fengbintu/Neural-Networks-on-Silicon
- [others&papers]ljk628/ML-Systems
- [x][papers&codes]juliagusak/model-compression-and-acceleration-progress
- [x][papers&codes]Hyungjun-K1m/Neural-Network-Compression
- [x][papers&codes]he-y/Awesome-Pruning
- [x][papers]lhyfst/knowledge-distillation-papers
- [x][papers&codes]AojunZhou/Efficient-Deep-Learning
- [intro&papers&projects]Tianyu-Hua/ModelCompression
- [intro&papers&codes&blogs]Ewenwan/MVision/CNN/Deep_Compression
- [intro&zhihu&papers]jyhengcoder/Model-Compression
- [/][intro&papers]mapleam/model-compression-and-acceleration-4-DNN
- [x][few papers]tejalal/awesome-deep-model-compression
- [x][papers&2years ago]Xreki/ModelCompression
- [x][Ref]clhne/model-compression-and-acceleration
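Many entries below concern knowledge distillation; as a reference point, here is a minimal NumPy sketch of the temperature-softened soft-target loss (cf. Hinton, Vinyals & Dean, "Distilling the Knowledge in a Neural Network", arXiv:1503.02531, listed under 2015). The logits and temperature are placeholder values, not taken from the paper's experiments.

```python
# Minimal sketch of the soft-target knowledge-distillation loss
# (cf. Hinton et al., "Distilling the Knowledge in a Neural Network", arXiv:1503.02531).
# Logits are random placeholders; in practice they come from a large teacher
# network and a small student network evaluated on the same batch.
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z -= z.max(axis=-1, keepdims=True)            # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between softened teacher and student distributions,
    scaled by T^2 so gradients keep the same magnitude as the hard-label loss."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return -np.mean(np.sum(p_teacher * log_p_student, axis=-1)) * T * T

rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(8, 10))          # batch of 8 examples, 10 classes
student_logits = rng.normal(size=(8, 10))
print("soft-target loss:", distillation_loss(student_logits, teacher_logits))
```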
- 【Pruning】 LeCun Y, Denker J S, Solla S A. Optimal brain damage .[C]//Advances in neural information processing systems. 1990: 598-605.
- 【Distillation】Neural Network Ensembles, L.K. Hansen, P. Salamon, 1990
- Hassibi, Babak, and David G. Stork. Second order derivatives for network pruning: Optimal brain surgeon .[C]Advances in neural information processing systems. 1993.
- J. L. Holi and J. N. Hwang. [Finite precision error analysis of neural network hardware implementations]. In Ijcnn-91- Seattle International Joint Conference on Neural Networks, pages 519–525 vol.1, 1993.
- 【Distillation】Neural Network Ensembles, Cross Validation, and Active Learning, Anders Krogh, Jesper Vedelsby, 1995
- Knowledge Acquisition from Examples Via Multiple Models, Pedro Domingos, 1997
- 【distillation】Combining labeled and unlabeled data with co-training, A. Blum, T. Mitchell, 1998
- 【Distillation】Ensemble Methods in Machine Learning, Thomas G. Dietterich, 2000
- Using A Neural Network to Approximate An Ensemble of Classifiers, Xinchuan Zeng and Tony R. Martinez, 2000
- Suzuki, Kenji, Isao Horiba, and Noboru Sugie. A simple neural network pruning algorithm with application to filter synthesis .[C] Neural Processing Letters 13.1 (2001): 43-53.
- 【Distillation】Model Compression, Rich Caruana, 2006
- 【Quantization】 Jegou, Herve, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search IEEE transactions on pattern analysis and machine intelligence 33.1 (2011): 117-128.
- 【Quantization】Vanhoucke V, Senior A, Mao M Z. Improving the speed of neural networks on CPUs[J]. 2011.
- D. Hammerstrom. [A vlsi architecture for high-performance, low-cost, on-chip learning]. In IJCNN International Joint Conference on Neural Networks, pages 537–544 vol.2, 2012.
- M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013
- 【Distillation】Mathieu M, Henaff M, LeCun Y. Fast training of convolutional networks through ffts[J]. arXiv preprint arXiv:1312.5851, 2013.
【code:Maratyszcza/NNPACK】
- Do Deep Nets Really Need to be Deep?, Lei Jimmy Ba, Rich Caruana, 2013
- K. Hwang and W. Sung. [Fixed-point feedforward deep neural network design using weights +1, 0, and -1]. In 2014 IEEE Workshop on Signal Processing Systems (SiPS), pages 1–6. IEEE, 2014.
- M. Horowitz. 1.1 computing’s energy problem (and what we can do about it). In Solid-State Circuits Conference Digest of Technical Papers, pages 10–14, 2014.
- Y. Chen, N. Sun, O. Temam, T. Luo, S. Liu, S. Zhang, L. He, J.Wang, L. Li, and T. Chen. Dadiannao: A machinelearning supercomputer. In Ieee/acm International Symposium on Microarchitecture, pages 609–622, 2014.
- 【Distillation】Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. Dark knowledge .[C]Presented as the keynote in BayLearn 2 (2014).
- 【Low Rank】Jaderberg, Max, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions .[J] arXiv preprint arXiv:1405.3866 (2014).
- 【Low Rank】Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus .Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation .[J] arXiv preprint arXiv:1404.00736
- 【Low Rank】Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, Jian Sun .Efficient and Accurate Approximations of Nonlinear Convolutional Networks .[J] arXiv preprint arXiv:1411.04229
- 【Distillation】Learning with Pseudo-Ensembles, Philip Bachman, Ouais Alsharif, Doina Precup, 2014
- 【Structure】 Jin J, Dundar A, Culurciello E. Flattened convolutional neural networks for feedforward acceleration .[J]. arXiv preprint arXiv:1412.5474, 2014.
- 【Quantization】Yunchao Gong, Liu Liu, Ming Yang, Lubomir Bourdev .Compressing Deep Convolutional Networks using Vector Quantization .[J] arXiv preprint arXiv:1412.06115
- 【Distillation】Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio .FitNets: Hints for Thin Deep Nets .[J] arXiv preprint arXiv:1412.06550
- 【Low Rank】 Lebedev V, Ganin Y, Rakhuba M, et al. Speeding-up convolutional neural networks using fine-tuned cp-decomposition .[J]. arXiv preprint arXiv:1412.6553, 2014.
【code:vadim-v-lebedev/cp-decomposition; jacobgil/pytorch-tensor-decompositions; medium.com/@keremturgutlu/tensor-decomposition-fast-cnn-in-your-pocket-f03e9b2a6788】
- 【Quantization】Courbariaux M, Bengio Y, David J P. Training deep neural networks with low precision multiplications[J]. arXiv preprint arXiv:1412.7024, 2014.
- 【Hardware】Dally W. High-performance hardware for machine learning[J]. NIPS Tutorial, 2015.
- 【other】Liu B, Wang M, Foroosh H, et al. Sparse convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 806-814.
- Zhang. [Optimizing fpga-based accelerator design for deep convolutional neural networks.] In Proceedings of the 2015 ACM/SIGDA International Symposium on Field- Programmable Gate Arrays, FPGA ’15, 2015.
- M. Courbariaux, Y. Bengio, and J.-P. David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pages 3123–3131, 2015.
- 【System】Lane, Nicholas D., et al. An Early Resource Characterization of Deep Learning on Wearables, Smartphones and Internet-of-Things Devices .[C]Proceedings of the 2015 international workshop on internet of things towards applications. ACM, 2015.
- Han, Song, et al. Learning both weights and connections for efficient neural network .[C] Advances in neural information processing systems. 2015.
- 【Low Rank】 Yang Z, Moczulski M, Denil M, et al. Deep fried convnets .[C]//Proceedings of the IEEE International Conference on Computer Vision. 2015: 1476-1483.
- 【Structure】 He K, Sun J. Convolutional neural networks at constrained time cost .[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 5353-5360.
- 【Quantization】 Courbariaux, Matthieu, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. Advances in neural information processing systems. 2015.
- 【Quantization】Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan .Deep Learning with Limited Numerical Precision .[J] arXiv preprint arXiv:1502.02551
- 【Distillation】Geoffrey Hinton, Oriol Vinyals, Jeff Dean .Distilling the Knowledge in a Neural Network .[J] arXiv preprint arXiv:1503.02531
- Z. Cheng, D. Soudry, Z. Mao, and Z. Lan. Training binary multilayer neural networks for image classification using expectation backpropagation. arXiv preprint arXiv:1503.03562, 2015.
- 【Distillation】Recurrent Neural Network Training with Dark Knowledge Transfer, Zhiyuan Tang, Dong Wang, Zhiyong Zhang, 2015
- 【Low Rank】Xiangyu Zhang, Jianhua Zou, Kaiming He, Jian Sun .Accelerating Very Deep Convolutional Networks for Classification and Detection .[J] arXiv preprint arXiv:1505.06798
- 【Pruning】Song Han, Jeff Pool, John Tran, William J. Dally .Learning both Weights and Connections for Efficient Neural Networks .[J] arXiv preprint arXiv:1506.02626
【code:jack-willturner/DeepCompression-PyTorch】
- 【Distillation】Cross Modal Distillation for Supervision Transfer, Saurabh Gupta, Judy Hoffman, Jitendra Malik, 2015
- Srinivas, Suraj, and R. Venkatesh Babu. Data-free parameter pruning for deep neural networks .[J] arXiv preprint arXiv:1507.06149
- 【Pruning】Song Han, Huizi Mao, William J. Dally .Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding .[J] arXiv preprint arXiv:1510.00149
【code:songhan/Deep-Compression-AlexNet】
- 【Distillation】Distilling Model Knowledge, George Papamakarios, 2015
- Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.
- 【Quantization】Courbariaux M, Bengio Y, David J P. Binaryconnect: Training deep neural networks with binary weights during propagations[C]//Advances in neural information processing systems. 2015: 3123-3131.
- 【Distillation】Unifying distillation and privileged information, David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, Vladimir Vapnik, 2015
- T. Dettmers. 8-bit approximations for parallelism in deep learning. arXiv preprint arXiv:1511.04561, 2015.
- 【Distillation】Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami, 2015
- 【Distillation】Heterogeneous Knowledge Transfer in Video Emotion Recognition, Attribution and Summarization, Baohan Xu, Yanwei Fu, Yu-Gang Jiang, Boyang Li, Leonid Sigal, 2015
- 【other】Judd P, Albericio J, Hetherington T, et al. Reduced-precision strategies for bounded memory in deep neural nets[J]. arXiv preprint arXiv:1511.05236, 2015.
- 【Distillation】Tianqi Chen, Ian Goodfellow, Jonathon Shlens .Net2Net: Accelerating Learning via Knowledge Transfer .[J] arXiv preprint arXiv:1511.05641
- 【Low Rank】Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, Weinan E .Convolutional neural networks with low-rank regularization .[J] arXiv preprint arXiv:1511.06067
- 【Low Rank】Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, Dongjun Shin .Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications .[J] arXiv preprint arXiv:1511.06530
- 【System】Seyyed Salar Latifi Oskouei, Hossein Golestani, Matin Hashemi, Soheil Ghiasi .CNNdroid: GPU-Accelerated Execution of Trained Deep Convolutional Neural Networks on Android .[J] arXiv preprint arXiv:1511.07376
- 【Structure】Amjad Almahairi, Nicolas Ballas, Tim Cooijmans, Yin Zheng, Hugo Larochelle, Aaron Courville .Dynamic Capacity Networks .[J] arXiv preprint arXiv:1511.07838
- 【Quantization】Sungho Shin, Kyuyeon Hwang, Wonyong Sung .Fixed-Point Performance Analysis of Recurrent Neural Networks .[J] arXiv preprint arXiv:1512.01322
- 【Quantization】Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, Jian Cheng .Quantized Convolutional Neural Networks for Mobile Devices .[J] arXiv preprint arXiv:1512.06473
【code:jiaxiang-wu/quantized-cnn】
- 【Distillation】Learning Using Privileged Information: Similarity Control and Knowledge Transfer, Vladimir Vapnik, Rauf Izmailov, 2015
- 【other】Li D, Wang X, Kong D, et al. DeepRebirth: A General Approach for Accelerating Deep Neural Network Execution on Mobile Devices[J]. 2016.
- Luo P, Zhu Z, Liu Z, et al. Face model compression by distilling knowledge from neurons[C]//Thirtieth AAAI Conference on Artificial Intelligence. 2016.
- 【Distillation】Lavin A, Gray S. Fast algorithms for convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 4013-4021.
- 【Distillation】Luo, Ping, et al. MobileID: Face Model Compression by Distilling Knowledge from Neurons Thirtieth AAAI Conference on Artificial Intelligence. 2016.
- Y.Wang, J. Xu, Y. Han, H. Li, and X. Li. Deepburning: automatic generation of fpga-based learning accelerators for the neural network family. In Design Automation Conference, page 110, 2016.
- Y. Guo, A. Yao, and Y. Chen. Dynamic network surgery for efficient dnns. In Advances In Neural Information Processing Systems, pages 1379–1387, 2016.
- W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074–2082, 2016.
- 【Pruning】V. Lebedev and V. Lempitsky. Fast convnets using groupwise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2554– 2564, 2016.
- 【Pruning】Molchanov, Pavlo, et al. Pruning convolutional neural networks for resource efficient transfer learning. arXiv preprint arXiv:1611.06440 3 (2016).
- 【Pruning】 Sun Y, Wang X, Tang X. Sparsifying neural network connections for face recognition .[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 4856-4864.
- 【Pruning】Babaeizadeh, Mohammad, Paris Smaragdis, and Roy H. Campbell. A Simple yet Effective Method to Prune Dense Layers of Neural Networks (2016).
- S. Zhang, Z. Du, L. Zhang, H. Lan, S. Liu, L. Li, Q. Guo, T. Chen, and Y. Chen. Cambricon-x: An accelerator for sparse neural networks. In Ieee/acm International Symposium on Microarchitecture, pages 1–12, 2016.
- S. I. Venieris and C. S. Bouganis. fpgaconvnet: A framework for mapping convolutional neural networks on fpgas. In IEEE International Symposium on Field-Programmable Custom Computing Machines, pages 40–47, 2016.
- S. Liu, Z. Du, J. Tao, D. Han, T. Luo, Y. Xie, Y. Chen, and T. Chen. [Cambricon: An instruction set architecture for neural networks]. SIGARCH Comput. Archit. News, 44(3), June 2016.
- Suda. Throughput-optimized opencl-based fpga accelerator for large-scale convolutional neural networks. In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA ’16, 2016.
- P. Wang and J. Cheng. Accelerating convolutional neural networks for mobile applications. In Proceedings of the 2016 ACM on Multimedia Conference, pages 541–545. ACM, 2016.
- Qiu. Going deeper with embedded fpga platform for convolutional neural network. In Proceedings of the 2016 ACM/SIGDA International Symposium on Field- Programmable Gate Arrays, FPGA ’16, 2016.
- L. Xia, T. Tang, W. Huangfu, M. Cheng, X. Yin, B. Li, Y. Wang, and H. Yang. Switched by input: Power efficient structure for rram-based convolutional neural network. In Design Automation Conference, page 125, 2016.
- M. Alwani, H. Chen, M. Ferdman, and P. A. Milder. Fusedlayer cnn accelerators. In MICRO, 2016.
- K. Kim, J. Kim, J. Yu, J. Seo, J. Lee, and K. Choi. [Dynamic energy-accuracy trade-off using stochastic computing in deep neural networks]. In Design Automation Conference, page 124, 2016.
- J. Zhu, Z. Qian, and C. Y. Tsui. Lradnn: High-throughput and energy-efficient deep neural network accelerator using low rank approximation. In Asia and South Pacific Design Automation Conference, pages 581–586, 2016.
- J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng.Quantized convolutional neural networks for mobile devices. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
- J. Albericio, P. Judd, T. Hetherington, T. Aamodt, N. E. Jerger, and A. Moshovos. Cnvlutin: Ineffectual-neuron-free deep neural network computing. In International Symposium on Computer Architecture, pages 1–13, 2016.
- H. Sharma, J. Park, D. Mahajan, E. Amaro, J. K. Kim, C. Shao, A. Mishra, and H. Esmaeilzadeh. From highlevel deep neural models to fpgas. In Ieee/acm International Symposium on Microarchitecture, pages 1–12, 2016.
- D. Kim, J. Kung, S. Chai, S. Yalamanchili, and S. Mukhopadhyay. Neurocube: A programmable digital neuromorphic architecture with high-density 3d memory. In International Symposium on Computer Architecture, pages 380–392, 2016.
- C. Zhang, D. Wu, J. Sun, G. Sun, G. Luo, and J. Cong.Energy-efficient cnn implementation on a deeply pipelined fpga cluster. In Proceedings of the 2016 International Symposium on Low Power Electronics and Design, ISLPED ’16, 2016.
- C. Zhang, Z. Fang, P. Pan, P. Pan, and J. Cong. Caffeine: towards uniformed representation and acceleration for deep convolutional neural networks. In International Conference on Computer-Aided Design, page 12, 2016.
- 【Quantization】 Lin, Darryl, Sachin Talathi, and Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. International Conference on Machine Learning. 2016.
- 【Binarization】Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio .Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 .[J] arXiv preprint arXiv:1602.02830
【code:itayhubara/BinaryNet.pytorch; itayhubara/BinaryNet.tf】
- 【Binarization】Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi .XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks .[J] arXiv preprint arXiv:1603.05279
【code:allenai/XNOR-Net】
- 【System】Huynh, Loc Nguyen, Rajesh Krishna Balan, and Youngki Lee. DeepSense: A GPU-based deep convolutional neural network framework on commodity mobile devices Proceedings of the 2016 Workshop on Wearable Systems and Applications. ACM, 2016.
- 【System】Lane, Nicholas D., et al. DeepX: A Software Accelerator for Low-Power Deep Learning Inference on Mobile Devices .[C]Proceedings of the 15th International Conference on Information Processing in Sensor Networks. IEEE Press, 2016.
- 【System】Lane, Nicholas D., et al. DXTK: Enabling Resource-efficient Deep Learning on Mobile and Embedded Devices with the DeepX Toolkit .[J]MobiCASE. 2016.
- 【System】Han, Seungyeop, et al. MCDNN: An Approximation-Based Execution Framework for Deep Stream Processing Under Resource Constraints .[C]Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2016.
- 【System】Bhattacharya, Sourav, and Nicholas D. Lane. Sparsification and Separation of Deep Learning Layers for Constrained Resource Inference on Wearables .[C]Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. ACM, 2016.
- M. Kim and P. Smaragdis. Bitwise neural networks. arXiv preprint arXiv:1601.06071, 2016.
- 【System】Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, William J. Dally .EIE: Efficient Inference Engine on Compressed Deep Neural Network .[J] arXiv preprint arXiv:1602.01528
- 【Structure】【SqueezeNet】Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer .SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size .[J] arXiv preprint arXiv:1602.07360
- D. Miyashita, E. H. Lee, and B. Murmann. Convolutional neural networks using logarithmic data representation. arXiv preprint arXiv:1603.01025, 2016.
- 【Distillation】Do deep convolutional nets really need to be deep and convolutional?, Gregor Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Rich Caruana, Abdelrahman Mohamed, Matthai Philipose, Matt Richardson, 2016
- 【Structure】Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Weinberger .Deep Networks with Stochastic Depth .[J] arXiv preprint arXiv:1603.09382
- 【Distillation】Adapting Models to Signal Degradation using Distillation, Jong-Chyi Su, Subhransu Maji,2016
- F. Li, B. Zhang, and B. Liu. Ternary weight networks . arXiv preprint arXiv:1605.04711, 2016.
【code:fengfu-chris/caffe-twns】
- 【Quantization】 Gysel, Philipp. Ristretto: Hardware-oriented approximation of convolutional neural networks. arXiv preprint arXiv:1605.06402 (2016).
- 【Structure】Roi Livni, Daniel Carmon, Amir Globerson .Learning Infinite-Layer Networks: Without the Kernel Trick .[J] arXiv preprint arXiv:1606.05316
- 【Quantization】Zen H, Agiomyrgiannakis Y, Egberts N, et al. Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthesizers for mobile devices[J]. arXiv preprint arXiv:1606.06061, 2016.
- 【Binarization】Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou .DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients .[J] arXiv preprint arXiv:1606.06160
【code:tensorpack/DoReFa-Net】
- 【Distillation】Yoon Kim, Alexander M. Rush .Sequence-Level Knowledge Distillation .[J] arXiv preprint arXiv:1606.07947
- 【Structure】【Pruning】Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, Bryan Catanzaro, William J. Dally .DSD: Dense-Sparse-Dense Training for Deep Neural Networks .[J] arXiv preprint arXiv:1607.04381
【code:songhan.github.io/DSD】
- 【Quantization】Alvarez R, Prabhavalkar R, Bakhtin A. On the efficient representation and execution of deep acoustic models[J]. arXiv preprint arXiv:1607.04683, 2016.
- 【Distillation】Knowledge Distillation for Small-footprint Highway Networks, Liang Lu, Michelle Guo, Steve Renals, 2016
- 【Pruning】Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey .Faster CNNs with Direct Sparse Convolutions and Guided Pruning .[J] arXiv preprint arXiv:1608.01409
【code:IntelLabs/SkimCaffe】
- 【Pruning】Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li .Learning Structured Sparsity in Deep Neural Networks .[J] arXiv preprint arXiv:1608.03665
【code:wenwei202/caffe/tree/scnn】
- 【Structure】 Wang M, Liu B, Foroosh H. Design of efficient convolutional layers using single intra-channel convolution, topological subdivisioning and spatial "bottleneck" structure .[J]. arXiv preprint arXiv:1608.04337, 2016.
- 【Pruning】Yiwen Guo, Anbang Yao, Yurong Chen .Dynamic Network Surgery for Efficient DNNs .[J] arXiv preprint arXiv:1608.04493
【code:yiwenguo/Dynamic-Network-Surgery】
- 【Binarization】Felix Juefei-Xu, Vishnu Naresh Boddeti, Marios Savvides .Local Binary Convolutional Neural Networks .[J] arXiv preprint arXiv:1608.06049
- 【Pruning】Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf .Pruning Filters for Efficient ConvNets .[J] arXiv preprint arXiv:1608.08710
【code:Eric-mingjie/rethinking-network-pruning】
- 【other】Tramèr F, Zhang F, Juels A, et al. Stealing machine learning models via prediction apis[C]//25th {USENIX} Security Symposium ({USENIX} Security 16). 2016: 601-618.
- 【Quantization】Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio .Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations .[J] arXiv preprint arXiv:1609.07061
- 【Quantization】Wu Y, Schuster M, Chen Z, et al. Google's neural machine translation system: Bridging the gap between human and machine translation[J]. arXiv preprint arXiv:1609.08144, 2016.
- 【Structure】【Xception】François Chollet .Xception: Deep Learning with Depthwise Separable Convolutions .[J] arXiv preprint arXiv:1610.02357
- 【Distillation】Bharat Bhusan Sau, Vineeth N. Balasubramanian .Deep Model Compression: Distilling Knowledge from Noisy Teachers .[J] arXiv preprint arXiv:1610.09650
- 【other】Li X, Qin T, Yang J, et al. LightRNN: Memory and computation-efficient recurrent neural networks[C]//Advances in Neural Information Processing Systems. 2016: 4385-4393.
- 【Quantization】Lu Hou, Quanming Yao, James T. Kwok .Loss-aware Binarization of Deep Networks .[J] arXiv preprint arXiv:1611.01600
- 【Low rank】Garipov T, Podoprikhin D, Novikov A, et al. Ultimate tensorization: compressing convolutional and fc layers alike[J]. arXiv preprint arXiv:1611.03214, 2016.
【code:timgaripov/TensorNet-TF;Bihaqo/TensorNet】
- 【Pruning】Tien-Ju Yang, Yu-Hsin Chen, Vivienne Sze .Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning .[J] arXiv preprint arXiv:1611.05128
- 【Pruning】Aghasi A, Abdi A, Nguyen N, et al. Net-trim: Convex pruning of deep neural networks with performance guarantee[C]//Advances in Neural Information Processing Systems. 2017: 3177-3186.
【code:DNNToolBox/Net-Trim-v1】
- 【Quantization】Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang .The ZipML Framework for Training Models with End-to-End Low Precision: The Cans, the Cannots, and a Little Bit of Deep Learning .[J] arXiv preprint arXiv:1611.05402
- 【Structure】Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He .Aggregated Residual Transformations for Deep Neural Networks .[J] arXiv preprint arXiv:1611.05431
- A. Ren, Z. Li, C. Ding, Q. Qiu, Y. Wang, J. Li, X. Qian, and B. Yuan. Sc-dcnn: Highly-scalable deep convolutional neural network using stochastic computing. arXiv preprint arXiv:1611.05939, 2016.
- 【Pruning】Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz .Pruning Convolutional Neural Networks for Resource Efficient Inference .[J] arXiv preprint arXiv:1611.06440
【code:Tencent/PocketFlow#channel-pruning】
- 【other】Bagherinezhad H, Rastegari M, Farhadi A. Lcnn: Lookup-based convolutional neural network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 7120-7129.
- 【Quantization】Hong S, Roh B, Kim K H, et al. Pvanet: Lightweight deep neural networks for real-time object detection[J]. arXiv preprint arXiv:1611.08588, 2016.
【code:sanghoon/pva-faster-rcnn】
- 【Quantization】Qinyao He, He Wen, Shuchang Zhou, Yuxin Wu, Cong Yao, Xinyu Zhou, Yuheng Zou .Effective Quantization Methods for Recurrent Neural Networks .[J] arXiv preprint arXiv:1611.10176
- 【Distillation】 Shen J, Vesdapunt N, Boddeti V N, et al. In teacher we trust: Learning compressed models for pedestrian detection .[J]. arXiv preprint arXiv:1612.00478, 2016.
- 【Pruning】Song Han, Junlong Kang, Huizi Mao, Yiming Hu, Xin Li, Yubin Li, Dongliang Xie, Hong Luo, Song Yao, Yu Wang, Huazhong Yang, William J. Dally .ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA .[J] arXiv preprint arXiv:1612.00694
- 【Structure】Bichen Wu, Alvin Wan, Forrest Iandola, Peter H. Jin, Kurt Keutzer .SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving .[J] arXiv preprint arXiv:1612.01051
- 【Quantization】Zhu C, Han S, Mao H, et al. Trained ternary quantization[J]. arXiv preprint arXiv:1612.01064, 2016.
【code:czhu95/ternarynet】
- 【Quantization】Yoojin Choi, Mostafa El-Khamy, Jungwon Lee .Towards the Limit of Network Quantization .[J] arXiv preprint arXiv:1612.01543
- 【other】Joulin A, Grave E, Bojanowski P, et al. Fasttext. zip: Compressing text classification models[J]. arXiv preprint arXiv:1612.03651, 2016.
- 【Distillation】Sergey Zagoruyko, Nikos Komodakis .Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer .[J] arXiv preprint arXiv:1612.03928
- 【other】Hashemi S, Anthony N, Tann H, et al. Understanding the impact of precision quantization on the accuracy and energy of neural networks[C]//Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017. IEEE, 2017: 1474-1479.
- Umuroglu. Finn: A framework for fast, scalable binarized neural network inference.[J] arXiv preprint arXiv:1612.07119
- 【Thesis】Han S, Dally B. Efficient methods and hardware for deep learning[J]. University Lecture, 2017.
- 【Quantization】Cai Z, He X, Sun J, et al. Deep learning with low precision by half-wave gaussian quantization[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5918-5926.
【code:zhaoweicai/hwgq】
- 【Quantization】Yonekawa H, Nakahara H. On-chip memory based binarized convolutional deep neural network applying batch normalization free technique on an fpga[C]//2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2017: 98-105.
- 【Quantization】Liang S, Yin S, Liu L, et al. FP-BNN: Binarized neural network on FPGA[J]. Neurocomputing, 2018, 275: 1072-1086.
- 【Quantization】Li Z, Ni B, Zhang W, et al. Performance guaranteed network acceleration via high-order residual quantization[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2584-2592.
- 【Quantization】Hu Q, Wang P, Cheng J. From hashing to cnns: Training binary weight networks via hashing[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Quantization】Lin X, Zhao C, Pan W. Towards accurate binary convolutional neural network[C]//Advances in Neural Information Processing Systems. 2017: 345-353.
- 【Binarization】Yang H, Fritzsche M, Bartz C, et al. Bmxnet: An open-source binary neural network implementation based on mxnet[C]//Proceedings of the 25th ACM international conference on Multimedia. ACM, 2017: 1209-1212.
【code:hpi-xnor/BMXNet】
- 【Structure】【ResNeXt】S. Xie, R. Girshick, P. Dollar, Z. Tu, and K. He. ResNeXt: Aggregated residual transformations for deep neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
- 【other】Huang L, Liu X, Liu Y, et al. Centered weight normalization in accelerating training of deep neural networks[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2803-2811.
- Park E, Ahn J, Yoo S. Weighted-entropy-based quantization for deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5456-5464.
【code:EunhyeokPark/script_for_WQ】
- Guo Y, Yao A, Zhao H, et al. [Network sketching: Exploiting binary structure in deep cnns](http://openaccess.thecvf.com/content_cvpr_2017/papers/Guo_Network_Sketching_Exploiting_CVPR_2017_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5955-5963.
- D. Nguyen, D. Kim, and J. Lee. [Double MAC: doubling the performance of convolutional neural networks on modern fpgas]. In Design, Automation and Test in Europe Conference and Exhibition, DATE 2017, Lausanne, Switzerland, March 27-31, 2017, pages 890–893, 2017.
- Edward. [Lognet: Energy-efficient neural networks using logarithmic computation]. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5900–5904, 2017.
- H. Sim and J. Lee. [A new stochastic computing multiplier with application to deep convolutional neural networks]. In Design Automation Conference, page 29, 2017.
- H. Yang. Time: A training-in-memory architecture for memristor-based deep neural networks. In Design Automation Conference, page 26, 2017.
- L. Chen, J. Li, Y. Chen, Q. Deng, J. Shen, X. Liang, and L. Jiang.[ Accelerator-friendly neural-network training: Learning variations and defects in rram crossbar]. In Design, Automation and Test in Europe Conference and Exhibition, pages 19–24, 2017.
- M. Gao, J. Pu, X. Yang, M. Horowitz, and C. Kozyrakis. Tetris: Scalable and efficient neural network acceleration with 3d memory. In International Conference on Architectural Support for Programming Languages and Operating Systems, pages 751–764, 2017.
- M. Price, J. Glass, and A. P. Chandrakasan. [14.4 a scalable speech recognizer with deep-neural-network acoustic models and voice-activated power gating.] In Solid-State Circuits Conference, pages 244–245, 2017.
- N. P. Jouppi. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture, ISCA ’17, 2017.
- Nurvitadhi. Can fpgas beat gpus in accelerating nextgeneration deep neural networks? In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA ’17, 2017.
- P. Wang and J. Cheng. Fixed-point factorized networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
- S. Venkataramani, A. Ranjan, S. Banerjee, D. Das, S. Avancha, A. Jagannathan, A. Durg, D. Nagaraj, B. Kaul, P. Dubey, and A. Raghunathan. [Scaledeep: A scalable compute architecture for learning and evaluating deep networks]. SIGARCH Comput. Archit. News, 45(2):13–26, June 2017.
- Wei. Automated systolic array architecture synthesis for high throughput cnn inference on fpgas. In Proceedings of the 54th Annual Design Automation Conference 2017, DAC ’17, 2017.
- W. Tang, G. Hua, and L. Wang. How to train a compact binary neural network with high accuracy? In AAAI, pages 2625–2631, 2017.
- Xiao. Exploring heterogeneous algorithms for accelerating deep convolutional neural networks on fpgas. In Proceedings of the 54th Annual Design Automation Conference 2017, DAC ’17, 2017.
- Y. H. Chen, T. Krishna, J. S. Emer, and V. Sze. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits, 52(1):127–138, 2017.
- Y. Ma, M. Kim, Y. Cao, S. Vrudhula, J. S. Seo, Y. Ma, M. Kim, Y. Cao, S. Vrudhula, and J. S. Seo. [End-to-end scalable fpga accelerator for deep residual networks.] In IEEE International Symposium on Circuits and Systems, pages 1–4, 2017.
- Y. Ma, Y. Cao, S. Vrudhula, and J. S. Seo. [An automatic rtl compiler for high-throughput fpga implementation of diverse deep convolutional neural networks]. In International Conference on Field Programmable Logic and Applications, pages 1–8, 2017.
- Y. Ma, Y. Cao, S. Vrudhula, and J.-s. Seo. Optimizing loop operation and dataflow in fpga acceleration of deep convolutional neural networks. In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA ’17, 2017.
- Y. Shen, M. Ferdman, and P. Milder. Escher: A cnn accelerator with flexible buffering to minimize off-chip transfer. In IEEE International Symposium on Field-Programmable Custom Computing Machines, 2017.
- Zhao. Accelerating binarized convolutional neural networks with software-programmable fpgas. In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA ’17, 2017.
- 【Distillation】Yim J, Joo D, Bae J, et al. A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4133-4141.
- 【Distillation】Chen G, Choi W, Yu X, et al. Learning Efficient Object Detection Models with Knowledge Distillation[C]//Advances in Neural Information Processing Systems. 2017: 742-751.
- 【Distillation】Local Affine Approximators for Improving Knowledge Transfer, Suraj Srinivas and Francois Fleuret, 2017
- 【Distillation】Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model, Jiasen Lu1, Anitha Kannan, Jianwei Yang, Devi Parikh, Dhruv Batra 2017
- 【Distillation】Data-Free Knowledge Distillation For Deep Neural Networks, Raphael Gontijo Lopes, Stefano Fenu, 2017
- 【Miscellaneous】Wang Y, Xu C, Xu C, et al. Beyond Filters: Compact Feature Map for Portable Deep Model[C]//Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017: 3703-3711.
- 【Miscellaneous】Kim J, Park Y, Kim G, et al. SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization[C]//Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017: 1866-1874.
- 【Pruning】He Y, Zhang X, Sun J. Channel pruning for accelerating very deep neural networks[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 1389-1397.
- 【Pruning】J. H. Ko, B. Mudassar, T. Na, and S. Mukhopadhyay. Design of an energy-efficient accelerator for training of convolutional neural networks using frequency-domain computation. In Design Automation Conference, page 59, 2017.
- 【Pruning】Neklyudov K, Molchanov D, Ashukha A, et al. Structured bayesian pruning via log-normal multiplicative noise[C]//Advances in Neural Information Processing Systems. 2017: 6775-6784.
【code:necludov/group-sparsity-sbp】
- 【Pruning】Mallya A, Lazebnik S. Packnet: Adding multiple tasks to a single network by iterative pruning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7765-7773.
【code:arunmallya/packnet】
- 【Pruning】Vieira T, Eisner J. Learning to Prune: Exploring the Frontier of Fast and Accurate Parsing[J]. Transactions of the Association for Computational Linguistics, 2017, 5: 263-278.
- 【Pruning】Yu J, Lukefahr A, Palframan D, et al. Scalpel: Customizing DNN Pruning to the Underlying Hardware Parallelism[C]//ACM SIGARCH Computer Architecture News. ACM, 2017, 45(2): 548-560.
- 【Pruning】Lin J, Rao Y, Lu J, et al. Runtime neural pruning[C]//Advances in Neural Information Processing Systems. 2017: 2181-2191.
- 【System】Mathur A, Lane N D, Bhattacharya S, et al.DeepEye: Resource Efficient Local Execution of Multiple Deep Vision Models using Wearable Commodity Hardware[C]//Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2017: 68-81.
- 【System】Huynh L N, Lee Y, Balan R K. DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision Applications[C]//Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2017: 82-95.
- Anwar S, Hwang K, Sung W. Structured pruning of deep convolutional neural networks.[J]. ACM Journal on Emerging Technologies in Computing Systems (JETC), 2017, 13(3): 32.
- He Y, Zhang X, Sun J. Channel pruning for accelerating very deep neural networks[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 1389-1397.
- Aghasi A, Abdi A, Nguyen N, et al. Net-trim: Convex pruning of deep neural networks with performance guarantee.[C]//Advances in Neural Information Processing Systems. 2017: 3177-3186.
- 【Quantization】Meng W, Gu Z, Zhang M, et al. Two-bit networks for deep learning on resource-constrained embedded devices[J]. arXiv preprint arXiv:1701.00485, 2017.
- 【other】Ghosh T. Quicknet: Maximizing efficiency and efficacy in deep architectures[J]. arXiv preprint arXiv:1701.02291, 2017.
- 【Pruning】Wolfe N, Sharma A, Drude L, et al. The incredible shrinking neural network: New perspectives on learning representations through the lens of pruning[J]. 2016.
- 【other】Chandrasekhar V, Lin J, Liao Q, et al. Compression of deep neural networks for image instance retrieval[C]//2017 Data Compression Conference (DCC). IEEE, 2017: 300-309.
- 【other】Molchanov D, Ashukha A, Vetrov D. Variational dropout sparsifies deep neural networks[C]//Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017: 2498-2507.
【code:ars-ashuha/variational-dropout-sparsifies-dnn】
- 【Decomposition】Astrid M, Lee S I. Cp-decomposition with tensor power method for convolutional neural networks compression[C]//2017 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2017: 115-118.
- 【Quantization】Zhaowei Cai, Xiaodong He, Jian Sun, Nuno Vasconcelos .Deep Learning with Low Precision by Half-wave Gaussian Quantization .[J] arXiv preprint arXiv:1702.00953
- 【Quantization】 Zhou, Aojun, et al. Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044 (2017).
- 【Pruning】Karen Ullrich, Edward Meeds, Max Welling .Soft Weight-Sharing for Neural Network Compression .[J] arXiv preprint arXiv:1702.04008
- 【Pruning】Changpinyo S, Sandler M, Zhmoginov A. The power of sparsity in convolutional neural networks[J]. arXiv preprint arXiv:1702.06257, 2017.
- 【Quantization】Shin S, Boo Y, Sung W. Fixed-point optimization of deep neural networks with adaptive step size retraining[C]//2017 IEEE International conference on acoustics, speech and signal processing (ICASSP). IEEE, 2017: 1203-1207.
- 【Quantization】Graham B. Low-precision batch-normalized activations[J]. arXiv preprint arXiv:1702.08231, 2017.
- 【Pruning】Li S, Park J, Tang P T P. Enabling sparse winograd convolution by native pruning[J]. arXiv preprint arXiv:1702.08597, 2017.
- 【other】Boulch A. Sharesnet: reducing residual network parameter number by sharing weights[J]. arXiv preprint arXiv:1702.08782, 2017.
- 【Distillation】Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results, Antti Tarvainen, Harri Valpola, 2017
- 【Distillation】Learning from Noisy Labels with Distillation, Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, Li-Jia Li, 2017
- 【Survey】Sze V, Chen Y H, Yang T J, et al. Efficient processing of deep neural networks: A tutorial and survey[J]. Proceedings of the IEEE, 2017, 105(12): 2295-2329.
- 【Structure】Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li .Coordinating Filters for Faster Deep Neural Networks .[J] arXiv preprint arXiv:1703.09746
- 【Structure】【MobileNet】Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam .MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications .[J] arXiv preprint arXiv:1704.04861
- 【Structure】Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, Xiaoou Tang .Residual Attention Network for Image Classification .[J] arXiv preprint arXiv:1704.06904
- 【Pruning】Liu W, Wen Y, Yu Z, et al. Sphereface: Deep hypersphere embedding for face recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 212-220.
【code:isthatyoung/Sphereface-prune】
- 【Quantization】Mellempudi N, Kundu A, Mudigere D, et al. Ternary neural networks with fine-grained quantization[J]. arXiv preprint arXiv:1705.01462, 2017.
- 【Structure】Louizos C, Ullrich K, Welling M. Bayesian compression for deep learning[C]//Advances in Neural Information Processing Systems. 2017: 3288-3298.
- H. Tann, S. Hashemi, I. Bahar, and S. Reda. Hardwaresoftware codesign of accurate, multiplier-free deep neural networks. arXiv preprint arXiv:1705.04288, 2017.
- 【Pruning】Dong X, Chen S, Pan S. Learning to prune deep neural networks via layer-wise optimal brain surgeon[C]//Advances in Neural Information Processing Systems. 2017: 4857-4867.
【code:csyhhu/L-OBS】
- H. Mao, S. Han, J. Pool, W. Li, X. Liu, Y. Wang, and W. J. Dally. Exploring the regularity of sparse structure in convolutional neural networks. arXiv preprint arXiv:1705.08922, 2017.
- 【System】Qingqing Cao, Niranjan Balasubramanian, Aruna Balasubramanian .MobiRNN: Efficient Recurrent Neural Network Execution on Mobile GPU .[J] arXiv preprint arXiv:1706.00878
- 【Quantization】Denis A. Gudovskiy, Luca Rigazio .ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks .[J] arXiv preprint arXiv:1706.02393
- 【Structure】Zhe Li, Xiaoyu Wang, Xutao Lv, Tianbao Yang .SEP-Nets: Small and Effective Pattern Networks .[J] arXiv preprint arXiv:1706.03912
- Zhang Y, Xiang T, Hospedales T M, et al. Deep mutual learning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 4320-4328.
- 【Structure】【ShuffleNet】Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun .ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices .[J] arXiv preprint arXiv:1707.01083
- 【Survey】Miguel Á. Carreira-Perpiñán .Model compression as constrained optimization, with application to neural nets. Part I: general framework .[J] arXiv preprint arXiv:1707.01209
- 【Pruning】Zehao Huang, Naiyan Wang .Data-Driven Sparse Structure Selection for Deep Neural Networks .[J] arXiv preprint arXiv:1707.01213
【code:TuSimple/sparse-structure-selection】
- 【Distillation】Zehao Huang, Naiyan Wang .Like What You Like: Knowledge Distill via Neuron Selectivity Transfer .[J] arXiv preprint arXiv:1707.01219
- 【Distillation】Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang .DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer .[J] arXiv preprint arXiv:1707.01220
- 【Structure】Ting Zhang, Guo-Jun Qi, Bin Xiao, Jingdong Wang. Interleaved Group Convolutions for Deep Neural Networks.[J] arXiv preprint arXiv:1707.02725
- 【Survey】Miguel Á. Carreira-Perpiñán, Yerlan Idelbayev .Model compression as constrained optimization, with application to neural nets. Part II: quantization .[J] arXiv preprint arXiv:1707.04319
- 【Binarization】Jeng-Hau Lin, Tianwei Xing, Ritchie Zhao, Zhiru Zhang, Mani Srivastava, Zhuowen Tu, Rajesh K. Gupta .Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration .[J] arXiv preprint arXiv:1707.04693
- 【Pruning】Yihui He, Xiangyu Zhang, Jian Sun .Channel Pruning for Accelerating Very Deep Neural Networks .[J] arXiv preprint arXiv:1707.06168
【code:yihui-he/channel-pruning;Eric-mingjie/rethinking-network-pruning】
- 【Structure】【Pruning】Jian-Hao Luo, Jianxin Wu, Weiyao Lin .ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression .[J] arXiv preprint arXiv:1707.06342
【code:Roll920/ThiNet;Eric-mingjie/rethinking-network-pruning】
- 【Structure】Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le .Learning Transferable Architectures for Scalable Image Recognition .[J] arXiv preprint arXiv:1707.07012
- 【other】Delmas A, Sharify S, Judd P, et al. Tartan: Accelerating fully-connected and convolutional layers in deep learning networks by exploiting numerical precision variability[J]. arXiv preprint arXiv:1707.09068, 2017.
- 【Pruning】Frederick Tung, Srikanth Muralidharan, Greg Mori .Fine-Pruning: Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization .[J] arXiv preprint arXiv:1707.09102
- 【Quantization】Leng C, Dou Z, Li H, et al. Extremely low bit neural network: Squeeze the last bit out with admm[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Distillation】Rocket Launching: A Universal and Efficient Framework for Training Well-performing Light Net, Zihao Liu, Qi Liu, Tao Liu, Yanzhi Wang, Wujie Wen, 2017
- A. Parashar, M. Rhu, A. Mukkara, A. Puglielli, R. Venkatesan, B. Khailany, J. Emer, S. W. Keckler, and W. J. Dally. Scnn: An accelerator for compressed-sparse convolutional neural networks. arXiv preprint arXiv:1708.04485, 2017.
- 【Structure】Dawei Li, Xiaolong Wang, Deguang Kong .DeepRebirth: Accelerating Deep Neural Network Execution on Mobile Devices .[J] arXiv preprint arXiv:1708.04728
- 【Distillation】Revisiting knowledge transfer for training object class detectors, Jasper Uijlings, Stefan Popov, Vittorio Ferrari, 2017
- 【Pruning】Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang .Learning Efficient Convolutional Networks through Network Slimming .[J] arXiv preprint arXiv:1708.06519
【code:Eric-mingjie/network-slimming】
- 【Distillation】Zheng Xu, Yen-Chang Hsu, Jiawei Huang .Learning Loss for Knowledge Distillation with Conditional Adversarial Networks .[J] arXiv preprint arXiv:1709.00513
- 【other】Masana M, van de Weijer J, Herranz L, et al. Domain-adaptive deep network compression[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 4289-4297.
- Mishra A, Nurvitadhi E, Cook J J, et al. WRPN: wide reduced-precision networks[J]. arXiv preprint arXiv:1709.01134, 2017.
- 【Distillation】Chong Wang, Xipeng Lan, Yangang Zhang .Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification .[J] arXiv preprint arXiv:1709.02929
- 【Structure】Mohammad Javad Shafiee, Brendan Chywl, Francis Li, Alexander Wong .Fast YOLO: A Fast You Only Look Once System for Real-time Embedded Object Detection in Video .[J] arXiv preprint arXiv:1709.05943
- 【Train】Ashok A, Rhinehart N, Beainy F, et al. N2n learning: Network to network compression via policy gradient reinforcement learning[J]. arXiv preprint arXiv:1709.06030, 2017.
- 【Pruning】Michael Zhu, Suyog Gupta .To prune, or not to prune: exploring the efficacy of pruning for model compression .[J] arXiv preprint arXiv:1710.01878
- 【Distillation】Raphael Gontijo Lopes, Stefano Fenu, Thad Starner .Data-Free Knowledge Distillation for Deep Neural Networks .[J] arXiv preprint arXiv:1710.07535
- 【Survey】Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang .A Survey of Model Compression and Acceleration for Deep Neural Networks .[J] arXiv preprint arXiv:1710.09282
- 【Distillation】Zhi Zhang, Guanghan Ning, Zhihai He .Knowledge Projection for Deep Neural Networks .[J] arXiv preprint arXiv:1710.09505
- 【Structure】Mohammad Ghasemzadeh, Mohammad Samragh, Farinaz Koushanfar .ReBNet: Residual Binarized Neural Network .[J] arXiv preprint arXiv:1711.01243
- 【Distillation】Elliot J. Crowley, Gavin Gray, Amos Storkey .Moonshine: Distilling with Cheap Convolutions .[J] arXiv preprint arXiv:1711.02613
- 【Quantization】Reagen B, Gupta U, Adolf R, et al. Weightless: Lossy weight encoding for deep neural network compression[J]. arXiv preprint arXiv:1711.04686, 2017.
- 【Distillation】Mishra A, Marr D. Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy[J]. arXiv preprint arXiv:1711.05852, 2017.
- 【Pruning】Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I. Morariu, Xintong Han, Mingfei Gao, Ching-Yung Lin, Larry S. Davis .NISP: Pruning Networks using Neuron Importance Score Propagation .[J] arXiv preprint arXiv:1711.05908
- 【Pruning】Ariel Gordon, Elad Eban, Ofir Nachum, Bo Chen, Tien-Ju Yang, Edward Choi .MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks .[J] arXiv preprint arXiv:1711.06798
【code:google-research/morph-net】
- 【System】Stylianos I. Venieris, Christos-Savvas Bouganis .fpgaConvNet: A Toolflow for Mapping Diverse Convolutional Neural Networks on Embedded FPGAs .[J] arXiv preprint arXiv:1711.08740
- 【Structure】Gao Huang, Shichen Liu, Laurens van der Maaten, Kilian Q. Weinberger .CondenseNet: An Efficient DenseNet using Learned Group Convolutions .[J] arXiv preprint arXiv:1711.09224
- 【Quantization】Zhou Y, Moosavi-Dezfooli S M, Cheung N M, et al. Adaptive quantization for deep neural network[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- Learning Sparse Neural Networks through L0 Regularization .[J] arXiv preprint arXiv:1711.01312
- 【Train】Lin Y, Han S, Mao H, et al. Deep gradient compression: Reducing the communication bandwidth for distributed training[J]. arXiv preprint arXiv:1712.01887, 2017.
- 【Low Rank】Andrew Tulloch, Yangqing Jia .High performance ultra-low-precision convolutions on mobile devices .[J] arXiv preprint arXiv:1712.02427
- 【Train】Chen C Y, Choi J, Brand D, et al. Adacomp: Adaptive residual gradient compression for data-parallel distributed training[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Structure】【StrassenNets】Tschannen M, Khanna A, Anandkumar A. StrassenNets: Deep learning with a multiplication budget[J]. arXiv preprint arXiv:1712.03942, 2017.
- 【Distillation】Data Distillation: Towards Omni-Supervised Learning, Ilija Radosavovic, Piotr Dollár, Ross Girshick, Georgia Gkioxari, Kaiming He, 2017
- 【Decomposition】Ye J, Wang L, Li G, et al. Learning compact recurrent neural networks with block-term tensor decomposition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9378-9387.
- 【Quantization】Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, Dmitry Kalenichenko .Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference .[J] arXiv preprint arXiv:1712.05877
- 【Thesis】Algorithms for speeding up convolutional neural networks
- 【Survey】Cheng Y, Wang D, Zhou P, et al. Model compression and acceleration for deep neural networks: The principles, progress, and challenges[J]. IEEE Signal Processing Magazine, 2018, 35(1): 126-136.
- 【Structure】【ChannelNets】Gao H, Wang Z, Ji S. Channelnets: Compact and efficient convolutional neural networks via channel-wise convolutions[C]//Advances in Neural Information Processing Systems. 2018: 5197-5205.
【code:HongyangGao/ChannelNets】
- 【Structure】【Shift】Wu B, Wan A, Yue X, et al. Shift: A zero flop, zero parameter alternative to spatial convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9127-9135.
【code:alvinwan/shiftresnet-cifar】
- 【Quantization】Son S, Nah S, Mu Lee K. Clustering convolutional kernels to compress deep neural networks[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 216-232.
- 【Quantization】Yu T, Yuan J, Fang C, et al. Product quantization network for fast image retrieval[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 186-201.
- 【Quantization】Achterhold J, Koehler J M, Schmeink A, et al.Variational network quantization[J]. 2018.
- 【Quantization】Martinez J, Zakhmi S, Hoos H H, et al. LSQ++: Lower running time and higher recall in multi-codebook quantization[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 491-506.
- 【Quantization】Zhou A, Yao A, Wang K, et al. Explicit loss-error-aware quantization for low-bit deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9426-9435.
- 【Quantization】Nakanishi K, Maeda S, Miyato T, et al. Adaptive Sample-space & Adaptive Probability coding: a neural-network based approach for compression[J]. 2018.
- 【Quantization】Mukherjee L, Ravi S N, Peng J, et al. A Biresolution Spectral Framework for Product Quantization[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 3329-3338.
- 【Quantization】Zhou Y, Moosavi-Dezfooli S M, Cheung N M, et al. Adaptive quantization for deep neural network[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Quantization】Li D, Wang X, Kong D. Deeprebirth: Accelerating deep neural network execution on mobile devices[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Quantization】Wang P, Hu Q, Zhang Y, et al. Two-step quantization for low-bit neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 4376-4384.
- 【Quantization】Leng C, Dou Z, Li H, et al. Extremely low bit neural network: Squeeze the last bit out with admm[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【other】Chen T, Lin L, Zuo W, et al. Learning a wavelet-like auto-encoder to accelerate deep neural networks[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【other】He X, Cheng J. Learning Compression from Limited Unlabeled Data[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 752-769.
- 【other】Dai L, Tang L, Xie Y, et al. Designing by training: acceleration neural network for fast high-dimensional convolution[C]//Advances in Neural Information Processing Systems. 2018: 1466-1475.
- 【other】Cicek S, Fawzi A, Soatto S. Saas: Speed as a supervisor for semi-supervised learning[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 149-163.
- 【other】Chen W, Wilson J, Tyree S, et al. Compressing convolutional neural networks in the frequency domain[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016: 1475-1484.
- 【other】Lim C H. An efficient pruning algorithm for robust isotonic regression[C]//Advances in Neural Information Processing Systems. 2018: 219-229.
- 【Train】Wang N, Choi J, Brand D, et al. Training deep neural networks with 8-bit floating point numbers[C]//Advances in neural information processing systems. 2018: 7675-7684.
- 【Pruning】Fu Y, Zhang S, Li D, et al. pruning in training: learning and ranking sparse connections in deep convolutional networks[J]. 2018.
- 【Pruning】Gao W, Wei Y, Li Q, et al. pruning with hints: an efficient framework for model acceleration[J]. 2018.
- 【Pruning】Tung F, Mori G. Clip-q: Deep network compression learning by in-parallel pruning-quantization[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7873-7882.
- 【Pruning】Zeng W, Urtasun R. MLPrune: Multi-Layer Pruning for Automated Neural Network Compression[J]. 2018.
- 【Pruning】Carreira-Perpinán M A, Idelbayev Y. “Learning-Compression” Algorithms for Neural Net Pruning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8532-8541.
- 【Pruning】Cumulative Saliency based Globally Balanced Filter Pruning For Efficient Convolutional Neural Networks
- 【Pruning】Yeh C K, Yen I E H, Chen H Y, et al. Deep-Trim: revisiting L1 regularization for connection pruning of deep network[J]. 2018.
- 【Pruning】Zhang X, Zhu Z, Xu Z. Learning to Search Efficient DenseNet with Layer-wise Pruning[J]. 2018.
- 【Pruning】Liu Z, Xu J, Peng X, et al. Frequency-domain dynamic pruning for convolutional neural networks[C]//Advances in Neural Information Processing Systems. 2018: 1043-1053.
- 【Pruning】Evci U, Le Roux N, Castro P, et al. Mean Replacement Pruning[J]. 2018.
- 【Pruning】Svoboda F, Liberis E, Lane N D. In search of theoretically grounded pruning[J]. 2018.
- 【Pruning】He, Yihui, et al. AMC: AutoML for model compression and acceleration on mobile devices[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018.
- 【Pruning】Chen C, Tung F, Vedula N, et al. Constraint-aware deep neural network compression[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 400-415.
【code:ChanganVR/ConstraintAwareCompression】
- 【Pruning】Yang Q, Wen W, Wang Z, et al. Integral Pruning on Activations and Weights for Efficient Neural Networks[J]. 2018.
- 【Distillation】Self-supervised knowledge distillation using singular value decomposition, Seung Hyun Lee, Dae Ha Kim, Byung Cheol Song, 2018
- 【Distillation】Park S U, Kwak N. FEED: Feature-level Ensemble Effect for knowledge Distillation[J]. 2018.
- 【Distillation】Tao Z, Xia Q, Li Q. Knowledge distill via learning neuron manifold[J]. 2018.
- 【Distillation】Exploration by Random Network Distillation
- 【Distillation】Wang X, Zhang R, Sun Y, et al. KDGAN: knowledge distillation with generative adversarial networks[C]//Advances in Neural Information Processing Systems. 2018: 775-786.
- Sakr C, Choi J, Wang Z, et al. True Gradient-Based Training of Deep Binary Activated Neural Networks Via Continuous Binarization[C]//2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018: 2346-2350.
- Su J, Li J, Bhattacharjee B, et al. Exploiting Invariant Structures for Compression in Neural Networks[J]. 2018.
- Suau X, Zappella L, Apostoloff N. Network compression using correlation analysis of layer responses[J]. 2018.
- Architecture Compression
- Darlow L N, Storkey A. What Information Does a ResNet Compress?[J]. 2018.
- Shwartz-Ziv R, Painsky A, Tishby N. Representation compression and generalization in deep neural networks[J]. 2018.
- Zhuang B, Shen C, Tan M, et al. Towards effective low-bitwidth convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7920-7928.
【code:nowgood/QuantizeCNNModel】 - Liu Z, Wu B, Luo W, et al. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 722-737.
【code:liuzechun/Bi-Real-net】 - G. Li, F. Li, T. Zhao, and J. Cheng. [Block convolution: Towards memory-efficeint inference of large-scale cnns on fpga]. In Design Automation and Test in Europe, 2018.
- Schindler G, Roth W, Pernkopf F, et al. N-Ary Quantization for CNN Model Compression and Inference Acceleration[J]. 2018.
- P. Wang, Q. Hu, Z. Fang, C. Zhao, and J. Cheng. Deepsearch: A fast image search framework for mobile devices. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 14, 2018.
- Q. Hu, P. Wang, and J. Cheng. From hashing to cnns: Training binary weight networks via hashing. In AAAI, February 2018.
- J. Cheng, J. Wu, C. Leng, Y. Wang, and Q. Hu. [Quantized cnn: A unified approach to accelerate and compress convolutional networks]. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), PP:1–14.
- 【Structure】【MobileNetV2】Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen .MobileNetV2: Inverted Residuals and Linear Bottlenecks .[J] arXiv preprint arXiv:1801.04381
【code:tensorflow/models】 - Theodore S. Nowak, Jason J. Corso .Deep Net Triage: Analyzing the Importance of Network Layers via Structural Compression .[J] arXiv preprint arXiv:1801.04651.
- Lucas Theis, Iryna Korshunova, Alykhan Tejani, Ferenc Huszár .Faster gaze prediction with dense networks and Fisher pruning .[J] arXiv preprint arXiv:1801.05787.
- Brian Trippe, Richard Turner .Overpruning in Variational Bayesian Neural Networks .[J] arXiv preprint arXiv:1801.06230.
- Qiangui Huang, Kevin Zhou, Suya You, Ulrich Neumann .Learning to Prune Filters in Convolutional Neural Networks .[J] arXiv preprint arXiv:1801.07365.
- 【Distillation】Sarah Tan, Rich Caruana, Giles Hooker, Albert Gordo .Transparent Model Distillation .[J] arXiv preprint arXiv:1801.08640.
- Congzheng Song, Yiming Sun .Kernel Distillation for Gaussian Processes .[J] arXiv preprint arXiv:1801.10273.
- Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran .Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks .[J] arXiv preprint arXiv:1801.10447.
- Jialiang Guo, Bo Zhou, Xiangrui Zeng, Zachary Freyberg, Min Xu .Model compression for faster structural separation of macromolecules captured by Cellular Electron Cryo-Tomography .[J] arXiv preprint arXiv:1801.10597.
- 【Pruning】Jianbo Ye, Xin Lu, Zhe Lin, James Z. Wang .Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers .[J] arXiv preprint arXiv:1802.00124.
【code:jack-willturner/batchnorm-pruning】 - 【Quantization】Chen Xu, Jianqiang Yao, Zhouchen Lin, Wenwu Ou, Yuanbin Cao, Zhirong Wang, Hongbin Zha .Alternating Multi-bit Quantization for Recurrent Neural Networks .[J] arXiv preprint arXiv:1802.00150.
- Yixing Li, Fengbo Ren .Build a Compact Binary Neural Network through Bit-level Sensitivity and Data Pruning .[J] arXiv preprint arXiv:1802.00904.
- 【Survey】Jian Cheng, Peisong Wang, Gang Li, Qinghao Hu, Hanqing Lu .Recent Advances in Efficient Computation of Deep Convolutional Neural Networks .[J] arXiv preprint arXiv:1802.00939
- Yoojin Choi, Mostafa El-Khamy, Jungwon Lee .Universal Deep Neural Network Compression .[J] arXiv preprint arXiv:1802.02271.
- Md Zahangir Alom, Adam T Moody, Naoya Maruyama, Brian C Van Essen, Tarek M. Taha .Effective Quantization Approaches for Recurrent Neural Networks .[J] arXiv preprint arXiv:1802.02615.
- 【Distillation】Efficient Neural Architecture Search via Parameters Sharing, Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, Jeff Dean, 2018
- 【Pruning】Yihui He, Song Han .ADC: Automated Deep Compression and Acceleration with Reinforcement Learning .[J] arXiv preprint arXiv:1802.03494.
【code:Tencent/PocketFlow#channel-pruning;mit-han-lab/amc-release;mit-han-lab/amc-compressed-models】 - 【Quantization】Yukun Ding, Jinglan Liu, Yiyu Shi .On the Universal Approximability of Quantized ReLU Neural Networks .[J] arXiv preprint arXiv:1802.03646
- 【Structured】Qin Z, Zhang Z, Chen X, et al. Fd-mobilenet: Improved mobilenet with a fast downsampling strategy[C]//2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018: 1363-1367.
- Jeff Zhang, Kartheek Rangineni, Zahra Ghodsi, Siddharth Garg .ThUnderVolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy Efficient Deep Neural Network Accelerators .[J] arXiv preprint arXiv:1802.03806.
- Jeff Zhang, Tianyu Gu, Kanad Basu, Siddharth Garg .Analyzing and Mitigating the Impact of Permanent Faults on a Systolic Array Based Neural Network Accelerator .[J] arXiv preprint arXiv:1802.04657.
- 【Quantization】Wu S, Li G, Chen F, et al.Training and Inference with Integers in Deep Neural Networks .[J] arXiv preprint arXiv:1802.04680
【code:boluoweifenda/WAGE】 - Luiz M Franca-Neto .Field-Programmable Deep Neural Network (DNN) Learning and Inference accelerator: a concept .[J] arXiv preprint arXiv:1802.04899.
- 【other】Jia Z, Lin S, Qi C R, et al. Exploring hidden dimensions in parallelizing convolutional neural networks[J]. arXiv preprint arXiv:1802.04924, 2018.
- 【other】Jangho Kim, SeoungUK Park, Nojun Kwak .Paraphrasing Complex Network: Network Compression via Factor Transfer .[J] arXiv preprint arXiv:1802.04977.
- Qi Liu, Tao Liu, Zihao Liu, Yanzhi Wang, Yier Jin, Wujie Wen .Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks .[J] arXiv preprint arXiv:1802.05193.
- 【other】Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang .Stronger generalization bounds for deep nets via a compression approach .[J] arXiv preprint arXiv:1802.05296.
- 【Quantization】Antonio Polino, Razvan Pascanu, Dan Alistarh .Model compression via distillation and quantization .[J] arXiv preprint arXiv:1802.05668.
【code:antspy/quantized_distillation】 - Tianyun Zhang, Shaokai Ye, Yipeng Zhang, Yanzhi Wang, Makan Fardad .Systematic Weight Pruning of DNNs using Alternating Direction Method of Multipliers .[J] arXiv preprint arXiv:1802.05747.
- 【other】Arora S, Cohen N, Hazan E. On the optimization of deep networks: Implicit acceleration by overparameterization[J]. arXiv preprint arXiv:1802.06509, 2018.
- Jiangyan Yi, Jianhua Tao, Zhengqi Wen, Bin Liu .Distilling Knowledge Using Parallel Data for Far-field Speech Recognition .[J] arXiv preprint arXiv:1802.06941.
- Matthew Sotoudeh, Sara S. Baghsorkhi .DeepThin: A Self-Compressing Library for Deep Neural Networks .[J] arXiv preprint arXiv:1802.06944.
- Ming Yu, Zhaoran Wang, Varun Gupta, Mladen Kolar .Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach .[J] arXiv preprint arXiv:1802.06967.
- Babajide O. Ayinde, Jacek M. Zurada .Building Efficient ConvNets using Redundant Feature Pruning .[J] arXiv preprint arXiv:1802.07653.
- 【Binarization】McDonnell M D. Training wide residual networks for deployment using a single bit for each weight[J]. arXiv preprint arXiv:1802.08530, 2018.
【code:szagoruyko/binary-wide-resnet】 - 【Quantization】Lu Hou, James T. Kwok .Loss-aware Weight Quantization of Deep Networks .[J] arXiv preprint arXiv:1802.08635.
- 【Low rank】Wenqi Wang, Yifan Sun, Brian Eriksson, Wenlin Wang, Vaneet Aggarwal .Wide Compression: Tensor Ring Nets .[J] arXiv preprint arXiv:1802.09052.
- Jinglan Liu, Jiaxin Zhang, Yukun Ding, Xiaowei Xu, Meng Jiang, Yiyu Shi .PBGen: Partial Binarization of Deconvolution-Based Generators for Edge Intelligence .[J] arXiv preprint arXiv:1802.09153.
- 【Other】Bin Dai, Chen Zhu, David Wipf .Compressing Neural Networks using the Variational Information Bottleneck .[J] arXiv preprint arXiv:1802.10399.
- Andros Tjandra, Sakriani Sakti, Satoshi Nakamura .Tensor Decomposition for Compressing Recurrent Neural Network .[J] arXiv preprint arXiv:1802.10410.
- Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning .[J] arXiv preprint arXiv:1803.00184.
- 【Quantization】Deep Neural Network Compression with Single and Multiple Level Quantization .[J] arXiv preprint arXiv:1803.03289.
- 【Pruning】The Lottery Ticket Hypothesis: Training Pruned Neural Networks .[J] arXiv preprint arXiv:1803.03635.
【code:google-research/lottery-ticket-hypothesis】 - 【Distillation】Interpreting Deep Classifier by Visual Distillation of Dark Knowledge .[J] arXiv preprint arXiv:1803.04042.
- FeTa: A DCA Pruning Algorithm with Generalization Error Guarantees .[J] arXiv preprint arXiv:1803.04239.
- 【Distillation】Multimodal Recurrent Neural Networks with Information Transfer Layers for Indoor Scene Labeling, Abrar H. Abdulnabi, Bing Shuai, Zhen Zuo, Lap-Pui Chau, Gang Wang, 2018
- 【Quantization】Quantization of Fully Convolutional Networks for Accurate Biomedical Image Segmentation .[J] arXiv preprint arXiv:1803.04907.
- 【Distillation】Defensive Collaborative Multi-task Training - Defending against Adversarial Attack towards Deep Neural Networks, Derek Wang, Chaoran Li, Sheng Wen, Yang Xiang, Wanlei Zhou, Surya Nepal, 2018
- Dong Wang, Lei Zhou, Xueni Zhang, Xiao Bai, Jun Zhou .Exploring Linear Relationship in Feature Map Subspace for ConvNets Compression .[J] arXiv preprint arXiv:1803.05729.
- 【Distillation】Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples, Zihao Liu, Qi Liu, Tao Liu, Yanzhi Wang, Wujie Wen, 2018
- 【Distillation】Deep Co-Training for Semi-Supervised Image Recognition, Siyuan Qiao, Wei Shen, Zhishuai Zhang, Bo Wang, Alan Yuille, 2018
- Shuo Wang, Zhe Li, Caiwen Ding, Bo Yuan, Yanzhi Wang, Qinru Qiu, Yun Liang .C-LSTM: Enabling Efficient LSTM using Structured Compression Techniques on FPGAs .[J] arXiv preprint arXiv:1803.06305.
- 【Train】Tang H, Gan S, Zhang C, et al. Communication compression for decentralized training[C]//Advances in Neural Information Processing Systems. 2018: 7652-7662.
- Qing Tian, Tal Arbel, James J. Clark .Fisher Pruning of Deep Nets for Facial Trait Classification .[J] arXiv preprint arXiv:1803.08134.
- Tao Sheng, Chen Feng, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, Mickey Aleksic .A Quantization-Friendly Separable Convolution for MobileNets .[J] arXiv preprint arXiv:1803.08607.
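For context on why separable convolutions (as in the MobileNet-oriented entry above) shrink models, a back-of-the-envelope parameter count in Python; the layer sizes are made-up examples, not taken from any paper listed here:

```python
# Rough parameter comparison between a standard 3x3 convolution and a
# depthwise-separable one (depthwise 3x3 + pointwise 1x1). Illustrative only.

def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) + 1x1 pointwise conv."""
    return c_in * k * k + c_in * c_out

if __name__ == "__main__":
    c_in, c_out, k = 128, 256, 3          # hypothetical layer sizes
    std = conv_params(c_in, c_out, k)
    sep = dw_separable_params(c_in, c_out, k)
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
    # For 128 -> 256 channels with 3x3 kernels this gives roughly an 8-9x
    # reduction, close to the theoretical 1/c_out + 1/k^2 factor.
```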
- Maksym Kholiavchenko .Iterative Low-Rank Approximation for CNN Compression .[J] arXiv preprint arXiv:1803.08995.
- 【Distillation】Zheng Hui, Xiumei Wang, Xinbo Gao .Fast and Accurate Single Image Super-Resolution via Information Distillation Network .[J] arXiv preprint arXiv:1803.09454.
- Jongwon Choi, Hyung Jin Chang, Tobias Fischer, Sangdoo Yun, Kyuewang Lee, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi .Context-aware Deep Feature Compression for High-speed Visual Tracking .[J] arXiv preprint arXiv:1803.10537.
- 【Structure】Gholami A, Kwon K, Wu B, et al. Squeezenext: Hardware-aware neural network design[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018: 1638-1647.
- Vasileios Belagiannis, Azade Farshad, Fabio Galasso .Adversarial Network Compression .[J] arXiv preprint arXiv:1803.10750.
- Ameya Prabhu, Vishal Batchu, Sri Aurobindo Munagala, Rohit Gajawada, Anoop Namboodiri .Distribution-Aware Binarization of Neural Networks for Sketch Recognition .[J] arXiv preprint arXiv:1804.02941.
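As a generic reference for the binarization entries in this list (not the distribution-aware scheme of the entry above), a minimal sketch of XNOR-Net-style 1-bit weights with a per-filter scaling factor; shapes and data are placeholders:

```python
import numpy as np

# Approximate W by alpha * sign(W), with alpha = mean(|W|) per output filter.

def binarize_filters(weights):
    """weights: (num_filters, ...) float array -> ({-1,+1} array, per-filter scales)."""
    flat = weights.reshape(weights.shape[0], -1)
    alpha = np.abs(flat).mean(axis=1)               # per-filter scaling factor
    signs = np.where(weights >= 0, 1.0, -1.0)       # 1-bit weights
    return signs, alpha

def reconstruct(signs, alpha):
    """Dequantized approximation alpha * sign(W) used in the forward pass."""
    return signs * alpha.reshape(-1, *([1] * (signs.ndim - 1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(16, 3, 3, 3)).astype(np.float32)   # hypothetical conv weights
    b, a = binarize_filters(w)
    err = np.linalg.norm(w - reconstruct(b, a)) / np.linalg.norm(w)
    print(f"relative reconstruction error: {err:.3f}")
```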
- 【Distillation】Large scale distributed neural network training through online distillation, Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, Geoffrey E. Hinton, 2018
- 【Pruning】Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, Yanzhi Wang .A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers .[J] arXiv preprint arXiv:1804.03294.
【code:KaiqiZhang/admm-pruning】 - Guanglu Song, Yu Liu, Ming Jiang, Yujie Wang, Junjie Yan, Biao Leng .Beyond Trade-off: Accelerate FCN-based Face Detector with Higher Accuracy .[J] arXiv preprint arXiv:1804.05197.
- Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus .Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds .[J] arXiv preprint arXiv:1804.05345.
- Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, Peter Orbanz .Compressibility and Generalization in Large-Scale Deep Learning .[J] arXiv preprint arXiv:1804.05862.
- 【Structured】Xie G, Wang J, Zhang T, et al. Interleaved structured sparse convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8847-8856.
- Xu J, Nie Y, Wang P, et al. Training a Binary Weight Object Detector by Knowledge Transfer for Autonomous Driving[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 2379-2384.
- 【Quantization】Eunhyeok Park, Sungjoo Yoo, Peter Vajda .Value-aware Quantization for Training and Inference of Neural Networks .[J] arXiv preprint arXiv:1804.07802.
- Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, Jiawei Han .Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling .[J] arXiv preprint arXiv:1804.07827.
- Huan Wang, Qiming Zhang, Yuehai Wang, Roland Hu .Structured Deep Neural Network Pruning by Varying Regularization Parameters .[J] arXiv preprint arXiv:1804.09461.
- Takashi Shinozaki .Competitive Learning Enriches Learning Representation and Accelerates the Fine-tuning of CNNs .[J] arXiv preprint arXiv:1804.09859.
- Hyeong-Ju Kang .Accelerator-Aware Pruning for Convolutional Neural Networks .[J] arXiv preprint arXiv:1804.09862.
- Chenrui Zhang, Yuxin Peng .Better and Faster: Knowledge Transfer from Multiple Self-supervised Learning Tasks via Graph Distillation for Video Classification .[J] arXiv preprint arXiv:1804.10069.
- Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, Alex M. Bronstein, Avi Mendelson .UNIQ: Uniform Noise Injection for the Quantization of Neural Networks .[J] arXiv preprint arXiv:1804.10969.
- Xuemeng Song, Fuli Feng, Xianjing Han, Xin Yang, Wei Liu, Liqiang Nie .Neural Compatibility Modeling with Attentive Knowledge Distillation .[J] arXiv preprint arXiv:1805.00313.
- Baohua Sun, Lin Yang, Patrick Dong, Wenhan Zhang, Jason Dong, Charles Young .Ultra Power-Efficient CNN Domain Specific Accelerator with 9.3TOPS/Watt for Mobile and Embedded Applications .[J] arXiv preprint arXiv:1805.00361.
- Biao Zhang, Deyi Xiong, Jinsong Su .Accelerating Neural Transformer via an Average Attention Network .[J] arXiv preprint arXiv:1805.00631.
- Brian Bartoldson, Adrian Barbu, Gordon Erlebacher .Enhancing the Regularization Effect of Weight Pruning in Artificial Neural Networks .[J] arXiv preprint arXiv:1805.01930.
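For the weight-pruning entries around here, a minimal sketch of plain magnitude pruning (drop the smallest-|w| fraction and keep a binary mask); this is the common baseline such papers build on, not any specific method above, and the tensor sizes are arbitrary:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return (pruned weights, mask) with `sparsity` fraction of entries set to zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]          # k-th smallest magnitude
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask                           # mask is reused during fine-tuning

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(512, 512)).astype(np.float32)    # hypothetical dense layer
    pruned, mask = magnitude_prune(w, sparsity=0.9)
    print(f"kept {mask.mean():.2%} of weights")           # roughly 10% remain
```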
- 【Quantization】Yi Wei, Xinyu Pan, Hongwei Qin, Wanli Ouyang, Junjie Yan .Quantization Mimic: Towards Very Tiny CNN for Object Detection .[J] arXiv preprint arXiv:1805.02152.
- Fuqiang Liu, C. Liu .Towards Accurate and High-Speed Spiking Neuromorphic Systems with Data Quantization-Aware Deep Networks .[J] arXiv preprint arXiv:1805.03054.
- 【Distillation】Dan Xu, Wanli Ouyang, Xiaogang Wang, Nicu Sebe .PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing .[J] arXiv preprint arXiv:1805.04409.
- 【Distillation】Born Again Neural Networks, Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar, 2018
- 【Distillation】Byeongho Heo, Minsik Lee, Sangdoo Yun, Jin Young Choi .Knowledge Distillation with Adversarial Samples Supporting Decision Boundary .[J] arXiv preprint arXiv:1805.05532.
- Chenglin Yang, Lingxi Xie, Siyuan Qiao, Alan Yuille .Knowledge Distillation in Generations: More Tolerant Teachers Educate Better Students .[J] arXiv preprint arXiv:1805.05551.
- 【Quantization】Choi J, Wang Z, Venkataramani S, et al. Pact: Parameterized clipping activation for quantized neural networks[J]. arXiv preprint arXiv:1805.06085, 2018.
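A forward-pass-only sketch of PACT-style clipped activation quantization, assuming a fixed clipping level `alpha` (PACT learns alpha with a straight-through estimator, which is omitted here):

```python
import numpy as np

def pact_quantize(x, alpha, bits):
    y = np.clip(x, 0.0, alpha)              # equals 0.5 * (|x| - |x - alpha| + alpha)
    levels = 2 ** bits - 1
    scale = alpha / levels
    return np.round(y / scale) * scale      # uniform k-bit grid on [0, alpha]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    act = rng.normal(loc=1.0, scale=2.0, size=1000)       # hypothetical pre-activations
    q = pact_quantize(act, alpha=6.0, bits=4)
    print("unique levels used:", np.unique(q).size)       # at most 2^4 = 16
```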
- Aupendu Kar, Sri Phani Krishna Karri, Nirmalya Ghosh, Ramanathan Sethuraman, Debdoot Sheet .Fully Convolutional Model for Variable Bit Length and Lossy High Density Compression of Mammograms .[J] arXiv preprint arXiv:1805.06909.
- Silvia L. Pintea, Yue Liu, Jan C. van Gemert .Recurrent knowledge distillation .[J] arXiv preprint arXiv:1805.07170.
- Thorsten Laude, Yannick Richter, Jörn Ostermann .Neural Network Compression using Transform Coding and Clustering .[J] arXiv preprint arXiv:1805.07258.
- Yoojin Choi, Mostafa El-Khamy, Jungwon Lee .Compression of Deep Convolutional Neural Networks under Joint Sparsity Constraints .[J] arXiv preprint arXiv:1805.08303.
- Panagiotis G. Mousouliotis, Loukas P. Petrou .SqueezeJet: High-level Synthesis Accelerator Design for Deep Convolutional Neural Networks .[J] arXiv preprint arXiv:1805.08695.
- 【Quantization】Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek .Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication .[J] arXiv preprint arXiv:1805.08768.
- 【Pruning】Jian-Hao Luo, Jianxin Wu .AutoPruner: An End-to-End Trainable Filter Pruning Method for Efficient Deep Model Inference .[J] arXiv preprint arXiv:1805.08941.
- Yi Yang, Andy Chen, Xiaoming Chen, Jiang Ji, Zhenyang Chen, Yan Dai .Deploy Large-Scale Deep Neural Networks in Resource Constrained IoT Devices with Local Quantization Region .[J] arXiv preprint arXiv:1805.09473.
- Jiahao Su, Jingling Li, Bobby Bhattacharjee, Furong Huang .Tensorized Spectrum Preserving Compression for Neural Networks .[J] arXiv preprint arXiv:1805.10352.
- Josh Fromm, Shwetak Patel, Matthai Philipose .Heterogeneous Bitwidth Binarization in Convolutional Neural Networks .[J] arXiv preprint arXiv:1805.10368.
- 【other】Zhou P, Feng J. Understanding generalization and optimization performance of deep CNNs[J]. arXiv preprint arXiv:1805.10767, 2018.
- Krzysztof Wróbel, Marcin Pietroń, Maciej Wielgosz, Michał Karwatowski, Kazimierz Wiatr .Convolutional neural network compression for natural language processing .[J] arXiv preprint arXiv:1805.10796.
- François Plesse, Alexandru Ginsca, Bertrand Delezoide, Françoise Prêteux .Visual Relationship Detection Based on Guided Proposals and Semantic Knowledge Distillation .[J] arXiv preprint arXiv:1805.10802.
- 【Train】Banner R, Hubara I, Hoffer E, et al. Scalable methods for 8-bit training of neural networks[C]//Advances in Neural Information Processing Systems. 2018: 5145-5153.
- Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, Ting Liu .Distilling Knowledge for Search-based Structured Prediction .[J] arXiv preprint arXiv:1805.11224.
- Dongsoo Lee, Byeongwook Kim .Retraining-Based Iterative Weight Quantization for Deep Neural Networks .[J] arXiv preprint arXiv:1805.11233.
- Yiming Hu, Siyang Sun, Jianquan Li, Xingang Wang, Qingyi Gu .A novel channel pruning method for deep neural network compression .[J] arXiv preprint arXiv:1805.11394.
- Xiaoliang Dai, Hongxu Yin, Niraj K. Jha .Grow and Prune Compact, Fast, and Accurate LSTMs .[J] arXiv preprint arXiv:1805.11797.
- Lazar Supic, Rawan Naous, Ranko Sredojevic, Aleksandra Faust, Vladimir Stojanovic .MPDCompress - Matrix Permutation Decomposition Algorithm for Deep Neural Network Compression .[J] arXiv preprint arXiv:1805.12085.
- 【Pruning】Weizhe Hua, Christopher De Sa, Zhiru Zhang, G. Edward Suh .Channel Gating Neural Networks .[J] arXiv preprint arXiv:1805.12549.
- Jie Zhang, Xiaolong Wang, Dawei Li, Yalin Wang .Dynamically Hierarchy Revolution: DirNet for Compressing Recurrent Neural Network on Mobile Devices .[J] arXiv preprint arXiv:1806.01248.
- Jianzhong Sheng, Chuanbo Chen, Chenchen Fu, Chun Jason Xue .EasyConvPooling: Random Pooling with Easy Convolution for Accelerating Training and Testing .[J] arXiv preprint arXiv:1806.01729.
- Yang H, Zhu Y, Liu J. Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking[J]. arXiv preprint arXiv:1806.04321, 2018.
- 【Distillation】Xu Lan, Xiatian Zhu, Shaogang Gong .Knowledge Distillation by On-the-Fly Native Ensemble .[J] arXiv preprint arXiv:1806.04606.
- Yijun Bian, Yijun Wang, Yaqiang Yao, Huanhuan Chen .Ensemble Pruning based on Objection Maximization with a General Distributed Framework .[J] arXiv preprint arXiv:1806.04899.
- Huiyuan Zhuo, Xuelin Qian, Yanwei Fu, Heng Yang, Xiangyang Xue .SCSP: Spectral Clustering Filter Pruning with Soft Self-adaption Manners .[J] arXiv preprint arXiv:1806.05320.
- Yibo Yang, Nicholas Ruozzi, Vibhav Gogate .Scalable Neural Network Compression and Pruning Using Hard Clustering and L1 Regularization .[J] arXiv preprint arXiv:1806.05355.
- Kohei Yamamoto, Kurato Maeno .PCAS: Pruning Channels with Attention Statistics .[J] arXiv preprint arXiv:1806.05382.
- Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, Farinaz Koushanfar, Tajana Rosing .RAPIDNN: In-Memory Deep Neural Network Acceleration Framework .[J] arXiv preprint arXiv:1806.05794.
- 【Structure】Xingyu Liu, Jeff Pool, Song Han, William J. Dally .Efficient Sparse-Winograd Convolutional Neural Networks .[J] arXiv preprint arXiv:1802.06367
- 【Structure】Sun K, Li M, Liu D, et al. Igcv3: Interleaved low-rank group convolutions for efficient deep neural networks[J]. arXiv preprint arXiv:1806.00178, 2018.
【code:homles11/IGCV3】 - Alireza Aghasi, Afshin Abdi, Justin Romberg .Fast Convex Pruning of Deep Neural Networks .[J] arXiv preprint arXiv:1806.06457.
- Maximilian Golub, Guy Lemieux, Mieszko Lis .DropBack: Continuous Pruning During Training .[J] arXiv preprint arXiv:1806.06949.
- Zhu S, Dong X, Su H. Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 4923-4932.
【code:XinDongol/BENN-PyTorch】 - 【Quantization】Krishnamoorthi R. Quantizing deep convolutional networks for efficient inference: A whitepaper[J]. arXiv preprint arXiv:1806.08342, 2018.
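In the spirit of the quantization whitepaper above, a minimal post-training affine (asymmetric) quantize/dequantize sketch with per-tensor min/max calibration; the 8-bit setting and the tensor below are illustrative only:

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)        # keep 0 exactly representable
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)       # hypothetical weight tensor
    q, s, z = quantize_affine(w)
    err = np.abs(w - dequantize_affine(q, s, z)).max()
    print(f"scale={s:.5f}, zero_point={z}, max abs error={err:.5f}")
```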
- 【Quantization】Junru Wu, Yue Wang, Zhenyu Wu, Zhangyang Wang, Ashok Veeraraghavan, Yingyan Lin .Deep $k$-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions .[J] arXiv preprint arXiv:1806.09228.
- Behzad Salami, Osman Unsal, Adrian Cristal .On the Resilience of RTL NN Accelerators: Fault Characterization and Mitigation .[J] arXiv preprint arXiv:1806.09679.
- 【other】Wang K C, Vicol P, Lucas J, et al. Adversarial distillation of bayesian neural network posteriors[J]. arXiv preprint arXiv:1806.10317, 2018.
- 【Quantization】Julian Faraone, Nicholas Fraser, Michaela Blott, Philip H.W. Leong .SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks .[J] arXiv preprint arXiv:1807.00301.
【code:julianfaraone/SYQ】 - Amogh Agrawal, Akhilesh Jaiswal, Bing Han, Gopalakrishnan Srinivasan, Kaushik Roy .Xcel-RAM: Accelerating Binary Neural Networks in High-Throughput SRAM Compute Arrays .[J] arXiv preprint arXiv:1807.00343.
- Jeff Zhang, Siddharth Garg .FATE: Fast and Accurate Timing Error Prediction Framework for Low Power DNN Accelerator Design .[J] arXiv preprint arXiv:1807.00480.
- Ekta Gujral, Ravdeep Pasricha, Tianxiong Yang, Evangelos E. Papalexakis .OCTen: Online Compression-based Tensor Decomposition .[J] arXiv preprint arXiv:1807.01350.
- Hamed Hakkak .Auto Deep Compression by Reinforcement Learning Based Actor-Critic Structure .[J] arXiv preprint arXiv:1807.02886.
- Salaheddin Alakkari, John Dingliana .An Acceleration Scheme for Memory Limited, Streaming PCA .[J] arXiv preprint arXiv:1807.06530.
- Seung Hyun Lee, Dae Ha Kim, Byung Cheol Song .Self-supervised Knowledge Distillation Using Singular Value Decomposition .[J] arXiv preprint arXiv:1807.06819.
- Grant P. Strimel, Kanthashree Mysore Sathyendra, Stanislav Peshterliev .Statistical Model Compression for Small-Footprint Natural Language Understanding .[J] arXiv preprint arXiv:1807.07520.
- 【Binarization】He Z, Gong B, Fan D. Optimize deep convolutional neural network with ternarized weights and high accuracy[C]//2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019: 913-921.
- Qianru Zhang, Meng Zhang, Tinghuan Chen, Zhifei Sun, Yuzhe Ma, Bei Yu .Recent Advances in Convolutional Neural Network Acceleration .[J] arXiv preprint arXiv:1807.08596.
- Armin Mehrabian, Yousra Al-Kabani, Volker J Sorger, Tarek El-Ghazawi .PCNNA: A Photonic Convolutional Neural Network Accelerator .[J] arXiv preprint arXiv:1807.08792.
- 【Pruning】Abhimanyu Dubey, Moitreya Chatterjee, Narendra Ahuja .Coreset-Based Neural Network Compression .[J] arXiv preprint arXiv:1807.09810.
【code:metro-smiles/CNN_Compression】 - 【Quantization】Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, Gang Hua .LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks .[J] arXiv preprint arXiv:1807.10029.
【code:microsoft/LQ-Nets】 - Hongyu Guo, Yongyi Mao, Richong Zhang .Aggregated Learning: A Vector Quantization Approach to Learning with Neural Networks .[J] arXiv preprint arXiv:1807.10251.
- Xavier Suau, Luca Zappella, Vinay Palakkode, Nicholas Apostoloff .Principal Filter Analysis for Guided Network Compression .[J] arXiv preprint arXiv:1807.10585.
- Jin Hee Kim, Brett Grady, Ruolong Lian, John Brothers, Jason H. Anderson .FPGA-Based CNN Inference Accelerator Synthesized from Multi-Threaded C Software .[J] arXiv preprint arXiv:1807.10695.
- Ling Liang, Lei Deng, Yueling Zeng, Xing Hu, Yu Ji, Xin Ma, Guoqi Li, Yuan Xie .Crossbar-aware neural network pruning .[J] arXiv preprint arXiv:1807.10816.
- Tianyun Zhang, Kaiqi Zhang, Shaokai Ye, Jiayu Li, Jian Tang, Wujie Wen, Xue Lin, Makan Fardad, Yanzhi Wang .ADAM-ADMM: A Unified, Systematic Framework of Structured Weight Pruning for DNNs .[J] arXiv preprint arXiv:1807.11091.
- 【Structure】【Shufflenet V2】Ma N, Zhang X, Zheng H T, et al. Shufflenet v2: Practical guidelines for efficient cnn architecture design[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 116-131.
- 【Structure】Chen Y, Kalantidis Y, Li J, et al. Multi-fiber networks for video recognition[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 352-367.
【code:cypw/PyTorch-MFNet】 - 【Low rank】Bo Peng, Wenming Tan, Zheyang Li, Shun Zhang, Di Xie, Shiliang Pu .Extreme Network Compression via Filter Group Approximation .[J] arXiv preprint arXiv:1807.11254.
- 【Structure】Tan M, Chen B, Pang R, et al. Mnasnet: Platform-aware neural architecture search for mobile[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 2820-2828.
【code:tensorflow/tpu】 - David M. Chan, Roshan Rao, Forrest Huang, John F. Canny .t-SNE-CUDA: GPU-Accelerated t-SNE and its Applications to Modern Data .[J] arXiv preprint arXiv:1807.11824.
- Ini Oguntola, Subby Olubeko, Christopher Sweeney .SlimNets: An Exploration of Deep Model Compression and Acceleration .[J] arXiv preprint arXiv:1808.00496.
- Zhanxuan Hu, Feiping Nie, Lai Tian, Rong Wang, Xuelong Li .A Comprehensive Survey for Low Rank Regularization .[J] arXiv preprint arXiv:1808.04521.
- Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin .Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks .[J] arXiv preprint arXiv:1808.05240.
- Denis A. Gudovskiy, Alec Hodgkinson, Luca Rigazio .DNN Feature Map Compression using Learned Representation over GF(2) .[J] arXiv preprint arXiv:1808.05285.
- Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Youngjun Kwak, Jae-Joon Han, Changkyu Choi .Joint Training of Low-Precision Neural Network with Quantization Interval Parameters .[J] arXiv preprint arXiv:1808.05779.
- 【Pruning】Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang .Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks .[J] arXiv preprint arXiv:1808.06866.
【code:he-y/soft-filter-pruning】 - Yang He, Xuanyi Dong, Guoliang Kang, Yanwei Fu, Yi Yang .Progressive Deep Neural Networks Acceleration via Soft Filter Pruning .[J] arXiv preprint arXiv:1808.07471.
- Ali Athar .An Overview of Datatype Quantization Techniques for Convolutional Neural Networks .[J] arXiv preprint arXiv:1808.07530.
- Yichen Zhou, Zhengze Zhou, Giles Hooker .Approximation Trees: Statistical Stability in Model Distillation .[J] arXiv preprint arXiv:1808.07573.
- Taiji Suzuki, Hiroshi Abe, Tomoya Murata, Shingo Horiuchi, Kotaro Ito, Tokuma Wachi, So Hirai, Masatoshi Yukishima, Tomoaki Nishimura .Spectral-Pruning: Compressing deep neural network via spectral analysis .[J] arXiv preprint arXiv:1808.08558.
- Junran Peng, Lingxi Xie, Zhaoxiang Zhang, Tieniu Tan, Jingdong Wang .Accelerating Deep Neural Networks with Spatial Bottleneck Modules .[J] arXiv preprint arXiv:1809.02601.
- Abdallah Moussawi, Kamal Haddad, Anthony Chahine .An FPGA-Accelerated Design for Deep Learning Pedestrian Detection in Self-Driving Vehicles .[J] arXiv preprint arXiv:1809.05879.
- 【Distillation】Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection, Yongcheng Liu, Lu Sheng, Jing Shao, Junjie Yan, Shiming Xiang, Chunhong Pan, 2018
- Jiaxi Tang, Ke Wang .Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System .[J] arXiv preprint arXiv:1809.07428.
- Matthias Springer .SoaAlloc: Accelerating Single-Method Multiple-Objects Applications on GPUs .[J] arXiv preprint arXiv:1809.07444.
- 【Structure】Huasong Zhong, Xianggen Liu, Yihui He, Yuchun Ma .Shift-based Primitives for Efficient Convolutional Neural Networks .[J] arXiv preprint arXiv:1809.08458
- Jeffrey L Mckinstry, Davis R. Barch, Deepika Bablani, Michael V. Debole, Steven K. Esser, Jeffrey A. Kusnitz, John V. Arthur, Dharmendra S. Modha .Low Precision Policy Distillation with Application to Low-Power, Real-time Sensation-Cognition-Action Loop with Neuromorphic Computing .[J] arXiv preprint arXiv:1809.09260.
- Raphael Tang, Jimmy Lin .Adaptive Pruning of Neural Language Models for Mobile Devices .[J] arXiv preprint arXiv:1809.10282.
- 【other】Oyallon E, Belilovsky E, Zagoruyko S, et al. Compressing the Input for CNNs with the First-Order Scattering Transform[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 301-316.
- Jacob R. Gardner, Geoff Pleiss, David Bindel, Kilian Q. Weinberger, Andrew Gordon Wilson .GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration .[J] arXiv preprint arXiv:1809.11165.
- Chaim Baskin, Natan Liss, Yoav Chai, Evgenii Zheltonozhskii, Eli Schwartz, Raja Giryes, Avi Mendelson, Alexander M. Bronstein .NICE: Noise Injection and Clamping Estimation for Neural Network Quantization .[J] arXiv preprint arXiv:1810.00162.
- Simon Alford, Ryan Robinett, Lauren Milechin, Jeremy Kepner .Pruned and Structurally Sparse Neural Networks .[J] arXiv preprint arXiv:1810.00299.
- Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato .Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters .[J] arXiv preprint arXiv:1810.00440.
- Ting-Wu Chin, Cha Zhang, Diana Marculescu .Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks .[J] arXiv preprint arXiv:1810.00518.
- 【Pruning】Liu L, Deng L, Hu X, et al. Dynamic sparse graph for efficient deep learning[J]. arXiv preprint arXiv:1810.00859, 2018.
【code:mtcrawshaw/dynamic-sparse-graph】 - Bai Y, Wang Y X, Liberty E. Proxquant: Quantized neural networks via proximal operators[J]. arXiv preprint arXiv:1810.00861, 2018.
【code:allenbai01/ProxQuant】 - Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, Max Welling .Relaxed Quantization for Discretized Neural Networks .[J] arXiv preprint arXiv:1810.01875.
- Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia .LIT: Block-wise Intermediate Representation Training for Model Compression .[J] arXiv preprint arXiv:1810.01937.
- 【Quantization】Fu C, Zhu S, Su H, et al. Towards fast and energy-efficient binarized neural network inference on fpga[J]. arXiv preprint arXiv:1810.02068, 2018.
- Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré .Learning Compressed Transforms with Low Displacement Rank .[J] arXiv preprint arXiv:1810.02309.
- 【Pruning】Namhoon Lee, Thalaiyasingam Ajanthan, Philip H. S. Torr .SNIP: Single-shot Network Pruning based on Connection Sensitivity .[J] arXiv preprint arXiv:1810.02340.
【code:namhoonlee/snip-public】 - Lukas Cavigelli, Luca Benini .Extended Bit-Plane Compression for Convolutional Neural Network Accelerators .[J] arXiv preprint arXiv:1810.03979.
- Kyle D. Julian, Mykel J. Kochenderfer, Michael P. Owen .Deep Neural Network Compression for Aircraft Collision Avoidance Systems .[J] arXiv preprint arXiv:1810.04240.
- Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle .Pruning neural networks: is it time to nip it in the bud? .[J] arXiv preprint arXiv:1810.04622.
- 【Pruning】Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell .Rethinking the Value of Network Pruning .[J] arXiv preprint arXiv:1810.05270.
【code:Eric-mingjie/rethinking-network-pruning】 - 【Pruning】Xitong Gao, Yiren Zhao, Lukasz Dudziak, Robert Mullins, Cheng-zhong Xu .Dynamic Channel Pruning: Feature Boosting and Suppression .[J] arXiv preprint arXiv:1810.05331.
【code:deep-fry/mayo】 - Ke Song, Chun Yuan, Peng Gao, Yunxu Sun .FPGA-based Acceleration System for Visual Tracking .[J] arXiv preprint arXiv:1810.05367.
- Jun Haeng Lee, Sangwon Ha, Saerom Choi, Won-Jo Lee, Seungwon Lee .Quantization for Rapid Deployment of Deep Neural Networks .[J] arXiv preprint arXiv:1810.05488.
- Ron Banner, Yury Nahshan, Elad Hoffer, Daniel Soudry .ACIQ: Analytical Clipping for Integer Quantization of neural networks .[J] arXiv preprint arXiv:1810.05723.
- Weihao Gao, Chong Wang, Sewoong Oh .Rate Distortion For Model Compression: From Theory To Practice .[J] arXiv preprint arXiv:1810.06401.
- 【Pruning】Zhuwei Qin, Fuxun Yu, Chenchen Liu, Liang Zhao, Xiang Chen .Interpretable Convolutional Filter Pruning .[J] arXiv preprint arXiv:1810.07322.
- 【Pruning】Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, Yunfei Yang, Fuxun Yu, Jian Tang, Makan Fardad, Sijia Liu, Xiang Chen, Xue Lin, Yanzhi Wang .Progressive Weight Pruning of Deep Neural Networks using ADMM .[J] arXiv preprint arXiv:1810.07378.
- Artur Jordao, Fernando Yamada, William Robson Schwartz .Pruning Deep Neural Networks using Partial Least Squares .[J] arXiv preprint arXiv:1810.07610.
- D.Babin, I.Mazurenko, D.Parkhomenko, A.Voloshko .CNN inference acceleration using dictionary of centroids .[J] arXiv preprint arXiv:1810.08612.
- Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang .To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference .[J] arXiv preprint arXiv:1810.08899.
- Joris Roels, Jonas De Vylder, Jan Aelterman, Yvan Saeys, Wilfried Philips .Convolutional Neural Network Pruning to Accelerate Membrane Segmentation in Electron Microscopy .[J] arXiv preprint arXiv:1810.09735.
- Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei Huang, Feng Yan, Hai Li, Yiran Chen .Differentiable Fine-grained Quantization for Deep Neural Network Compression .[J] arXiv preprint arXiv:1810.10351.
- Jack Turner, Elliot J. Crowley, Valentin Radu, José Cano, Amos Storkey, Michael O'Boyle .HAKD: Hardware Aware Knowledge Distillation .[J] arXiv preprint arXiv:1810.10460.
- Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov .Bayesian Compression for Natural Language Processing .[J] arXiv preprint arXiv:1810.10927.
- Amichai Painsky, Saharon Rosset .Lossless (and Lossy) Compression of Random Forests .[J] arXiv preprint arXiv:1810.11197.
- 【Pruning】Zhuangwei Zhuang, Mingkui Tan, Bohan Zhuang, Jing Liu, Yong Guo, Qingyao Wu, Junzhou Huang, Jinhui Zhu .Discrimination-aware Channel Pruning for Deep Neural Networks .[J] arXiv preprint arXiv:1810.11809.
【code:SCUT-AILab/DCP】 - 【other】Dongsoo Lee, Parichay Kapoor, Byeongwook Kim .DeepTwist: Learning Model Compression via Occasional Weight Distortion .[J] arXiv preprint arXiv:1810.12823.
- 【Distillation】Akhilesh Gotmare, Nitish Shirish Keskar, Caiming Xiong, Richard Socher .A Closer Look at Deep Learning Heuristics: Learning rate restarts, Warmup and Distillation .[J] arXiv preprint arXiv:1810.13243.
- Doyun Kim, Han Young Yim, Sanghyuck Ha, Changgwun Lee, Inyup Kang .Convolutional Neural Network Quantization using Generalized Gamma Distribution .[J] arXiv preprint arXiv:1810.13329.
- 【Pruning】Yang He, Ping Liu, Ziwei Wang, Yi Yang .Pruning Filter via Geometric Median for Deep Convolutional Neural Networks Acceleration .[J] arXiv preprint arXiv:1811.00250.
【code:he-y/filter-pruning-geometric-median】 - Xiaofan Xu, Mi Sun Park, Cormac Brick .Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices .[J] arXiv preprint arXiv:1811.00482.
- Anish Acharya, Rahul Goel, Angeliki Metallinou, Inderjit Dhillon .Online Embedding Compression for Text Classification using Low Rank Matrix Factorization .[J] arXiv preprint arXiv:1811.00641.
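For the low-rank entries such as the one above, a minimal truncated-SVD factorization sketch showing the parameter saving when a dense m x n layer is replaced by m x r and r x n factors; the sizes and rank are made up:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Split W (m x n) into A (m x rank) and B (rank x n) via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]
    B = Vt[:rank, :]
    return A, B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, r = 10000, 300, 64                 # hypothetical vocab x embedding-dim matrix
    W = rng.normal(size=(m, n)).astype(np.float32)
    A, B = low_rank_factorize(W, r)
    params_before, params_after = m * n, r * (m + n)
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"params: {params_before} -> {params_after}, relative error {rel_err:.3f}")
```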
- Ahmed T. Elthakeb, Prannoy Pilligundla, Amir Yazdanbakhsh, Sean Kinzer, Hadi Esmaeilzadeh .ReLeQ: A Reinforcement Learning Approach for Deep Quantization of Neural Networks .[J] arXiv preprint arXiv:1811.01704.
- Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Jiaming Xie, Yun Liang, Sijia Liu, Xue Lin, Yanzhi Wang .A Unified Framework of DNN Weight Pruning and Weight Clustering/Quantization Using ADMM .[J] arXiv preprint arXiv:1811.01907.
- Yulhwa Kim, Hyungjun Kim, Jae-Joon Kim .Neural Network-Hardware Co-design for Scalable RRAM-based BNN Accelerators .[J] arXiv preprint arXiv:1811.02187.
- Zhuwei Qin, Fuxun Yu, ChenChen Liu, Xiang Chen .Demystifying Neural Network Filter Pruning .[J] arXiv preprint arXiv:1811.02639.
- 【Distillation】Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks, Yuenan Hou, Zheng Ma, Chunxiao Liu, Chen Change Loy, 2018
- 【Distillation】Fuxun Yu, Zhuwei Qin, Xiang Chen .Distilling Critical Paths in Convolutional Neural Networks .[J] arXiv preprint arXiv:1811.02643.
- 【Distillation】YASENN: Explaining Neural Networks via Partitioning Activation Sequences, Yaroslav Zharov, Denis Korzhenkov, Pavel Shvechikov, Alexander Tuzhilin, 2018
- Byeongho Heo, Minsik Lee, Sangdoo Yun, Jin Young Choi .Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons .[J] arXiv preprint arXiv:1811.03233.
- 【Quantization】Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr .GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training .[J] arXiv preprint arXiv:1811.03617.
- Ching-Yun Ko, Cong Chen, Yuke Zhang, Kim Batselier, Ngai Wong .Deep Compression of Sum-Product Networks on Tensor Networks .[J] arXiv preprint arXiv:1811.03963.
- Raden Mu'az Mun'im, Nakamasa Inoue, Koichi Shinoda .Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition .[J] arXiv preprint arXiv:1811.04531.
- Samyak Parajuli, Aswin Raghavan, Sek Chai .Generalized Ternary Connect: End-to-End Learning and Compression of Multiplication-Free Deep Neural Networks .[J] arXiv preprint arXiv:1811.04985.
- Ji Wang, Weidong Bao, Lichao Sun, Xiaomin Zhu, Bokai Cao, Philip S. Yu .Private Model Compression via Knowledge Distillation .[J] arXiv preprint arXiv:1811.05072.
- Fabien Cardinaux, Stefan Uhlich, Kazuki Yoshiyama, Javier Alonso García, Stephen Tiedemann, Thomas Kemp, Akira Nakamura .Iteratively Training Look-Up Tables for Network Quantization .[J] arXiv preprint arXiv:1811.05355.
- 【Distillation】Fast Human Pose Estimation, Feng Zhang, Xiatian Zhu, Mao Ye, 2019
- Miguel de Prado, Maurizio Denna, Luca Benini, Nuria Pazos .QUENN: QUantization Engine for low-power Neural Networks .[J] arXiv preprint arXiv:1811.05896.
- Hang Lu, Xin Wei, Ning Lin, Guihai Yan, and Xiaowei Li .Tetris: Re-architecting Convolutional Neural Network Computation for Machine Learning Accelerators .[J] arXiv preprint arXiv:1811.06841.
- 【Pruning】Aaditya Prakash, James Storer, Dinei Florencio, Cha Zhang .RePr: Improved Training of Convolutional Filters .[J] arXiv preprint arXiv:1811.07275.
- Georgios Tsitsikas, Evangelos E. Papalexakis .The core consistency of a compressed tensor .[J] arXiv preprint arXiv:1811.07428.
- Yu Pan, Jing Xu, Maolin Wang, Jinmian Ye, Fei Wang, Kun Bai, Zenglin Xu .Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition .[J] arXiv preprint arXiv:1811.07503.
- Yuxin Zhang, Huan Wang, Yang Luo, Roland Hu .Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method .[J] arXiv preprint arXiv:1811.07555.
- Pengyuan Ren, Jianmin Li .Factorized Distillation: Training Holistic Person Re-identification Model by Distilling an Ensemble of Partial ReID Models .[J] arXiv preprint arXiv:1811.08073.
- Travis Desell .Accelerating the Evolution of Convolutional Neural Networks with Node-Level Mutations and Epigenetic Weight Initialization .[J] arXiv preprint arXiv:1811.08286.
- Pravendra Singh, Vinay Sameer Raja Kadi, Nikhil Verma, Vinay P. Namboodiri .Stability Based Filter Pruning for Accelerating Deep CNNs .[J] arXiv preprint arXiv:1811.08321.
- Pravendra Singh, Manikandan R, Neeraj Matiyali, Vinay P. Namboodiri .Multi-layer Pruning Framework for Compressing Single Shot MultiBox Detector .[J] arXiv preprint arXiv:1811.08342.
- Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu .Structured Pruning for Efficient ConvNets via Incremental Regularization .[J] arXiv preprint arXiv:1811.08390.
- Mengdi Wang, Qing Zhang, Jun Yang, Xiaoyuan Cui, Wei Lin .Graph-Adaptive Pruning for Efficient Inference of Convolutional Neural Networks .[J] arXiv preprint arXiv:1811.08589.
- Yifan Yang, Qijing Huang, Bichen Wu, Tianjun Zhang, Liang Ma, Giulio Gambardella, Michaela Blott, Luciano Lavagno, Kees Vissers, John Wawrzynek, Kurt Keutzer .Synetgy: Algorithm-hardware Co-design for ConvNet Accelerators on Embedded FPGAs .[J] arXiv preprint arXiv:1811.08634.
- Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, Song Han .HAQ: Hardware-Aware Automated Quantization .[J] arXiv preprint arXiv:1811.08886.
- 【Pruning】Carl Lemaire, Andrew Achkar, Pierre-Marc Jodoin .Structured Pruning of Neural Networks with Budget-Aware Regularization .[J] arXiv preprint arXiv:1811.09332.
- Yukang Chen, Gaofeng Meng, Qian Zhang, Xinbang Zhang, Liangchen Song, Shiming Xiang, Chunhong Pan .Joint Neural Architecture Search and Quantization .[J] arXiv preprint arXiv:1811.09426.
- Maxim Naumov, Utku Diril, Jongsoo Park, Benjamin Ray, Jedrzej Jablonski, Andrew Tulloch .On Periodic Functions as Regularizers for Quantization of Neural Networks .[J] arXiv preprint arXiv:1811.09862.
- Shiming Ge, Shengwei Zhao, Chenyu Li, Jia Li .Low-resolution Face Recognition in the Wild via Selective Knowledge Distillation .[J] arXiv preprint arXiv:1811.09998.
- Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros .Dataset Distillation .[J] arXiv preprint arXiv:1811.10959.
- Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri .Leveraging Filter Correlations for Deep Model Compression .[J] arXiv preprint arXiv:1811.10559.
- 【Structure】Mehta S, Rastegari M, Shapiro L, et al. Espnetv2: A light-weight, power efficient, and general purpose convolutional neural network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 9190-9200.
【code:sacmehta/ESPNetv2】 - Luna M. Zhang .Effective, Fast, and Memory-Efficient Compressed Multi-function Convolutional Neural Networks for More Accurate Medical Image Classification .[J] arXiv preprint arXiv:1811.11996.
- 【Low rank】Hyeji Kim, Muhammad Umar Karim, Chong-Min Kyung .A Framework for Fast and Efficient Neural Network Compression .[J] arXiv preprint arXiv:1811.12781.
【code:Hyeji-Kim/ENC】 - Bichen Wu, Yanghan Wang, Peizhao Zhang, Yuandong Tian, Peter Vajda, Kurt Keutzer .Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search .[J] arXiv preprint arXiv:1812.00090.
- Chenglin Yang, Lingxi Xie, Chi Su, Alan L. Yuille .Snapshot Distillation: Teacher-Student Optimization in One Generation .[J] arXiv preprint arXiv:1812.00123.
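As a common reference point for the teacher-student entries in this list, a minimal sketch of the standard temperature-softened distillation loss (Hinton-style KL divergence between teacher and student); the logits below are random placeholders, not outputs of any real model:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)             # T^2 keeps the gradient scale comparable

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher = rng.normal(size=(8, 10))            # hypothetical batch of 8, 10 classes
    student = rng.normal(size=(8, 10))
    print(f"KD loss: {distillation_loss(student, teacher):.4f}")
```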
- 【Structure】Cai H, Zhu L, Han S. Proxylessnas: Direct neural architecture search on target task and hardware[J]. arXiv preprint arXiv:1812.00332, 2018.
- Yuefu Zhou, Ya Zhang, Yanfeng Wang, Qi Tian .Network Compression via Recursive Bayesian Pruning .[J] arXiv preprint arXiv:1812.00353.
- Wei-Chun Chen, Chia-Che Chang, Chien-Yu Lu, Che-Rung Lee .Knowledge Distillation with Feature Maps for Image Classification .[J] arXiv preprint arXiv:1812.00660.
- Christian Pinto, Yiannis Gkoufas, Andrea Reale, Seetharami Seelam, Steven Eliuk .Hoard: A Distributed Data Caching System to Accelerate Deep Learning Training on the Cloud .[J] arXiv preprint arXiv:1812.00669.
- Minghan Li, Tanli Zuo, Ruicheng Li, Martha White, Weishi Zheng .Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling .[J] arXiv preprint arXiv:1812.00914.
- Ahmed Abdelatty, Pracheta Sahoo, Chiradeep Roy .Structure Learning Using Forced Pruning .[J] arXiv preprint arXiv:1812.00975.
- Sourya Dey, Kuan-Wen Huang, Peter A. Beerel, Keith M. Chugg .Pre-Defined Sparse Neural Networks with Hardware Acceleration .[J] arXiv preprint arXiv:1812.01164.
- Sascha Saralajew, Lars Holdijk, Maike Rees, Thomas Villmann .Prototype-based Neural Network Layers: Incorporating Vector Quantization .[J] arXiv preprint arXiv:1812.01214.
- KouZi Xing .Training for 'Unstable' CNN Accelerator:A Case Study on FPGA .[J] arXiv preprint arXiv:1812.01689.
- Haichuan Yang, Yuhao Zhu, Ji Liu .ECC: Energy-Constrained Deep Neural Network Compression via a Bilinear Regression Model .[J] arXiv preprint arXiv:1812.01803.
- 【Distillation】Tianhong Li, Jianguo Li, Zhuang Liu, Changshui Zhang .Knowledge Distillation from Few Samples .[J] arXiv preprint arXiv:1812.01839.
- Haipeng Jia, Xueshuang Xiang, Da Fan, Meiyu Huang, Changhao Sun, Qingliang Meng, Yang He, Chen Chen .DropPruning for Model Compression .[J] arXiv preprint arXiv:1812.02035.
- Ruishan Liu, Nicolo Fusi, Lester Mackey .Model Compression with Generative Adversarial Networks .[J] arXiv preprint arXiv:1812.02271.
- Yuhui Xu, Shuai Zhang, Yingyong Qi, Jiaxian Guo, Weiyao Lin, Hongkai Xiong .DNQ: Dynamic Network Quantization .[J] arXiv preprint arXiv:1812.02375.
- Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong .Trained Rank Pruning for Efficient Deep Neural Networks .[J] arXiv preprint arXiv:1812.02402.
- 【Distillation】MEAL: Multi-Model Ensemble via Adversarial Learning, Zhiqiang Shen, Zhankui He, Xiangyang Xue, 2019
- Ravi Teja Mullapudi, Steven Chen, Keyi Zhang, Deva Ramanan, Kayvon Fatahalian .Online Model Distillation for Efficient Video Inference .[J] arXiv preprint arXiv:1812.02699.
- Idoia Ruiz, Bogdan Raducanu, Rakesh Mehta, Jaume Amores .Optimizing Speed/Accuracy Trade-Off for Person Re-identification via Knowledge Distillation .[J] arXiv preprint arXiv:1812.02937.
- Wei Wang, Liqiang Zhu .Reliable Identification of Redundant Kernels for Convolutional Neural Network Compression .[J] arXiv preprint arXiv:1812.03608.
- Somak Aditya, Rudra Saha, Yezhou Yang, Chitta Baral .Spatial Knowledge Distillation to aid Visual Reasoning .[J] arXiv preprint arXiv:1812.03631.
- Salonik Resch, S. Karen Khatamifard, Zamshed Iqbal Chowdhury, Masoud Zabihi, Zhengyang Zhao, Jian-Ping Wang, Sachin S. Sapatnekar, Ulya R. Karpuzcu .Exploiting Processing in Non-Volatile Memory for Binary Neural Network Accelerators .[J] arXiv preprint arXiv:1812.03989.
- Georgios Georgiadis .Accelerating Convolutional Neural Networks via Activation Map Compression .[J] arXiv preprint arXiv:1812.04056.
- Thalaiyasingam Ajanthan, Puneet K. Dokania, Richard Hartley, Philip H. S. Torr .Proximal Mean-field for Neural Network Quantization .[J] arXiv preprint arXiv:1812.04353.
- Yuchao Li, Shaohui Lin, Baochang Zhang, Jianzhuang Liu, David Doermann, Yongjian Wu, Feiyue Huang, Rongrong Ji .Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression .[J] arXiv preprint arXiv:1812.04368.
- Weijie Chen, Yuan Zhang, Di Xie, Shiliang Pu .A Layer Decomposition-Recomposition Framework for Neuron Pruning towards Accurate Lightweight Networks .[J] arXiv preprint arXiv:1812.06611.
- Alexey Kruglov .Channel-wise pruning of neural networks with tapering resource constraint .[J] arXiv preprint arXiv:1812.07060.
- Mohammad Motamedi, Felix Portillo, Daniel Fong, Soheil Ghiasi .Distill-Net: Application-Specific Distillation of Deep Convolutional Neural Networks for Resource-Constrained IoT Platforms .[J] arXiv preprint arXiv:1812.07390.
- Mohammad Hossein Samavatian, Anys Bacha, Li Zhou, Radu Teodorescu .RNNFast: An Accelerator for Recurrent Neural Networks Using Domain Wall Memory .[J] arXiv preprint arXiv:1812.07609.
- Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev .Fast Adjustable Threshold For Uniform Neural Network Quantization .[J] arXiv preprint arXiv:1812.07872.
- Xin Li, Shuai Zhang, Bolan Jiang, Yingyong Qi, Mooi Choo Chuah, Ning Bi .DAC: Data-free Automatic Acceleration of Convolutional Networks .[J] arXiv preprint arXiv:1812.08374.
- 【Structured】【SlimmableNet】Yu J, Yang L, Xu N, et al. Slimmable neural networks[J]. arXiv preprint arXiv:1812.08928, 2018.
【code:JiahuiYu/slimmable_networks】 - Eunhyeok Park, Dongyoung Kim, Sungjoo Yoo, Peter Vajda .Precision Highway for Ultra Low-Precision Quantization .[J] arXiv preprint arXiv:1812.09818.
- Tailin Liang, Lei Wang, Shaobo Shi, John Glossner .Dynamic Runtime Feature Map Pruning .[J] arXiv preprint arXiv:1812.09922.
- Darabi S, Belbahri M, Courbariaux M, et al. BNN+: Improved binary network training[J]. arXiv preprint arXiv:1812.11800, 2018.
- Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran .Studying the Plasticity in Deep Convolutional Neural Networks using Random Pruning .[J] arXiv preprint arXiv:1812.10240.
- Xuan Liu, Xiaoguang Wang, Stan Matwin .Improving the Interpretability of Deep Neural Networks with Knowledge Distillation .[J] arXiv preprint arXiv:1812.10924.
- Ghouthi Boukli Hacene, Vincent Gripon, Matthieu Arzel, Nicolas Farrugia, Yoshua Bengio .Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks .[J] arXiv preprint arXiv:1812.11337.
- Charbel Sakr, Naresh Shanbhag .Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm .[J] arXiv preprint arXiv:1812.11732.
- 【Structured】【EfficientNet】Tan M, Le Q V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks[J]. arXiv preprint arXiv:1905.11946, 2019.
【code:tensorflow/tpu】 - 【Low rank】Chen T, Lin J, Lin T, et al. Adaptive mixture of low-rank factorizations for compact neural modeling[J]. 2018.
【code:zuenko/ALRF】 - 【Pruning】Mehta D, Kim K I, Theobalt C. On implicit filter level sparsity in convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 520-528.
【code:mehtadushy/SelecSLS-Pytorch】 - 【Pruning】Peng H, Wu J, Chen S, et al. Collaborative Channel Pruning for Deep Networks[C]//International Conference on Machine Learning. 2019: 5113-5122.
- 【Pruning】Zhao C, Ni B, Zhang J, et al. Variational Convolutional Neural Network Pruning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 2780-2789.
- 【Pruning】Li J, Qi Q, Wang J, et al. OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 7046-7055.
- Alizadeh M, Fernández-Marqués J, Lane N D, et al. An Empirical study of Binary Neural Networks' Optimisation[J]. 2018.
- Wang Z, Lu J, Tao C, et al. Learning Channel-Wise Interactions for Binary Convolutional Neural Networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 568-577.
- Xu Y, Dong X, Li Y, et al. A Main/Subsidiary Network Framework for Simplifying Binary Neural Networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 7154-7162.
- Ding R, Chin T W, Liu Z, et al. Regularizing activation distribution for training binarized deep networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 11408-11417.
【code:ruizhoud/DistributionLoss】 - Quantization Networks
- Liu C, Ding W, Xia X, et al. Circulant Binary Convolutional Networks: Enhancing the Performance of 1-bit DCNNs with Circulant Back Propagation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 2691-2699.
- Zhuang B, Shen C, Tan M, et al. Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 413-422.
- Accurate and Efficient 2-bit Quantized Neural Networks, SysML 2019.
- Jung S, Son C, Lee S, et al. Learning to quantize deep networks by optimizing quantization intervals with task loss[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 4350-4359.
- Ahmad Shawahna, Sadiq M. Sait, Aiman El-Maleh .FPGA-based Accelerators of Deep Learning Networks for Learning and Classification: A Review .[J] arXiv preprint arXiv:1901.00121.
- Shitao Tang, Litong Feng, Wenqi Shao, Zhanghui Kuang, Wei Zhang, Yimin Chen .Learning Efficient Detector with Semi-supervised Adaptive Distillation .[J] arXiv preprint arXiv:1901.00366.
- Tong Geng, Tianqi Wang, Ang Li, Xi Jin, Martin Herbordt .A Scalable Framework for Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters with Weight and Workload Balancing .[J] arXiv preprint arXiv:1901.01007.
- Zehua Cheng, Zhenghua Xu .Bandwidth Reduction using Importance Weighted Pruning on Ring AllReduce .[J] arXiv preprint arXiv:1901.01544.
- Suraj Mishra, Peixian Liang, Adam Czajka, Danny Z. Chen, X. Sharon Hu .CC-Net: Image Complexity Guided Network Compression for Biomedical Image Segmentation .[J] arXiv preprint arXiv:1901.01578.
- Xue Geng, Jie Fu, Bin Zhao, Jie Lin, Mohamed M. Sabry Aly, Christopher Pal, Vijay Chandrasekhar .Dataflow-based Joint Quantization of Weights and Activations for Deep Neural Networks .[J] arXiv preprint arXiv:1901.02064.
- Jiecao Yu, Jongsoo Park, Maxim Naumov .Spatial-Winograd Pruning Enabling Sparse Winograd Convolution .[J] arXiv preprint arXiv:1901.02132.
- Hyun-Joo Jung, Jaedeok Kim, Yoonsuck Choe .How Compact?: Assessing Compactness of Representations through Layer-Wise Pruning .[J] arXiv preprint arXiv:1901.02757.
- Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar .CodeX: Bit-Flexible Encoding for Streaming-based FPGA Acceleration of DNNs .[J] arXiv preprint arXiv:1901.05582. 【code:MohammadSamragh/CodeX】
- Jiemin Fang, Yukang Chen, Xinbang Zhang, Qian Zhang, Chang Huang, Gaofeng Meng, Wenyu Liu, Xinggang Wang .EAT-NAS: Elastic Architecture Transfer for Accelerating Large-scale Neural Architecture Search .[J] arXiv preprint arXiv:1901.05884. 【code:JaminFong/EAT-NAS】
- Saeed Karimi-Bidhendi, Jun Guo, Hamid Jafarkhani .Using Quantization to Deploy Heterogeneous Nodes in Two-Tier Wireless Sensor Networks .[J] arXiv preprint arXiv:1901.06742.
- Jinrong Guo, Wantao Liu, Wang Wang, Qu Lu, Songlin Hu, Jizhong Han, Ruixuan Li .AccUDNN: A GPU Memory Efficient Accelerator for Training Ultra-deep Deep Neural Networks .[J] arXiv preprint arXiv:1901.06773.
- Zhiwen Zuo, Lei Zhao, Liwen Zuo, Feng Jiang, Wei Xing, Dongming Lu .On Compression of Unsupervised Neural Nets by Pruning Weak Connections .[J] arXiv preprint arXiv:1901.07066.
- Shaohui Lin, Rongrong Ji, Yuchao Li, Cheng Deng, Xuelong Li .Towards Compact ConvNets via Structure-Sparsity Regularized Filter Pruning .[J] arXiv preprint arXiv:1901.07827. 【code:ShaohuiLin/SSR】
- Sam Green, Craig M. Vineyard, Çetin Kaya Koç .Distillation Strategies for Proximal Policy Optimization .[J] arXiv preprint arXiv:1901.08128.
- Li Yue, Zhao Weibin, Shang Lin .Really should we pruning after model be totally trained? Pruning based on a small amount of training .[J] arXiv preprint arXiv:1901.08455.
- Sian Jin, Sheng Di, Xin Liang, Jiannan Tian, Dingwen Tao, Franck Cappello .DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression .[J] arXiv preprint arXiv:1901.09124.
- Sangkug Lym, Esha Choukse, Siavash Zangeneh, Wei Wen, Mattan Erez, Sujay Shanghavi .PruneTrain: Gradual Structured Pruning from Scratch for Faster Neural Network Training .[J] arXiv preprint arXiv:1901.09290.
- Yuheng Bu, Weihao Gao, Shaofeng Zou, Venugopal V. Veeravalli .Information-Theoretic Understanding of Population Risk Improvement with Model Compression .[J] arXiv preprint arXiv:1901.09421. 【code:aaron-xichen/pytorch-playground】
- Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, Zhiru Zhang .Improving Neural Network Quantization without Retraining using Outlier Channel Splitting .[J] arXiv preprint arXiv:1901.09504. 【code:cornell-zhang/dnn-quant-ocs】
- Valentin Khrulkov, Oleksii Hrinchuk, Leyla Mirvakhabova, Ivan Oseledets .Tensorized Embedding Layers for Efficient Model Compression .[J] arXiv preprint arXiv:1901.10787.
- Sina Shahhosseini, Ahmad Albaqsami, Masoomeh Jasemi, Shaahin Hessabi, Nader Bagherzadeh .Partition Pruning: Parallelization-Aware Pruning for Deep Neural Networks .[J] arXiv preprint arXiv:1901.11391.
- Bin Liu, Yue Cao, Mingsheng Long, Jianmin Wang, Jingdong Wang .Deep Triplet Quantization .[J] arXiv preprint arXiv:1902.00153.
- Angeline Aguinaldo, Ping-Yeh Chiang, Alex Gain, Ameya Patil, Kolten Pearson, Soheil Feizi .Compressing GANs using Knowledge Distillation .[J] arXiv preprint arXiv:1902.00159.
- Shengcao Cao, Xiaofang Wang, Kris M. Kitani .Learnable Embedding Space for Efficient Neural Architecture Compression .[J] arXiv preprint arXiv:1902.00383.
- Jie Zhang, Xiaolong Wang, Dawei Li, Shalini Ghosh, Abhishek Kolagunda, Yalin Wang .MICIK: MIning Cross-Layer Inherent Similarity Knowledge for Deep Model Compression .[J] arXiv preprint arXiv:1902.00918.
- Alberto Marchisio, Muhammad Shafique .CapStore: Energy-Efficient Design and Management of the On-Chip Memory for CapsuleNet Inference Accelerators .[J] arXiv preprint arXiv:1902.01151.
- Eldad Meller, Alexander Finkelstein, Uri Almog, Mark Grobman .Same, Same But Different - Recovering Neural Network Quantization Error Through Weight Factorization .[J] arXiv preprint arXiv:1902.01917.
- Wojciech Marian Czarnecki, Razvan Pascanu, Simon Osindero, Siddhant M. Jayakumar, Grzegorz Swirszcz, Max Jaderberg .Distilling Policy Distillation .[J] arXiv preprint arXiv:1902.02186.
- Artem M. Grachev, Dmitry I. Ignatov, Andrey V. Savchenko .Compression of Recurrent Neural Networks for Efficient Language Modeling .[J] arXiv preprint arXiv:1902.02380.
- Panagiotis G. Mousouliotis, Loukas P. Petrou .Software-Defined FPGA Accelerator Design for Mobile Deep Learning Applications .[J] arXiv preprint arXiv:1902.03192.
- Yingzhen Yang, Nebojsa Jojic, Jun Huan .FSNet: Compression of Deep Convolutional Neural Networks by Filter Summary .[J] arXiv preprint arXiv:1902.03264.
- Anubhav Ashok .Architecture Compression .[J] arXiv preprint arXiv:1902.03326.
- 【Distillation】Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Hassan Ghasemzadeh .Improved Knowledge Distillation via Teacher Assistant: Bridging the Gap Between Student and Teacher .[J] arXiv preprint arXiv:1902.03393. 【code:imirzadeh/Teacher-Assistant-Knowledge-Distillation】
- Shupeng Gui (1), Haotao Wang (2), Chen Yu (1), Haichuan Yang (1), Zhangyang Wang (2), Ji Liu (1) ((1) University of Rochester, (2) Texas A&M University) .Adversarially Trained Model Compression: When Robustness Meets Efficiency .[J] arXiv preprint arXiv:1902.03538.
- Dae-Woong Jeong, Jaehun Kim, Youngseok Kim, Tae-Ho Kim, Myungsu Chae .Effective Network Compression Using Simulation-Guided Iterative Pruning .[J] arXiv preprint arXiv:1902.04224.
- Sijia Chen, Bin Song, Xiaojiang Du, Nadra Guizani .Structured Bayesian Compression for Deep models in mobile enabled devices for connected healthcare .[J] arXiv preprint arXiv:1902.05429.
- Qian Lou, Lantao Liu, Minje Kim, Lei Jiang .AutoQB: AutoML for Network Quantization and Binarization on Mobile Devices .[J] arXiv preprint arXiv:1902.05690.
- Michael M. Saint-Antoine, Abhyudai Singh .Evaluating Pruning Methods in Gene Network Inference .[J] arXiv preprint arXiv:1902.06028.
- Chengcheng Li, Zi Wang, Xiangyang Wang, Hairong Qi .Single-shot Channel Pruning Based on Alternating Direction Method of Multipliers .[J] arXiv preprint arXiv:1902.06382.
- Zi Wang, Chengcheng Li, Dali Wang, Xiangyang Wang, Hairong Qi .Speeding up convolutional networks pruning with coarse ranking .[J] arXiv preprint arXiv:1902.06385.
- Yoni Choukroun, Eli Kravchik, Pavel Kisilev .Low-bit Quantization of Neural Networks for Efficient Inference .[J] arXiv preprint arXiv:1902.06822.
- Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha .Learned Step Size Quantization .[J] arXiv preprint arXiv:1902.08153.
- Hojjat Salehinejad, Shahrokh Valaee .Ising-Dropout: A Regularization Method for Training and Compression of Deep Neural Networks .[J] arXiv preprint arXiv:1902.08673.
- Ivan Chelombiev, Conor Houghton, Cian O'Donnell .Adaptive Estimators Show Information Compression in Deep Neural Networks .[J] arXiv preprint arXiv:1902.09037.
- Yiming Hu, Siyang Sun, Jianquan Li, Jiagang Zhu, Xingang Wang, Qingyi Gu .Multi-loss-aware Channel Pruning of Deep Networks .[J] arXiv preprint arXiv:1902.10364.
- Yiming Hu, Jianquan Li, Xianlei Long, Shenhua Hu, Jiagang Zhu, Xingang Wang, Qingyi Gu .Cluster Regularized Quantization for Deep Networks Compression .[J] arXiv preprint arXiv:1902.10370.
- Mohammad Farhadi, Yezhou Yang .TKD: Temporal Knowledge Distillation for Active Perception .[J] arXiv preprint arXiv:1903.01522.
- Xiaowei Xu .On the Quantization of Cellular Neural Networks for Cyber-Physical Systems .[J] arXiv preprint arXiv:1903.02048.
- Jiasong Wu, Hongshan Ren, Youyong Kong, Chunfeng Yang, Lotfi Senhadji, Huazhong Shu .Compressing complex convolutional neural network based on an improved deep compression algorithm .[J] arXiv preprint arXiv:1903.02358.
- Yiren Zhao, Xitong Gao, Daniel Bates, Robert Mullins, Cheng-Zhong Xu .Efficient and Effective Quantization for Sparse DNNs .[J] arXiv preprint arXiv:1903.03046.
- Weiran Wang .Everything old is new again: A multi-view learning approach to learning using privileged information and distillation .[J] arXiv preprint arXiv:1903.03694.
- 【Pruning】Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng .Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search .[J] arXiv preprint arXiv:1903.03777. 【code:lixincn2015/Partial-Order-Pruning】
- 【Distillation】Yifan Liu, Ke Chen, Chris Liu, Zengchang Qin, Zhenbo Luo, Jingdong Wang .Structured Knowledge Distillation for Semantic Segmentation .[J] arXiv preprint arXiv:1903.04197. 【code:irfanICMLL/structure_knowledge_distillation】
- Siavash Golkar, Michael Kagan, Kyunghyun Cho .Continual Learning via Neural Pruning .[J] arXiv preprint arXiv:1903.04476.
- 【Distillation】Knowledge Adaptation for Efficient Semantic Segmentation, Tong He, Chunhua Shen, Zhi Tian, Dong Gong, Changming Sun, Youliang Yan, 2019
- Breton Minnehan, Andreas Savakis .Cascaded Projection: End-to-End Network Compression and Acceleration .[J] arXiv preprint arXiv:1903.04988.
- 【Structured】【FE-Net】Chen W, Xie D, Zhang Y, et al. All You Need is a Few Shifts: Designing Efficient Convolutional Neural Networks for Image Classification[J]. arXiv preprint arXiv:1903.05285, 2019.
- Chen Feng, Tao Sheng, Zhiyu Liang, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, Matthew Ardi, Alexander C. Berg, Yiran Chen, Bo Chen, Kent Gauen, Yung-Hsiang Lu .Low Power Inference for On-Device Visual Recognition with a Quantization-Friendly Solution .[J] arXiv preprint arXiv:1903.06791.
- 【Pruning】Shaohui Lin, Rongrong Ji, Chenqian Yan, Baochang Zhang, Liujuan Cao, Qixiang Ye, Feiyue Huang, David Doermann .Towards Optimal Structured CNN Pruning via Generative Adversarial Learning .[J] arXiv preprint arXiv:1903.09291. 【code:ShaohuiLin/GAL】
- Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, Zhengang Li, Kaidi Xu, Wujie Wen, Sijia Liu, Jian Tang, Makan Fardad, Xue Lin, Yongpan Liu, Yanzhi Wang .Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM .[J] arXiv preprint arXiv:1903.09769.
- 【Low rank】Julia Gusak, Maksym Kholyavchenko, Evgeny Ponomarev, Larisa Markeeva, Ivan Oseledets, Andrzej Cichocki .One time is not enough: iterative tensor decomposition for neural network compression .[J] arXiv preprint arXiv:1903.09973. 【code:juliagusak/musco】
- Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Tim Kwang-Ting Cheng, Jian Sun .MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning .[J] arXiv preprint arXiv:1903.10258. 【code:liuzechun/MetaPruning】
- Abhishek Murthy, Himel Das, Md Ariful Islam .Robustness of Neural Networks to Parameter Quantization .[J] arXiv preprint arXiv:1903.10672.
- Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin .Distilling Task-Specific Knowledge from BERT into Simple Neural Networks .[J] arXiv preprint arXiv:1903.12136. 【code:goo.gl/Frmwqe】
- Shaokai Ye, Kaidi Xu, Sijia Liu, Hao Cheng, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin .Second Rethinking of Network Pruning in the Adversarial Setting .[J] arXiv preprint arXiv:1903.12561.
- Xijun Wang, Meina Kan, Shiguang Shan, Xilin Chen .Fully Learnable Group Convolution for Acceleration of Deep Neural Networks .[J] arXiv preprint arXiv:1904.00346.
- Peng Zhou, Long Mai, Jianming Zhang, Ning Xu, Zuxuan Wu, Larry S. Davis .M2KD: Multi-model and Multi-level Knowledge Distillation for Incremental Learning .[J] arXiv preprint arXiv:1904.01769.
- Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang .Correlation Congruence for Knowledge Distillation .[J] arXiv preprint arXiv:1904.01802.
- 【Distillation】Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, Jin Young Choi .A Comprehensive Overhaul of Feature Distillation .[J] arXiv preprint arXiv:1904.01866. 【code:byeongho-heo/overhaul】
- David Hartmann, Michael Wand .Progressive Stochastic Binarization of Deep Networks .[J] arXiv preprint arXiv:1904.02205. 【code:qubvel/classification_models】
- Yotam Gil, Yoav Chai, Or Gorodissky, Jonathan Berant .White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks .[J] arXiv preprint arXiv:1904.02405.
- Chaohui Yu, Jindong Wang, Yiqiang Chen, Zijing Wu .Accelerating Deep Unsupervised Domain Adaptation with Transfer Channel Pruning .[J] arXiv preprint arXiv:1904.02654.
- Miao Liu, Xin Chen, Yun Zhang, Yin Li, James M. Rehg .Paying More Attention to Motion: Attention Distillation for Learning Video Representations .[J] arXiv preprint arXiv:1904.03249.
- Chih-Yao Chiu, Hwann-Tzong Chen, Tyng-Luh Liu .C2S2: Cost-aware Channel Sparse Selection for Progressive Network Pruning .[J] arXiv preprint arXiv:1904.03508.
- Hiroki Tomoe, Tanaka Kanji .Long-Term Vehicle Localization by Recursive Knowledge Distillation .[J] arXiv preprint arXiv:1904.03551.
- 【Pruning】Xiaohan Ding, Guiguang Ding, Yuchen Guo, Jungong Han .Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure .[J] arXiv preprint arXiv:1904.03837. 【code:ShawnDing1994/Centripetal-SGD】
- Yang He, Ping Liu, Linchao Zhu, Yi Yang .Meta Filter Pruning to Accelerate Deep Convolutional Neural Networks .[J] arXiv preprint arXiv:1904.03961.
- Asaf Noy, Niv Nayman, Tal Ridnik, Nadav Zamir, Sivan Doveh, Itamar Friedman, Raja Giryes, Lihi Zelnik-Manor .ASAP: Architecture Search, Anneal and Prune .[J] arXiv preprint arXiv:1904.04123.
- Yangyang Shi, Mei-Yuh Hwang, Xin Lei, Haoyu Sheng .Knowledge Distillation For Recurrent Neural Network Language Modeling With Trust Regularization .[J] arXiv preprint arXiv:1904.04163.
- Rod Burns, John Lawson, Duncan McBain, Daniel Soutar .Accelerated Neural Networks on OpenCL Devices Using SYCL-DNN .[J] arXiv preprint arXiv:1904.04174.
- Kui Fu, Jia Li, Yafei Song, Yu Zhang, Shiming Ge, Yonghong Tian .Ultrafast Video Attention Prediction with Coupled Knowledge Distillation .[J] arXiv preprint arXiv:1904.04449.
- Vinh Tran, Yang Wang, Minh Hoai .Back to the Future: Knowledge Distillation for Human Action Anticipation .[J] arXiv preprint arXiv:1904.04868.
- Jia Li, Kui Fu, Shengwei Zhao, Shiming Ge .Spatiotemporal Knowledge Distillation for Efficient Estimation of Aerial Video Saliency .[J] arXiv preprint arXiv:1904.04992.
- 【Distillation】Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho .Relational Knowledge Distillation .[J] arXiv preprint arXiv:1904.05068.
- Shu Changyong, Li Peng, Xie Yuan, Qu Yanyun, Dai Longquan, Ma Lizhuang .Knowledge Squeezed Adversarial Network Compression .[J] arXiv preprint arXiv:1904.05100.
- Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D. Lawrence, Zhenwen Dai .Variational Information Distillation for Knowledge Transfer .[J] arXiv preprint arXiv:1904.05835.
- Jon Hoffman .Cramnet: Layer-wise Deep Neural Network Compression with Knowledge Transfer from a Teacher Network .[J] arXiv preprint arXiv:1904.05982.
- Bulat A, Kossaifi J, Tzimiropoulos G, et al. Matrix and tensor decompositions for training binary neural networks[J]. arXiv preprint arXiv:1904.07852, 2019.
- Arman Roohi, Shaahin Angizi, Deliang Fan, Ronald F DeMara .Processing-In-Memory Acceleration of Convolutional Neural Networks for Energy-Efficiency, and Power-Intermittency Resilience .[J] arXiv preprint arXiv:1904.07864.
- Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, Chengqing Zong .End-to-End Speech Translation with Knowledge Distillation .[J] arXiv preprint arXiv:1904.08075.
- Ji Lin, Chuang Gan, Song Han .Defensive Quantization: When Efficiency Meets Robustness .[J] arXiv preprint arXiv:1904.08444.
- Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak .Feature Fusion for Online Mutual Knowledge Distillation .[J] arXiv preprint arXiv:1904.09058.
- Xiao Jin, Baoyun Peng, Yichao Wu, Yu Liu, Jiaheng Liu, Ding Liang, Junjie Yan, Xiaolin Hu .Knowledge Distillation via Route Constrained Optimization .[J] arXiv preprint arXiv:1904.09149.
- Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao .Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding .[J] arXiv preprint arXiv:1904.09482.
- Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang .Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System .[J] arXiv preprint arXiv:1904.09636.
- Yochai Zur, Chaim Baskin, Evgenii Zheltonozhskii, Brian Chmiel, Itay Evron, Alex M. Bronstein, Avi Mendelson .Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks .[J] arXiv preprint arXiv:1904.09872.
- Jaedeok Kim, Chiyoun Park, Hyun-Joo Jung, Yoonsuck Choe .Differentiable Pruning Method for Neural Networks .[J] arXiv preprint arXiv:1904.10921.
- Daniel Alabi, Adam Tauman Kalai, Katrina Ligett, Cameron Musco, Christos Tzamos, Ellen Vitercik .Learning to Prune: Speeding up Repeated Computations .[J] arXiv preprint arXiv:1904.11875.
- Ting-Wu Chin, Ruizhou Ding, Cha Zhang, Diana Marculescu .LeGR: Filter Pruning via Learned Global Ranking .[J] arXiv preprint arXiv:1904.12368.
- Nathan Wycoff, Prasanna Balaprakash, Fangfang Xia .Neuromorphic Acceleration for Approximate Bayesian Inference on Neural Networks via Permanent Dropout .[J] arXiv preprint arXiv:1904.12904.
- Andrey Malinin, Bruno Mlodozeniec, Mark Gales .Ensemble Distribution Distillation .[J] arXiv preprint arXiv:1905.00076.
- Xiaolong Ma, Geng Yuan, Sheng Lin, Zhengang Li, Hao Sun, Yanzhi Wang .ResNet Can Be Pruned 60x: Introducing Network Purification and Unused Path Removal (P-RM) after Weight Pruning .[J] arXiv preprint arXiv:1905.00136.
- Bradley McDanel, Sai Qian Zhang, H. T. Kung, Xin Dong .Full-stack Optimization for Accelerating CNNs with FPGA Validation .[J] arXiv preprint arXiv:1905.00462.
- Bowen Shi, Ming Sun, Chieh-Chi Kao, Viktor Rozgic, Spyros Matsoukas, Chao Wang .Compression of Acoustic Event Detection Models with Low-rank Matrix Factorization and Quantization Training .[J] arXiv preprint arXiv:1905.00855.
- Yiwu Yao, Weiqiang Yang, Haoqi Zhu .Creating Lightweight Object Detectors with Model Compression for Deployment on Edge Devices .[J] arXiv preprint arXiv:1905.01787.
- 【Structured】【mobilenetv3】Howard A, Sandler M, Chu G, et al. Searching for mobilenetv3[J]. arXiv preprint arXiv:1905.02244, 2019.
- Bin Yang, Lin Yang, Xiaochun Li, Wenhan Zhang, Hua Zhou, Yequn Zhang, Yongxiong Ren, Yinbo Shi .2-bit Model Compression of Deep Convolutional Neural Network on ASIC Engine for Image Retrieval .[J] arXiv preprint arXiv:1905.03362.
- Jiong Zhang, Hsiang-fu Yu, Inderjit S. Dhillon .AutoAssist: A Framework to Accelerate Training of Deep Neural Networks .[J] arXiv preprint arXiv:1905.03381.
- Gael Kamdem De Teyou .Deep Learning Acceleration Techniques for Real Time Mobile Vision Applications .[J] arXiv preprint arXiv:1905.03418.
- Zhen Dong, Zhewei Yao, Amir Gholami, Michael Mahoney, Kurt Keutzer .HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision .[J] arXiv preprint arXiv:1905.03696.
- Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri .Play and Prune: Adaptive Filter Pruning for Deep Model Compression .[J] arXiv preprint arXiv:1905.04446.
- Yushu Feng, Huan Wang, Daniel T. Yi, Roland Hu .Triplet Distillation for Deep Face Recognition .[J] arXiv preprint arXiv:1905.04457.
- 【Pruning】Xiaohan Ding, Guiguang Ding, Yuchen Guo, Jungong Han, Chenggang Yan .Approximated Oracle Filter Pruning for Destructive CNN Width Optimization .[J] arXiv preprint arXiv:1905.04748.
- Sara Elkerdawy, Hong Zhang, Nilanjan Ray .Lightweight Monocular Depth Estimation Model by Joint End-to-End Filter pruning .[J] arXiv preprint arXiv:1905.05212.
- Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Parichay Kapoor, Gu-Yeon Wei .Network Pruning for Low-Rank Binary Indexing .[J] arXiv preprint arXiv:1905.05686.
- Youhei Akimoto, Nikolaus Hansen .Diagonal Acceleration for Covariance Matrix Adaptation Evolution Strategies .[J] arXiv preprint arXiv:1905.05885.
- 【Pruning】Chaoqi Wang, Roger Grosse, Sanja Fidler, Guodong Zhang .EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis .[J] arXiv preprint arXiv:1905.05934. 【code:alecwangcq/EigenDamage-Pytorch】
- Corey Lammie, Wei Xiang, Mostafa Rahimi Azghadi .Accelerating Deterministic and Stochastic Binarized Neural Networks on FPGAs Using OpenCL .[J] arXiv preprint arXiv:1905.06105.
- Chengcheng Li, Zi Wang, Dali Wang, Xiangyang Wang, Hairong Qi .Investigating Channel Pruning through Structural Redundancy Reduction - A Statistical Study .[J] arXiv preprint arXiv:1905.06498.
- Kartikeya Bhardwaj, Naveen Suda, Radu Marculescu .Dream Distillation: A Data-Independent Model Compression Framework .[J] arXiv preprint arXiv:1905.07072.
- Francesco Sovrano .Combining Experience Replay with Exploration by Random Network Distillation .[J] arXiv preprint arXiv:1905.07579.
- Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma .Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation .[J] arXiv preprint arXiv:1905.08094.
- Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty .Zero-Shot Knowledge Distillation in Deep Networks .[J] arXiv preprint arXiv:1905.08114.
- Shashank Singh, Ashish Khetan, Zohar Karnin .DARC: Differentiable ARchitecture Compression .[J] arXiv preprint arXiv:1905.08170.
- Simon Wiedemann, Heiner Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marban, Talmaj Marinc, David Neumann, Ahmed Osman, Detlev Marpe, Heiko Schwarz, Thomas Wiegand, Wojciech Samek .DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression .[J] arXiv preprint arXiv:1905.08318.
- Kameron Decker Harris, Aleksandr Aravkin, Rajesh Rao, Bingni Wen Brunton .Time-varying Autoregression with Low Rank Tensors .[J] arXiv preprint arXiv:1905.08389.
- Konstantinos Pitas, Mike Davies, Pierre Vandergheynst .Revisiting hard thresholding for DNN pruning .[J] arXiv preprint arXiv:1905.08793.
- Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, Ivan Titov .Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned .[J] arXiv preprint arXiv:1905.09418.
- Xiaoxi He, Dawei Gao, Zimu Zhou, Yongxin Tong, Lothar Thiele .Disentangling Redundancy for Multi-Task Pruning .[J] arXiv preprint arXiv:1905.09676.
- Xuanyi Dong, Yi Yang .Network Pruning via Transformable Architecture Search .[J] arXiv preprint arXiv:1905.09717.
- Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein .Adversarially Robust Distillation .[J] arXiv preprint arXiv:1905.09747.
- Se Jung Kwon, Dongsoo Lee, Byeongwook Kim, Parichay Kapoor, Baeseong Park, Gu-Yeon Wei .Structured Compression by Unstructured Pruning for Sparse Quantized Neural Networks .[J] arXiv preprint arXiv:1905.10138.
- Alberto Marchisio, Beatrice Bussolino, Alessio Colucci, Muhammad Abdullah Hanif, Maurizio Martina, Guido Masera, Muhammad Shafique .X-TrainCaps: Accelerated Training of Capsule Nets through Lightweight Software Optimizations .[J] arXiv preprint arXiv:1905.10142.
- Yash Akhauri .HadaNets: Flexible Quantization Strategies for Neural Networks .[J] arXiv preprint arXiv:1905.10759.
- Hanyang Kong, Jian Zhao, Xiaoguang Tu, Junliang Xing, Shengmei Shen, Jiashi Feng .Cross-Resolution Face Recognition via Prior-Aided Face Hallucination and Residual Knowledge Distillation .[J] arXiv preprint arXiv:1905.10777.
- Xiaoliang Dai, Hongxu Yin, Niraj K. Jha .Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks .[J] arXiv preprint arXiv:1905.10952.
- Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtarik .Natural Compression for Distributed Deep Learning .[J] arXiv preprint arXiv:1905.10988.
- Hanwei Wu, Ather Gattami, Markus Flierl .Quantization-Based Regularization for Autoencoders .[J] arXiv preprint arXiv:1905.11062.
- Stefan Uhlich, Lukas Mauch, Kazuki Yoshiyama, Fabien Cardinaux, Javier Alonso Garcia, Stephen Tiedemann, Thomas Kemp, Akira Nakamura .Differentiable Quantization of Deep Neural Networks .[J] arXiv preprint arXiv:1905.11452.
- Annie Cherkaev, Waiming Tai, Jeff Phillips, Vivek Srikumar .Learning In Practice: Reasoning About Quantization .[J] arXiv preprint arXiv:1905.11478.
- Xiaocong Du, Zheng Li, Yu Cao .CGaP: Continuous Growth and Pruning for Efficient Deep Learning .[J] arXiv preprint arXiv:1905.11533.
- Ankit Jalan, Purushottam Kar .Accelerating Extreme Classification via Adaptive Feature Agglomeration .[J] arXiv preprint arXiv:1905.11769.
- Zhengguang Zhou, Wengang Zhou, Richang Hong, Houqiang Li .Online Filter Clustering and Pruning for Efficient Convnets .[J] arXiv preprint arXiv:1905.11787.
- Gonçalo Mordido, Matthijs Van Keirsbilck, Alexander Keller .Instant Quantization of Neural Networks using Monte Carlo Methods .[J] arXiv preprint arXiv:1905.12253.
- Ghouthi Boukli Hacene (IMT Atlantique - ELEC), Carlos Lassance, Vincent Gripon (IMT Atlantique - ELEC), Matthieu Courbariaux, Yoshua Bengio (DIRO) .Attention Based Pruning for Shift Networks .[J] arXiv preprint arXiv:1905.12300.
- Manuele Rusci, Alessandro Capotondi, Luca Benini .Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers .[J] arXiv preprint arXiv:1905.13082.
- Xiawu Zheng, Rongrong Ji, Lang Tang, Yan Wan, Baochang Zhang, Yongjian Wu, Yunsheng Wu, Ling Shao .Dynamic Distribution Pruning for Efficient Network Architecture Search .[J] arXiv preprint arXiv:1905.13543.
- Kunping Li .Quantization Loss Re-Learning Method .[J] arXiv preprint arXiv:1905.13568.
- S. Asim Ahmed .L0 Regularization Based Neural Network Design and Compression .[J] arXiv preprint arXiv:1905.13652.
- Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi .PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization .[J] arXiv preprint arXiv:1905.13727.
- Bonggun Shin, Hao Yang, Jinho D. Choi .The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning .[J] arXiv preprint arXiv:1906.00095.
- Chuanguang Yang, Zhulin An, Chao Li, Boyu Diao, Yongjun Xu .Multi-objective Pruning for CNNs using Genetic Algorithm .[J] arXiv preprint arXiv:1906.00399.
- Stefano Recanatesi, Matthew Farrell, Madhu Advani, Timothy Moore, Guillaume Lajoie, Eric Shea-Brown .Dimensionality compression and expansion in Deep Neural Networks .[J] arXiv preprint arXiv:1906.00443.
- Aishwarya Bhandare, Vamsi Sripathi, Deepthi Karkada, Vivek Menon, Sun Choi, Kushal Datta, Vikram Saletore .Efficient 8-Bit Quantization of Transformer Neural Machine Language Translation Model .[J] arXiv preprint arXiv:1906.00532.
- 【Distillation】Jayashree Karlekar, Jiashi Feng, Zi Sian Wong, Sugiri Pranata .Deep Face Recognition Model Compression via Knowledge Transfer and Distillation .[J] arXiv preprint arXiv:1906.00619.
- Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry .A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off .[J] arXiv preprint arXiv:1906.00771.
- Jyun-Yi Wu, Cheng Yu, Szu-Wei Fu, Chih-Ting Liu, Shao-Yi Chien, Yu Tsao .Increasing Compactness Of Deep Learning Based Speech Enhancement Models With Parameter Pruning And Quantization Techniques .[J] arXiv preprint arXiv:1906.01078.
- Marc Riera, Jose-Maria Arnau, Antonio Gonzalez .(Pen-) Ultimate DNN Pruning .[J] arXiv preprint arXiv:1906.02535.
- Alexander Finkelstein, Uri Almog, Mark Grobman .Fighting Quantization Bias With Bias .[J] arXiv preprint arXiv:1906.03193.
- Waldyn Martinez .Ensemble Pruning via Margin Maximization .[J] arXiv preprint arXiv:1906.03247.
- Tao Wang, Li Yuan, Xiaopeng Zhang, Jiashi Feng .Distilling Object Detectors with Fine-grained Feature Imitation .[J] arXiv preprint arXiv:1906.03609.
- Brian R. Bartoldson, Ari S. Morcos, Adrian Barbu, Gordon Erlebacher .The Generalization-Stability Tradeoff in Neural Network Pruning .[J] arXiv preprint arXiv:1906.03728.
- Yasutoshi Ida, Yasuhiro Fujiwara .Network Implosion: Effective Model Compression for ResNets via Static Layer Pruning and Retraining .[J] arXiv preprint arXiv:1906.03826.
- Jack Turner, Elliot J. Crowley, Gavin Gray, Amos Storkey, Michael O'Boyle .BlockSwap: Fisher-guided Block Substitution for Network Compression .[J] arXiv preprint arXiv:1906.04113.
- Kaveena Persand, Andrew Anderson, David Gregg .A Taxonomy of Channel Pruning Signals in CNNs .[J] arXiv preprint arXiv:1906.04675.
- Markus Nagel, Mart van Baalen, Tijmen Blankevoort, Max Welling .Data-Free Quantization through Weight Equalization and Bias Correction .[J] arXiv preprint arXiv:1906.04721.
- Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina .Run-Time Efficient RNN Compression for Inference on Edge Devices .[J] arXiv preprint arXiv:1906.04886.
- Sam Shleifer, Eric Prokop .Using Small Proxy Datasets to Accelerate Hyperparameter Search .[J] arXiv preprint arXiv:1906.04887.
- Guenther Schindler, Wolfgang Roth, Franz Pernkopf, Holger Froening .Parameterized Structured Pruning for Deep Neural Networks .[J] arXiv preprint arXiv:1906.05180.
- Jian-Feng Cai, Lizhang Miao, Yang Wang, Yin Xian .Optimal low rank tensor recovery .[J] arXiv preprint arXiv:1906.05346.
- Erik Englesson, Hossein Azizpour .Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation .[J] arXiv preprint arXiv:1906.05419.
- Arip Asadulaev, Igor Kuznetsov, Andrey Filchenkov .Linear Distillation Learning .[J] arXiv preprint arXiv:1906.05431.
- Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, Philip H. S. Torr .A Signal Propagation Perspective for Pruning Neural Networks at Initialization .[J] arXiv preprint arXiv:1906.06307.
- Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, Phil Blunsom .Scalable Syntax-Aware Language Models Using Knowledge Distillation .[J] arXiv preprint arXiv:1906.06438.
- Deniz Oktay, Johannes Ballé, Saurabh Singh, Abhinav Shrivastava .Model Compression by Entropy Penalized Reparameterization .[J] arXiv preprint arXiv:1906.06624.
- Jingkuan Song, Xiaosu Zhu, Lianli Gao, Xin-Shun Xu, Wu Liu, Heng Tao Shen .Deep Recurrent Quantization for Generating Sequential Binary Codes .[J] arXiv preprint arXiv:1906.06699.
- Liangjiang Wen, Xueyang Zhang, Haoli Bai, Zenglin Xu .Structured Pruning of Recurrent Neural Networks through Neuron Selection .[J] arXiv preprint arXiv:1906.06847.
- Dong Wang, Lei Zhou, Xiao Bai, Jun Zhou .A One-step Pruning-recovery Framework for Acceleration of Convolutional Neural Networks .[J] arXiv preprint arXiv:1906.07488.
- Kevin Alexander Laube, Andreas Zell .Prune and Replace NAS .[J] arXiv preprint arXiv:1906.07528.
- Qing Yang, Wei Wen, Zuoguan Wang, Hai Li .Joint Pruning on Activations and Weights for Efficient Neural Networks .[J] arXiv preprint arXiv:1906.07875.
- Zhuo Chen, Jiyuan Zhang, Ruizhou Ding, Diana Marculescu .ViP: Virtual Pooling for Accelerating CNN-based Image Classification and Object Detection .[J] arXiv preprint arXiv:1906.07912.
- Maryam Parsa, Aayush Ankit, Amirkoushyar Ziabari, Kaushik Roy .PABO: Pseudo Agent-Based Multi-Objective Bayesian Hyperparameter Optimization for Efficient Neural Accelerator Design .[J] arXiv preprint arXiv:1906.08167.
- Wei Hong, Jinke Yu Fan Zong .GAN-Knowledge Distillation for one-stage Object Detection .[J] arXiv preprint arXiv:1906.08467.
- Bethge J, Yang H, Bornstein M, et al. Back to Simplicity: How to Train Accurate BNNs from Scratch?[J]. arXiv preprint arXiv:1906.08637, 2019.
- Le Thanh Nguyen-Meidine, Eric Granger, Madhu Kiran, Louis-Antoine Blais-Morin .An Improved Trade-off Between Accuracy and Complexity with Progressive Gradient Pruning .[J] arXiv preprint arXiv:1906.08746.
- Wenxiao Wang, Cong Fu, Jishun Guo, Deng Cai, Xiaofei He .COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning .[J] arXiv preprint arXiv:1906.10337.
- 【Pruning】Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, Jan Kautz .Importance Estimation for Neural Network Pruning .[J] arXiv preprint arXiv:1906.10771. 【code:NVlabs/Taylor_pruning】
- Zhenchuan Yang, Chun Zhang, Weibin Zhang, Jianxiu Jin, Dongpeng Chen .Essence Knowledge Distillation for Speech Recognition .[J] arXiv preprint arXiv:1906.10834.
- Linguang Zhang, Maciej Halber, Szymon Rusinkiewicz .Accelerating Large-Kernel Convolution Using Summed-Area Tables .[J] arXiv preprint arXiv:1906.11367.
- Jonathan Frankle, David Bau .Dissecting Pruned Neural Networks .[J] arXiv preprint arXiv:1907.00262.
- Wen-Pu Cai, Wu-Jun Li .Weight Normalization based Quantization for Deep Neural Network Compression .[J] arXiv preprint arXiv:1907.00593.
- Bowen Shi, Ming Sun, Chieh-Chi Kao, Viktor Rozgic, Spyros Matsoukas, Chao Wang .Compression of Acoustic Event Detection Models With Quantized Distillation .[J] arXiv preprint arXiv:1907.00873.
- Kaijie Tu .Accelerating Deconvolution on Unmodified CNN Accelerators for Generative Adversarial Networks -- A Software Approach .[J] arXiv preprint arXiv:1907.01773.
- Yanzhi Wang, Shaokai Ye, Zhezhi He, Xiaolong Ma, Linfeng Zhang, Sheng Lin, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, Xue Lin, Kaisheng Ma .Non-structured DNN Weight Pruning Considered Harmful .[J] arXiv preprint arXiv:1907.02124.
- 【Distillation】Seunghyun Lee, Byung Cheol Song .Graph-based Knowledge Distillation by Multi-head Attention Network .[J] arXiv preprint arXiv:1907.02226.
- Hugo Masson, Amran Bhuiyan, Le Thanh Nguyen-Meidine, Mehrsan Javan, Parthipan Siva, Ismail Ben Ayed, Eric Granger .A Survey of Pruning Methods for Efficient Person Re-identification Across Domains .[J] arXiv preprint arXiv:1907.02547.
- Xiaopeng Sun, Wen Lu, Rui Wang, Furui Bai .Distilling with Residual Network for Single Image Super Resolution .[J] arXiv preprint arXiv:1907.02843.
- Ning Liu, Xiaolong Ma, Zhiyuan Xu, Yanzhi Wang, Jian Tang, Jieping Ye .AutoSlim: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates .[J] arXiv preprint arXiv:1907.03141.
- Łukasz Dudziak, Mohamed S. Abdelfattah, Ravichander Vipperla, Stefanos Laskaridis, Nicholas D. Lane .ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning .[J] arXiv preprint arXiv:1907.03540.
- Ben Mussay, Samson Zhou, Vladimir Braverman, Dan Feldman .On Activation Function Coresets for Network Pruning .[J] arXiv preprint arXiv:1907.04018.
- Biao Qian, Yang Wang .A Targeted Acceleration and Compression Framework for Low bit Neural Networks .[J] arXiv preprint arXiv:1907.05271.
- Daquan Zhou, Xiaojie Jin, Kaixin Wang, Jianchao Yang, Jiashi Feng .Deep Model Compression via Filter Auto-sampling .[J] arXiv preprint arXiv:1907.05642.
- 【Quantization】Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou .And the Bit Goes Down: Revisiting the Quantization of Neural Networks .[J] arXiv preprint arXiv:1907.05686. 【code:facebookresearch/kill-the-bits】
- Kang-Ho Lee, JoonHyun Jeong, Sung-Ho Bae .An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on a Smoothly Varying Weight Hypothesis .[J] arXiv preprint arXiv:1907.06835.
- Zhenhui Xu, Guolin Ke, Jia Zhang, Jiang Bian, Tie-Yan Liu .Light Multi-segment Activation for Model Compression .[J] arXiv preprint arXiv:1907.06870.
- Vojtech Mrazek, Zdenek Vasicek, Lukas Sekanina, Muhammad Abdullah Hanif, Muhammad Shafique .ALWANN: Automatic Layer-Wise Approximation of Deep Neural Network Accelerators without Retraining .[J] arXiv preprint arXiv:1907.07229.
- Besher Alhalabi, Mohamed Medhat Gaber, Shadi Basurra .EnSyth: A Pruning Approach to Synthesis of Deep Learning Ensembles .[J] arXiv preprint arXiv:1907.09286.
- Haoran Zhao, Xin Sun, Junyu Dong, Changrui Chen, Zihe Dong .Highlight Every Step: Knowledge Distillation via Collaborative Teaching .[J] arXiv preprint arXiv:1907.09643.
- Frederick Tung, Greg Mori .Similarity-Preserving Knowledge Distillation .[J] arXiv preprint arXiv:1907.09682.
- Shivangi Srivastava, Maxim Berman, Matthew B. Blaschko, Devis Tuia .Adaptive Compression-based Lifelong Learning .[J] arXiv preprint arXiv:1907.09695.
- Ning Wang, Wengang Zhou, Yibing Song, Chao Ma, Houqiang Li .Real-Time Correlation Tracking via Joint Model Compression and Transfer .[J] arXiv preprint arXiv:1907.09831.
- Yuanpei Liu, Xingping Dong, Wenguan Wang, Jianbing Shen .Teacher-Students Knowledge Distillation for Siamese Trackers .[J] arXiv preprint arXiv:1907.10586.
- Kartikeya Bhardwaj, Chingyi Lin, Anderson Sartor, Radu Marculescu .Memory- and Communication-Aware Model Compression for Distributed Deep Learning Inference on IoT .[J] arXiv preprint arXiv:1907.11804.
- Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu .Learning Instance-wise Sparsity for Accelerating Deep Models .[J] arXiv preprint arXiv:1907.11840.
- Simon Wiedemann, Heiner Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marban, Talmaj Marinc, David Neumann, Tung Nguyen, Ahmed Osman, Detlev Marpe, Heiko Schwarz, Thomas Wiegand, Wojciech Samek .DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks .[J] arXiv preprint arXiv:1907.11900.
- Jiajia Guo, Jinghe Wang, Chao-Kai Wen, Shi Jin, Geoffrey Ye Li .Compression and Acceleration of Neural Networks for Communications .[J] arXiv preprint arXiv:1907.13269.
- Xucheng Ye, Jianlei Yang, Pengcheng Dai, Yiran Chen, Weisheng Zhao .Accelerating CNN Training by Sparsifying Activation Gradients .[J] arXiv preprint arXiv:1908.00173.
- 【Distillation】Yuenan Hou, Zheng Ma, Chunxiao Liu, Chen Change Loy .Learning Lightweight Lane Detection CNNs by Self Attention Distillation .[J] arXiv preprint arXiv:1908.00821.
- Muhamad Risqi U. Saputra, Pedro P. B. de Gusmao, Yasin Almalioglu, Andrew Markham, Niki Trigoni .Distilling Knowledge From a Deep Pose Regressor Network .[J] arXiv preprint arXiv:1908.00858.
- MohammadHossein AskariHemmat, Sina Honari, Lucas Rouhier, Christian S. Perone, Julien Cohen-Adad, Yvon Savaria, Jean-Pierre David .U-Net Fixed-Point Quantization for Medical Image Segmentation .[J] arXiv preprint arXiv:1908.01073.
- Haibao Yu, Tuopu Wen, Guangliang Cheng, Jiankai Sun, Qi Han, Jianping Shi .GDRQ: Group-based Distribution Reshaping for Quantization .[J] arXiv preprint arXiv:1908.01477.
- Wei-Ting Wang, Han-Lin Li, Wei-Shiang Lin, Cheng-Ming Chiang, Yi-Min Tsai .Architecture-aware Network Pruning for Vision Quality Applications .[J] arXiv preprint arXiv:1908.02125.
- Yunxiang Zhang, Chenglong Zhao, Bingbing Ni, Jian Zhang, Haoran Deng .Exploiting Channel Similarity for Accelerating Deep Convolutional Neural Networks .[J] arXiv preprint arXiv:1908.02620.
- Boyu Zhang, Azadeh Davoodi, Yu Hen Hu .Efficient Inference of CNNs via Channel Pruning .[J] arXiv preprint arXiv:1908.03266.
- Pierre Humbert (CMLA), Julien Audiffren (CMLA), Laurent Oudre (L2TI), Nicolas Vayatis (CMLA) .Multivariate Convolutional Sparse Coding with Low Rank Tensor .[J] arXiv preprint arXiv:1908.03367.
- Chaithanya Kumar Mummadi, Tim Genewein, Dan Zhang, Thomas Brox, Volker Fischer .Group Pruning using a Bounded-Lp norm for Group Gating and Regularization .[J] arXiv preprint arXiv:1908.03463.
- Stanislav Morozov, Artem Babenko .Unsupervised Neural Quantization for Compressed-Domain Similarity Search .[J] arXiv preprint arXiv:1908.03883.
- Jogendra Nath Kundu, Nishank Lakkakula, R. Venkatesh Babu .UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation .[J] arXiv preprint arXiv:1908.03884.
- Divyam Madaan, Sung Ju Hwang .Adversarial Neural Pruning .[J] arXiv preprint arXiv:1908.04355.
- Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, Junjie Yan .Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks .[J] arXiv preprint arXiv:1908.05033.
- Oren Barkan, Noam Razin, Itzik Malkiel, Ori Katz, Avi Caciularu, Noam Koenigstein .Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding .[J] arXiv preprint arXiv:1908.05161.
- Ziheng Wang, Sree Harsha Nelaturu .Accelerated CNN Training Through Gradient Approximation .[J] arXiv preprint arXiv:1908.05460.
- Chaoyang Wang, Chen Kong, Simon Lucey .Distill Knowledge from NRSfM for Weakly Supervised 3D Pose Learning .[J] arXiv preprint arXiv:1908.06377.
- Shreyas Kolala Venkataramanaiah, Yufei Ma, Shihui Yin, Eriko Nurvithadhi, Aravind Dasu, Yu Cao, Jae-sun Seo .Automatic Compiler Based FPGA Accelerator for CNN Training .[J] arXiv preprint arXiv:1908.06724.
- Yasuo Yamane, Kenichi Kobayashi .A New Fast Weighted All-pairs Shortest Path Search Algorithm Based on Pruning by Shortest Path Trees .[J] arXiv preprint arXiv:1908.06798.
- Yasuo Yamane, Kenichi Kobayashi .A New Fast Unweighted All-pairs Shortest Path Search Algorithm Based on Pruning by Shortest Path Trees .[J] arXiv preprint arXiv:1908.06806.
- Mauricio Orbes-Arteaga, Jorge Cardoso, Lauge Sørensen, Christian Igel, Sebastien Ourselin, Marc Modat, Mads Nielsen, Akshay Pai .Knowledge distillation for semi-supervised domain adaptation .[J] arXiv preprint arXiv:1908.07355.
- Zhiqiang Shen, Zhankui He, Wanyun Cui, Jiahui Yu, Yutong Zheng, Chenchen Zhu, Marios Savvides .Adversarial-Based Knowledge Distillation for Multi-Model Ensemble and Noisy Data Refinement .[J] arXiv preprint arXiv:1908.08520.
- Sunwoo Kim, Mrinmoy Maity, Minje Kim .Incremental Binarization On Recurrent Neural Networks For Single-Channel Source Separation .[J] arXiv preprint arXiv:1908.08898.
- Yawei Li, Shuhang Gu, Luc Van Gool, Radu Timofte .Learning Filter Basis for Convolutional Neural Network Compression .[J] arXiv preprint arXiv:1908.08932.
- Udit Gupta, Brandon Reagen, Lillian Pentecost, Marco Donato, Thierry Tambe, Alexander M. Rush, Gu-Yeon Wei, David Brooks .MASR: A Modular Accelerator for Sparse RNNs .[J] arXiv preprint arXiv:1908.08976.
- Xuecheng Nie, Yuncheng Li, Linjie Luo, Ning Zhang, Jiashi Feng .Dynamic Kernel Distillation for Efficient Pose Estimation in Videos .[J] arXiv preprint arXiv:1908.09216.
- Jiajun Deng, Yingwei Pan, Ting Yao, Wengang Zhou, Houqiang Li, Tao Mei .Relation Distillation Networks for Video Object Detection .[J] arXiv preprint arXiv:1908.09511.
- Ting Chen, Yizhou Sun .Differentiable Product Quantization for End-to-End Embedding Compression .[J] arXiv preprint arXiv:1908.09756.
- Xiaolong Ma, Geng Yuan, Sheng Lin, Caiwen Ding, Fuxun Yu, Tao Liu, Wujie Wen, Xiang Chen, Yanzhi Wang .Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation .[J] arXiv preprint arXiv:1908.10017.
- Saurabh Kumar, Biplab Banerjee, Subhasis Chaudhuri .Online Sensor Hallucination via Knowledge Distillation for Multimodal Image Classification .[J] arXiv preprint arXiv:1908.10559.
- Tong Geng, Ang Li, Tianqi Wang, Chunshu Wu, Yanfei Li, Antonino Tumeo, Martin Herbordt .UWB-GCN: Hardware Acceleration of Graph-Convolution-Network through Runtime Workload Rebalancing .[J] arXiv preprint arXiv:1908.10834.
- Angelo Garofalo, Manuele Rusci, Francesco Conti, Davide Rossi, Luca Benini .PULP-NN: Accelerating Quantized Neural Networks on Parallel Ultra-Low-Power RISC-V Processors .[J] arXiv preprint arXiv:1908.11263.
- Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner .Survey and Benchmarking of Machine Learning Accelerators .[J] arXiv preprint arXiv:1908.11348.
- Lukas Cavigelli, Georg Rutishauser, Luca Benini .EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators .[J] arXiv preprint arXiv:1908.11645.
- Geng Yuan, Xiaolong Ma, Caiwen Ding, Sheng Lin, Tianyun Zhang, Zeinab S. Jalali, Yilong Zhao, Li Jiang, Sucheta Soundarajan, Yanzhi Wang .An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM .[J] arXiv preprint arXiv:1908.11691.
- Yuke Wang, Boyuan Feng, Gushu Li, Lei Deng, Yuan Xie, Yufei Ding .AccD: A Compiler-based Framework for Accelerating Distance-related Algorithms on CPU-FPGA Platforms .[J] arXiv preprint arXiv:1908.11781.
- Sudarshan Srinivasan, Pradeep Janedula, Saurabh Dhoble, Sasikanth Avancha, Dipankar Das, Naveen Mellempudi, Bharat Daga, Martin Langhammer, Gregg Baeckler, Bharat Kaul .High Performance Scalable FPGA Accelerator for Deep Neural Networks .[J] arXiv preprint arXiv:1908.11809.
- Amey Agrawal, Rohit Karlupia .Learning Digital Circuits: A Journey Through Weight Invariant Self-Pruning Neural Networks .[J] arXiv preprint arXiv:1909.00052.
- Lei He .EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks .[J] arXiv preprint arXiv:1909.00155.
- Ye Yu, Niraj K. Jha .SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference .[J] arXiv preprint arXiv:1909.00557.
- Bharti Munjal, Fabio Galasso, Sikandar Amin .Knowledge Distillation for End-to-End Person Search .[J] arXiv preprint arXiv:1909.01058.
- Yang Li, Thomas Strohmer .What Happens on the Edge, Stays on the Edge: Toward Compressive Deep Learning .[J] arXiv preprint arXiv:1909.01539.
- Sungho Shin, Yoonho Boo, Wonyong Sung .Empirical Analysis of Knowledge Distillation Technique for Optimization of Quantized Deep Neural Networks .[J] arXiv preprint arXiv:1909.01688.
- Chengyi Wang, Shuangzhi Wu, Shujie Liu .Accelerating Transformer Decoding via a Hybrid of Self-attention and Recurrent Neural Network .[J] arXiv preprint arXiv:1909.02279.
- Yew Ken Chia, Sam Witteveen, Martin Andrews .Transformer to CNN: Label-scarce distillation for efficient text classification .[J] arXiv preprint arXiv:1909.03508.
- Wenming Yang, Xuechen Zhang, Yapeng Tian, Wei Wang, Jing-Hao Xue, Qingmin Liao .LCSCNet: Linear Compressing Based Skip-Connecting Network for Image Super-Resolution .[J] arXiv preprint arXiv:1909.03573. 【code:XuechenZhang123/LCSC】
- Shuang Gao, Xin Liu, Lung-Sheng Chien, William Zhang, Jose M. Alvarez .VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep Residual Networks .[J] arXiv preprint arXiv:1909.04485.
- Ramchalam Kinattinkara Ramakrishnan, Eyyüb Sari, Vahid Partovi Nia .Differentiable Mask Pruning for Neural Networks .[J] arXiv preprint arXiv:1909.04567.
- Zhaoyang Zeng, Bei Liu, Jianlong Fu, Hongyang Chao, Lei Zhang .WSOD^2: Learning Bottom-up and Top-down Objectness Distillation for Weakly-supervised Object Detection .[J] arXiv preprint arXiv:1909.04972.
- Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang .PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices .[J] arXiv preprint arXiv:1909.05073.
- Jiancheng Lyu, Spencer Sheen .A Channel-Pruned and Weight-Binarized Convolutional Neural Network for Keyword Spotting .[J] arXiv preprint arXiv:1909.05623.
- Mostafa Elhoushi, Ye Henry Tian, Zihao Chen, Farhan Shafiq, Joey Yiwei Li .Accelerating Training using Tensor Decomposition .[J] arXiv preprint arXiv:1909.05675.
- Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer .Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT .[J] arXiv preprint arXiv:1909.05840.
- Shubham Jain, Sumeet Kumar Gupta, Anand Raghunathan .TiM-DNN: Ternary in-Memory accelerator for Deep Neural Networks .[J] arXiv preprint arXiv:1909.06892.
- Hyoukjun Kwon, Liangzhen Lai, Tushar Krishna, Vikas Chandra .HERALD: Optimizing Heterogeneous DNN Accelerators for Edge Devices .[J] arXiv preprint arXiv:1909.07437.
- Xiaoyu Yu, Yuwei Wang, Jie Miao, Ephrem Wu, Heng Zhang, Yu Meng, Bo Zhang, Biao Min, Dewei Chen, Jianlin Gao .A Data-Center FPGA Acceleration Platform for Convolutional Neural Networks .[J] arXiv preprint arXiv:1909.07973.
- Umar Asif, Jianbin Tang, Stefan Harrer .Ensemble Knowledge Distillation for Learning Improved and Efficient Networks .[J] arXiv preprint arXiv:1909.08097.
- Zhonghui You, Kun Yan, Jinmian Ye, Meng Ma, Ping Wang .Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks .[J] arXiv preprint arXiv:1909.08174. 【code:youzhonghui/gate-decorator-pruning】
- Rui Chen, Haizhou Ai, Chong Shang, Long Chen, Zijie Zhuang .Learning Lightweight Pedestrian Detector with Hierarchical Knowledge Distillation .[J] arXiv preprint arXiv:1909.09325.
- Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu .TinyBERT: Distilling BERT for Natural Language Understanding .[J] arXiv preprint arXiv:1909.10351.
- Rahim Entezari, Olga Saukh .Class-dependent Compression of Deep Neural Networks .[J] arXiv preprint arXiv:1909.10364.
- SeongUk Park, Nojun Kwak .FEED: Feature-level Ensemble for Knowledge Distillation .[J] arXiv preprint arXiv:1909.10754.
- Taiji Suzuki .Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network .[J] arXiv preprint arXiv:1909.11274.
- Chun Quan, Jun-Gi Jang, Hyun Dong Lee, U Kang .FALCON: Fast and Lightweight Convolution for Compressing and Accelerating CNN .[J] arXiv preprint arXiv:1909.11321.
- Zhe Xu, Ray C. C. Cheung .Accurate and Compact Convolutional Neural Networks with Trained Binarization .[J] arXiv preprint arXiv:1909.11366.
- Li Yuan, Francis E.H.Tay, Guilin Li, Tao Wang, Jiashi Feng .Revisit Knowledge Distillation: a Teacher-free Framework .[J] arXiv preprint arXiv:1909.11723. 【code:yuanli2333/Teacher-free-Knowledge-Distillation】
- Zheng Hui, Xinbo Gao, Yunchu Yang, Xiumei Wang .Lightweight Image Super-Resolution with Information Multi-distillation Network .[J] arXiv preprint arXiv:1909.11856.
- Grégoire Morin, Ryan Razani, Vahid Partovi Nia, Eyyüb Sari .Smart Ternary Quantization .[J] arXiv preprint arXiv:1909.12205.
- Yuang Jiang, Shiqiang Wang, Bong Jun Ko, Wei-Han Lee, Leandros Tassiulas .Model Pruning Enables Efficient Federated Learning on Edge Devices .[J] arXiv preprint arXiv:1909.12326.
- Yulong Wang, Xiaolu Zhang, Lingxi Xie, Jun Zhou, Hang Su, Bo Zhang, Xiaolin Hu .Pruning from Scratch .[J] arXiv preprint arXiv:1909.12579.
- Xiaohan Ding, Guiguang Ding, Xiangxin Zhou, Yuchen Guo, Ji Liu, Jungong Han .Global Sparse Momentum SGD for Pruning Very Deep Neural Networks .[J] arXiv preprint arXiv:1909.12778. 【code:DingXiaoH/GSM-SGD】
- Jiao Xie, Shaohui Lin, Yichen Zhang, Linkai Luo .Training convolutional neural networks with cheap convolutions and online distillation .[J] arXiv preprint arXiv:1909.13063. 【code:EthanZhangYC/OD-cheap-convolution】
- Yuhang Li, Xin Dong, Wei Wang .Additive Powers-of-Two Quantization: A Non-uniform Discretization for Neural Networks .[J] arXiv preprint arXiv:1909.13144.
- Caiwen Ding, Shuo Wang, Ning Liu, Kaidi Xu, Yanzhi Wang, Yun Liang .REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for Object Detection on FPGAs .[J] arXiv preprint arXiv:1909.13396.
- NVIDIA TensorRT: Programmable Inference Accelerator;
- Tencent/PocketFlow: An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications;
- dmlc/tvm: Open deep learning compiler stack for cpu, gpu and specialized accelerators;
- Tencent/ncnn: ncnn is a high-performance neural network inference framework optimized for the mobile platform;
- pytorch/glow: Compiler for Neural Network hardware accelerators;
- NervanaSystems/neon: Intel® Nervana™ reference deep learning framework committed to best performance on all hardware;
- NervanaSystems/distiller: Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research (see the magnitude-pruning sketch after this list);
- MUSCO - framework for model compression using tensor decompositions (PyTorch)
- OAID/Tengine: Tengine is a lite, high performance, modular inference engine for embedded device;
- fpeder/espresso: Efficient forward propagation for BCNNs (see the sign-binarization sketch after this list);
- Tensorflow lite: TensorFlow Lite is an open source deep learning framework for on-device inference (see the post-training quantization sketch after this list);
- Core ML: Reduce the storage used by the Core ML model inside your app bundle;
- pytorch-tensor-decompositions: PyTorch implementation of [1412.6553] and [1511.06530] tensor decomposition methods for convolutional layers (a simpler truncated-SVD factorization sketch follows this list);
- tensorflow/quantize:
- mxnet/quantization: This folder contains examples of quantizing a FP32 model with Intel® MKL-DNN or CUDNN.
- TensoRT4-Example:
- NAF-tensorflow: "Continuous Deep Q-Learning with Model-based Acceleration" in TensorFlow;
- Mayo - deep learning framework with fine- and coarse-grained pruning, network slimming, and quantization methods
- Keras compressor - compression using low-rank approximations, SVD for matrices, Tucker for tensors.
- Caffe compressor: K-means based quantization (see the k-means weight-sharing sketch after this list)
- bhavanajain/research-paper-summaries
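
The sketches below are minimal, self-contained illustrations of the core techniques this list covers (pruning, binarization, quantization, low rank). They are generic examples under stated assumptions, with placeholder model sizes, paths, and hyperparameters; they are not the reference implementations of any paper or tool above. The first one shows unstructured magnitude pruning with PyTorch's built-in `torch.nn.utils.prune` utilities, the kind of per-layer sparsification that frameworks such as Distiller schedule automatically.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model; only the pruning calls matter here (the architecture is a placeholder).
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

# Zero out the 50% smallest-magnitude weights of every conv layer (unstructured).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# After (re)training with the masks in place, make the sparsity permanent.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.remove(module, "weight")

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"overall sparsity: {zeros / total:.2%}")
```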
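
Binarized networks (BCNNs) of the kind espresso accelerates use ±1 weights in the forward pass while keeping latent full-precision weights for the update; the usual training trick is a sign function with a straight-through estimator (STE) in the backward pass. A minimal PyTorch sketch of that trick, not any specific paper's training recipe:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator: the forward pass
    emits +1/-1 values, the backward pass lets gradients through unchanged
    wherever the latent full-precision weight lies in [-1, 1]."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # Straight-through: zero the gradient where |w| > 1, pass it otherwise.
        return grad_output * (w.abs() <= 1).float()

w = torch.randn(4, 4, requires_grad=True)   # latent full-precision weights
w_bin = BinarizeSTE.apply(w)                 # +1/-1 weights used in the forward pass
loss = (w_bin ** 2).sum()                    # toy loss, just to show gradients reach w
loss.backward()
print(w.grad)
```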
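
For deployment-oriented tools such as TensorFlow Lite, the most common entry point is post-training quantization applied at conversion time. A minimal sketch with the public TF 2.x converter API; the SavedModel path and output filename are placeholders:

```python
import tensorflow as tf

# Load a trained model from a SavedModel directory (path is an assumption).
converter = tf.lite.TFLiteConverter.from_saved_model("./saved_model")

# Enable the default optimization set, which applies post-training quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```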
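
The low-rank line of work (SVD for matrices, CP/Tucker for convolution tensors) factorizes a large weight into a product of smaller ones. The sketch below shows only the simplest variant, replacing one fully-connected layer by two thinner ones via truncated SVD; it is not the CP/Tucker conv decompositions implemented in pytorch-tensor-decompositions or MUSCO, and the layer sizes and rank are illustrative.

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace one Linear layer by two smaller ones via truncated SVD:
    W (out x in) ~= (U[:, :r] @ diag(S[:r])) @ Vh[:r, :], so the layer becomes
    Linear(in -> r, no bias) followed by Linear(r -> out, original bias)."""
    W = layer.weight.data                       # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                # fold singular values into the left factor
    V_r = Vh[:rank, :]

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data.copy_(V_r)
    second.weight.data.copy_(U_r)
    if layer.bias is not None:
        second.bias.data.copy_(layer.bias.data)
    return nn.Sequential(first, second)

# Usage: compress a 1024x1024 layer to rank 64 (numbers are illustrative).
layer = nn.Linear(1024, 1024)
compressed = factorize_linear(layer, rank=64)
x = torch.randn(8, 1024)
print(torch.dist(layer(x), compressed(x)))      # approximation error
```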
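
Codebook ("weight sharing") quantization of the kind the Caffe compressor entry refers to clusters a layer's weights with k-means and stores only a small codebook plus per-weight indices. A generic NumPy/scikit-learn sketch; the cluster count and the random "layer" are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(weights: np.ndarray, n_clusters: int = 16):
    """Cluster one layer's weights and replace each weight by its centroid;
    returns the quantized weights plus the codebook and per-weight indices
    that a deployment format would actually store."""
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    codebook = km.cluster_centers_.flatten()    # n_clusters full-precision values
    indices = km.labels_                        # one small integer index per weight
    quantized = codebook[indices].reshape(weights.shape)
    return quantized, codebook, indices

w = np.random.randn(64, 64).astype(np.float32)
w_q, codebook, idx = kmeans_quantize(w, n_clusters=16)
print("unique values after quantization:", np.unique(w_q).size)  # <= 16
```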
keyword:compress prun accelera distill binarization 'low rank' quantization