An impact of tensor-based data compression methods on deep neural network accuracy
Jakub Grabek, Bogusław Cyganek
DOI: http://dx.doi.org/10.15439/2021F127
Citation: Position and Communication Papers of the 16th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 26, pages 3–11 (2021)
Abstract. In this article, an in-depth analysis of the influence of tensor-based lossy data compression on the performance of various deep neural architectures is presented. We show that the Tucker and the Tensor Train decomposition methods allow for very high compression ratios while retaining enough information in the compressed data to incur only a negligible drop in accuracy. The measurements were performed on the popular architectures AlexNet, ResNet, VGG, and MNASNet. Further augmentation of the tensor decompositions with the ZFP floating-point compression algorithm allows for finding optimal parameters and even higher compression ratios at the same recognition accuracy.
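The idea behind Tucker-based compression can be sketched with a minimal NumPy implementation of the truncated HOSVD: each mode of the data tensor is projected onto its leading singular vectors, and only the small core tensor plus the factor matrices are stored. The synthetic tensor, the ranks, and the helper names below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring the given mode to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # Mode-n product: contract matrix M with mode `mode` of tensor T.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def tucker_hosvd(T, ranks):
    # Truncated HOSVD: one factor matrix per mode from the leading
    # left singular vectors of the corresponding unfolding.
    factors = []
    for n, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for n, U in enumerate(factors):
        core = mode_dot(core, U.T, n)  # project onto the factor subspaces
    return core, factors

def reconstruct(core, factors):
    T = core
    for n, U in enumerate(factors):
        T = mode_dot(T, U, n)
    return T

# Synthetic low-rank 64x64x3 tensor standing in for an image.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))
T = np.stack([A, 0.5 * A, -A], axis=-1)

ranks = (8, 8, 3)  # illustrative multilinear ranks
core, factors = tucker_hosvd(T, ranks)
That = reconstruct(core, factors)

ratio = T.size / (core.size + sum(U.size for U in factors))
err = np.linalg.norm(T - That) / np.linalg.norm(T)
print(f"compression ratio: {ratio:.1f}x, relative error: {err:.2e}")
```

For this rank-8 example the stored core and factors occupy roughly a tenth of the original tensor, with near-exact reconstruction; on real images the chosen ranks trade compression ratio against reconstruction error, which is exactly the trade-off the paper measures against classification accuracy.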
References
- Cococcioni, M., et al. Novel arithmetics in deep neural networks signal processing for autonomous driving: Challenges and opportunities. In IEEE Signal Processing Magazine, 2020, 38.1: 97-110. http://dx.doi.org/10.1109/MSP.2020.2988436
- Cyganek, B. Object Detection and Recognition in Digital Images: Theory and Practice; John Wiley & Sons: New York, NY, USA, 2013. http://dx.doi.org/10.1002/9781118618387
- Kolda, T.; Bader, B. Tensor Decompositions and Applications. SIAM Rev. 2009, 51(3), 455–500. http://dx.doi.org/10.1137/07070111X
- Cyganek, B., Thumbnail Tensor—A Method for Multidimensional Data Streams Clustering with an Efficient Tensor Subspace Model in the Scale-Space, Sensors, 19(19), 4088, 2019, http://dx.doi.org/10.3390/s19194088
- Li, J., Liu, Z., Multispectral transforms using convolution neural networks for remote sensing multispectral image compression. In Remote Sensing, 11(7): 759, 2019. http://dx.doi.org/10.3390/rs11070759
- Choi, Y., El-Khamy, M., Lee, J., Universal deep neural network compression. In IEEE Journal of Selected Topics in Signal Processing, 14.4, 2020, pp. 715-726. http://dx.doi.org/10.1109/JSTSP.2020.2975903
- Przyborowski M., et al. Toward Machine Learning on Granulated Data – a Case of Compact Autoencoder-based Representations of Satellite Images. In 2018 IEEE International Conference on Big Data (Big Data), 2018, pp. 2657-2662, http://dx.doi.org/10.1109/BigData.2018.8622562.
- Wang, N.; Yeung, D. Y., Learning a deep compact image representation for visual tracking. In Advances in Neural Information Processing Systems, 2013.
- Lindstrom, P., Fixed-Rate Compressed Floating-Point Arrays. In IEEE Transactions on Visualization and Computer Graphics 20(12) 2014, pp. 2674-2683, http://dx.doi.org/10.1109/TVCG.2014.2346458
- Ziv, J., Lempel, A., Compression of individual sequences via variable-rate coding. In IEEE transactions on Information Theory, 1978, 24.5: 530-536. http://dx.doi.org/10.1109/TIT.1978.1055934
- Cyganek, B., A Framework for Data Representation, Processing, and Dimensionality Reduction with the Best-Rank Tensor Decomposition. Proceedings of the ITI 2012 34th International Conference Information Technology Interfaces, June 25-28, 2012, Cavtat, Croatia, pp. 325-330, http://dx.doi.org/10.2498/iti.2012.0466, 2012.
- De Lathauwer, L.; De Moor, B.; Vandewalle, J. On the best rank-1 and rank-(R1, R2,..., Rn) approximation of higher-order tensors. Siam J. Matrix Anal. Appl. 2000, 21, 1324–1342. http://dx.doi.org/10.1137/S0895479898346995
- Ballé, J., Laparra, V., Simoncelli, E. P., End-to-end optimized image compression. In arXiv preprint https://arxiv.org/abs/1611.01704, 2016.
- Zhang, L., et al. Compression of hyperspectral remote sensing images by tensor approach. In Neurocomputing, 147, 2015, pp. 358-363. http://dx.doi.org/10.1016/j.neucom.2014.06.052
- Aidini, A., Tsagkatakis, G., Tsakalides, P., Compression of high-dimensional multispectral image time series using tensor decomposition learning. In: 2019 27th European Signal Processing Conference (EUSIPCO). IEEE, 2019. p. 1-5. http://dx.doi.org/10.23919/EUSIPCO.2019.8902838
- Watkins, Y. Z., Sayeh, M. R., Image data compression and noisy channel error correction using deep neural network. In Procedia Computer Science, 95, 2016, pp. 145-152. http://dx.doi.org/10.1016/j.procs.2016.09.305
- Friedland, G., et al. On the Impact of Perceptual Compression on Deep Learning. In 2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2020, p. 219-224. http://dx.doi.org/10.1109/MIPR49039.2020.00052
- Dejean-Servières, M., et al. Study of the impact of standard image compression techniques on performance of image classification with a convolutional neural network. PhD Thesis, INSA Rennes; Univ Rennes; IETR; Institut Pascal, 2017.
- Ullrich, K., Meeds, E., Welling, M., Soft weight-sharing for neural network compression. In arXiv preprint https://arxiv.org/abs/1702.04008, 2017.
- Jin, S., et al. DeepSZ: A novel framework to compress deep neural networks by using error-bounded lossy compression. In Proceedings of the 28th International Symposium on High-Performance Parallel and Distributed Computing, 2019, pp. 159-170. http://dx.doi.org/10.1145/3307681.3326608
- Deng, Lei, et al. Model compression and hardware acceleration for neural networks: A comprehensive survey. In Proceedings of the IEEE, 2020, 108.4: 485-532. http://dx.doi.org/10.1109/JPROC.2020.2976475
- Muti, D.; Bourennane, S. Multidimensional filtering based on a tensor approach. Signal Process. 2005, 85, 2338–2353. http://dx.doi.org/10.1016/j.sigpro.2004.11.029
- Cyganek, B.; Smołka, B. Real-time framework for tensor-based image enhancement for object classification. Proc. SPIE 2016, 9897, 98970Q. http://dx.doi.org/10.1117/12.2227797
- Cyganek, B.; Krawczyk, B.; Wozniak, M. Multidimensional Data Classification with Chordal Distance Based Kernel and Support Vector Machines. Eng. Appl. Artif. Intell. 2015, 46, 10–22. http://dx.doi.org/10.1016/j.engappai.2015.08.001
- Cyganek, B.; Wozniak, M. Tensor-Based Shot Boundary Detection in Video Streams. New Gener. Comput. 2017, 35, 311–340. http://dx.doi.org/10.1007/s00354-017-0024-0
- Marot, J.; Fossati, C.; Bourennane, S. Fast subspace-based tensor data filtering. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3869–3872. http://dx.doi.org/10.1109/ICIP.2009.5414048
- Khoromskij, B. N., Khoromskaia, V., Multigrid accelerated tensor approximation of function related multidimensional arrays. In SIAM J. Sci. Comput., 31, 2009, pp. 3002–3026. http://dx.doi.org/10.1137/080730408
- Oseledets, I. V., Savostianov, D. V., Tyrtyshnikov, E. E., Tucker dimensionality reduction of three-dimensional arrays in linear time. In SIAM J. Matrix Anal. Appl., 30, 2008, pp. 939–956. http://dx.doi.org/10.1137/060655894
- Lee, N., Cichocki, A., Fundamental tensor operations for large-scale data analysis using tensor network formats. In Multidimensional Syst. Signal Process., vol. 29, no. 3, 2017, pp. 921–960 http://dx.doi.org/10.1007/s11045-017-0481-0
- Hübener, R., Nebendahl, V., Dür, W., Concatenated tensor network states. In New J. Phys., 12, 2010, 025004. http://dx.doi.org/10.1088/1367-2630/12/2/025004
- Van Loan, C. F., Tensor network computations in quantum chemistry. Technical report, available online at www.cs.cornell.edu/cv/OtherPdf/ZeuthenCVL.pdf, 2008.
- Oseledets, I., Tensor-Train Decomposition. In SIAM J. Scientific Computing. 33., 2011, pp. 2295-2317. http://dx.doi.org/10.1137/090752286.
- Lemley, J., Deep Learning for Consumer Devices and Services: Pushing the limits for machine learning, artificial intelligence, and computer vision. In IEEE Consumer Electronics Magazine vol. 6, Iss. 2; 2017 http://dx.doi.org/10.1109/MCE.2016.2640698
- Krizhevsky, A., Sutskever, I., Hinton, G. E., ImageNet classification with deep convolutional neural networks. In Communications of the ACM, 60(6), 2017, pp. 84–90. http://dx.doi.org/10.1145/3065386
- Simonyan, K., Zisserman, A. Very deep convolutional networks for large-scale image recognition. In arXiv preprint https://arxiv.org/abs/1409.1556. 2014
- He, Kaiming, et al. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition 2016, pp. 770-778 http://dx.doi.org/10.1109/CVPR.2016.90
- Krizhevsky, A., et al., ImageNet classification with deep convolutional neural networks. In Proc. 25th Int. Conf. Neural Inf. Process. Syst. (NIPS), vol. 1., Red Hook, NY, USA: Curran Associates, 2012, pp. 1097–1105. http://dx.doi.org/10.1145/3065386
- Simonyan, K., Zisserman, A., Very deep convolutional networks for large-scale image recognition. In Proc. 3rd Int. Conf. Learn. Represent. (ICLR), San Diego, CA, USA, Y. Bengio and Y. LeCun, Eds., 2015, pp. 1–14.
- Xie, S., et al., Aggregated residual transformations for deep neural networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 5987–5995. http://dx.doi.org/10.1109/CVPR.2017.634
- Szegedy, C., et al., Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proc. 31st AAAI Conf. Artif. Intell., San Francisco, CA, USA, S. P. Singh and S. Markovitch, Eds., 2017, pp. 4278–4284.
- Tan, M., et al. MnasNet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2820-2828. http://dx.doi.org/10.1109/CVPR.2019.00293
- Kossaifi, J.; Panagakis, Y.; Kumar, A.; Pantic, M. TensorLy: Tensor Learning in Python. arXiv preprint 2018, https://arxiv.org/abs/1610.09555.
- Howard, J., imagenette dataset, https://github.com/fastai/imagenette/