Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 21

Proceedings of the 2020 Federated Conference on Computer Science and Information Systems

Interpolation merge as augmentation technique in the problem of ship classification


DOI: http://dx.doi.org/10.15439/2020F11

Citation: Proceedings of the 2020 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 21, pages 443–446


Abstract. A common problem when training a classifier is the small number of samples in the training database, which can significantly affect the results obtained. To increase the number of samples, data augmentation can be used: new samples are generated from existing ones, most often by simple transformations. In this paper, we propose a new approach that generates such samples using image processing techniques and a discrete interpolation method. The described technique creates a new image sample from at least two others belonging to the same class. To verify the proposed approach, we performed tests using different convolutional neural network architectures on the ship classification problem.
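The core idea of the abstract — synthesizing a new training sample from at least two images of the same class — can be sketched as pixel-wise linear interpolation between two images. This is a hypothetical illustration of the general approach only, not the paper's exact discrete interpolation method; the function name `interpolation_merge` and the blending parameter `alpha` are assumptions introduced for the example.

```python
import numpy as np

def interpolation_merge(img_a, img_b, alpha=0.5):
    """Blend two same-class images into one new synthetic sample.

    Hypothetical sketch: pixel-wise linear interpolation between two
    images of identical shape. The paper's actual discrete
    interpolation technique may combine the inputs differently.
    """
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    merged = (1.0 - alpha) * a + alpha * b
    # Keep the result in the valid pixel range and original dtype.
    return merged.clip(0, 255).astype(img_a.dtype)

# Example: augment a class by blending a random same-class pair.
rng = np.random.default_rng(0)
ships = rng.integers(0, 256, size=(10, 64, 64, 3), dtype=np.uint8)
i, j = rng.choice(len(ships), size=2, replace=False)
new_sample = interpolation_merge(ships[i], ships[j], alpha=0.5)
print(new_sample.shape)  # same shape as the input images
```

With `alpha=0.5` the new sample is the midpoint of the two inputs; drawing `alpha` at random per pair would yield many distinct synthetic samples from the same image set.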
