Proceedings of the 18th Conference on Computer Science and Intelligence Systems

Annals of Computer Science and Information Systems, Volume 35

Automatic Colorization of Digital Movies using Decolorization Models and SSIM Index

DOI: http://dx.doi.org/10.15439/2023F3017

Citation: Proceedings of the 18th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 35, pages 843–853 (2023)

Abstract. Re-colorization of images or movies is a challenging problem, since a monochrome object admits infinitely many RGB solutions. In general, the process is assisted by humans, either by providing colorization hints or by supplying relevant training data for ML/AI algorithms. Our intention is to develop a mechanism for fully unguided colorization of movies, with no training data used. In other words, we aim to create acceptable color counterparts of movies in domains where only monochrome visualizations physically exist (e.g. IR, UV, or MRI data). Following our past approach to image colorization, the method assumes arbitrary rgb2gray models and utilizes a few probabilistic heuristics. Additionally, we maintain the temporal stability of colorization by locally applying the structural similarity (SSIM) index between adjacent frames. The paper explains the details of the method, presents exemplary results and compares them to state-of-the-art solutions.
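As a rough illustration of two ingredients named in the abstract, an assumed rgb2gray (decolorization) model and SSIM-based temporal stabilization, the Python sketch below reuses the previous frame's colors wherever the local SSIM map between adjacent grayscale frames is high, and recolorizes the remaining regions. The BT.709 luma weights, the recolorize callback and the threshold thr are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): temporal color
# propagation gated by local SSIM between adjacent grayscale frames.
import numpy as np
from skimage.metrics import structural_similarity

# One standard rgb2gray (decolorization) model: ITU-R BT.709 luma weights.
BT709 = np.array([0.2126, 0.7152, 0.0722])

def decolorize(rgb):
    """Map an RGB frame of shape (H, W, 3) in [0, 1] to its grayscale counterpart."""
    return rgb @ BT709

def propagate_colors(gray_prev, gray_curr, color_prev, recolorize, thr=0.9):
    """Reuse the previous frame's colors where the local SSIM map is high;
    fall back to a per-frame colorization routine (recolorize) elsewhere.
    thr is a hypothetical stability threshold, not a value from the paper."""
    _, ssim_map = structural_similarity(
        gray_prev, gray_curr, data_range=1.0, full=True)
    stable = ssim_map >= thr                 # locally similar regions
    fresh = recolorize(gray_curr)            # unguided colorization of the new frame
    return np.where(stable[..., None], color_prev, fresh)
```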
