Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 11

Proceedings of the 2017 Federated Conference on Computer Science and Information Systems

Towards Real-time Motion Estimation in High-Definition Video Based on Points of Interest

DOI: http://dx.doi.org/10.15439/2017F417

Citation: Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 11, pages 67–70

Abstract. Motion estimation is currently based, as a rule, on the computation of optical flow from individual images or short sequences. As these methods do not require extraction of a visual description at the points of interest, correspondence can be deduced only from the positions of such points.
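The abstract's premise — deducing frame-to-frame correspondence from keypoint positions alone, without visual descriptors — can be illustrated with a toy sketch. The helper name `match_by_position`, the distance threshold, and the synthetic coordinates below are illustrative assumptions, not taken from the paper; a real pipeline would detect the points with a corner detector (e.g. FAST or ORB) and replace the brute-force search with a k-d tree for speed.

```python
import math

def match_by_position(prev_pts, curr_pts, max_dist=10.0):
    """Greedily pair each keypoint of the previous frame with the nearest
    unused keypoint of the current frame, using position only.
    Pairs farther apart than max_dist (pixels) are left unmatched."""
    matches = []
    used = set()
    for i, (px, py) in enumerate(prev_pts):
        best_j, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr_pts):
            if j in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            matches.append((i, best_j))
    return matches

# Synthetic example: three keypoints shifted by roughly (3, 1) pixels
prev_pts = [(10, 10), (50, 20), (80, 75)]
curr_pts = [(13, 11), (53, 21), (83, 76)]
print(match_by_position(prev_pts, curr_pts))  # → [(0, 0), (1, 1), (2, 2)]
```

The per-match displacement vectors then give a sparse motion field; the quadratic brute-force loop is the part a real-time, high-definition setting would need to replace with an approximate nearest-neighbour index.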
