Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 8

Proceedings of the 2016 Federated Conference on Computer Science and Information Systems

Mouth features extraction for emotion classification


DOI: http://dx.doi.org/10.15439/2016F390

Citation: Proceedings of the 2016 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 8, pages 1685–1692 (2016)


Abstract. Facial emotion analysis is one of the fundamental techniques that may be exploited in natural human-computer interaction, and it is therefore one of the most studied topics in the current computer vision literature. Consequently, face feature extraction is an indispensable element of facial emotion analysis, as it influences decision-making performance. This paper concentrates on the classification of human emotions based on the mouth region, which, next to the eye region, is one of the most representative face regions in the context of emotion retrieval. Additionally, an original, gradient-based mouth feature extraction method is presented. The method was evaluated on a subset of the Yale face images database, and the classification accuracy for a single emotion exceeds 70%.
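The abstract does not specify the exact descriptor, so the following is only a minimal sketch of what a gradient-based feature vector for a grayscale mouth region might look like: Sobel-style finite-difference gradients summarized as a magnitude-weighted orientation histogram (a HOG-like summary). The function name, histogram binning, and the synthetic test patch are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def mouth_gradient_features(roi, bins=8):
    """Illustrative gradient-based descriptor for a grayscale mouth ROI.

    Assumption for this sketch: gradients via simple finite differences,
    summarized as a magnitude-weighted histogram of orientations.
    The paper's actual descriptor may differ.
    """
    roi = roi.astype(float)
    # Horizontal and vertical differences on the interior of the ROI.
    gx = roi[1:-1, 2:] - roi[1:-1, :-2]
    gy = roi[2:, 1:-1] - roi[:-2, 1:-1]
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # orientation folded into [0, pi)
    # Magnitude-weighted orientation histogram as the feature vector.
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Usage on a synthetic 20x30 "mouth" patch containing a horizontal edge:
patch = np.zeros((20, 30))
patch[10:, :] = 255.0  # dark upper half, bright lower half
features = mouth_gradient_features(patch)
print(features.shape)  # an 8-bin orientation histogram
```

A fixed-length vector like this can be fed directly to any of the standard classifiers used in emotion recognition work (e.g. random forests, k-NN, or SVMs).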

